Stephen Early
University of Cambridge
July 1997
This document describes a port of the Nemesis operating system to Intel Pentium-based platforms. The majority of personal computers sold are based on Pentium-compatible processors, and share the same system architecture (commonly known as the `PC' architecture).
Some background on the nature of Pentium systems will be described, and an introduction will be given to relevant parts of the processor and PC architecture. Then the Pentium port itself will be described, setting out the various design decisions in the context of these architectural features. This is followed by a description of the tools used in the port.
Finally I conclude with a description of some of the issues raised by the Pentium work and discuss their effects on the continuing development of Nemesis.
The Intel Pentium is a 32-bit processor which has a superset of the features of the earlier 8086, 80186, 80286, 80386 and 80486 processors. It is used as the processor in most PC architecture machines currently on the market. See section 3 for a description of the PC architecture.
Like all of the members of the Intel Architecture family of processors, the Pentium preserves binary compatibility with earlier members of the family. However, in order to obtain the best performance different optimisations must be made in both the operating system design and compiled code.
The Pentium and Pentium Pro are described in detail in [2,3].
As of the 80286, the Intel architecture supports two distinct modes of operation known as real-address mode and protected mode. Real-address mode is provided for backwards compatibility with earlier Intel architecture processors, and is the default mode on initialisation. Protected mode is the native operating mode of the processor, and allows all of the instructions and architectural features to be used. All of the following sections describe the behaviour of the processor while it is in protected mode.
The Pentium has a segmented address space. Memory references for code, data and stack are made through the appropriate segment registers. These registers contain an index into one of two tables, the global descriptor table or the local descriptor table. The virtual address within the segment is translated using the information in the descriptor to a linear address. Finally the linear address is translated using the page tables to a physical address.
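The two-stage translation can be sketched in C++; the structure and function names below are illustrative only, and a toy single-level page map stands in for the real multi-level page tables:

```cpp
#include <cassert>
#include <cstdint>
#include <unordered_map>

// Sketch of virtual -> linear -> physical translation as described
// above. Names are invented for illustration.
struct SegmentDescriptor {
    uint32_t base;   // linear address of the start of the segment
    uint32_t limit;  // segment length in bytes
};

// Toy "page table": maps 4 KiB linear page numbers to physical frames.
std::unordered_map<uint32_t, uint32_t> page_table;

uint32_t virt_to_linear(const SegmentDescriptor& seg, uint32_t offset) {
    assert(offset <= seg.limit);   // limit check, as the descriptor requires
    return seg.base + offset;      // segment translation
}

uint32_t linear_to_phys(uint32_t linear) {
    uint32_t page = linear >> 12;  // 4 KiB pages
    uint32_t off  = linear & 0xFFF;
    return (page_table.at(page) << 12) | off;  // page translation
}
```

A segment at linear base 0x10000, for example, maps virtual offset 0x234 to linear 0x10234, which the page map then relocates to whatever frame holds linear page 0x10.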
It is necessary to define at least two segment descriptors to enable code to run in protected mode; one for code access and one for data and stack access. If protection is to be implemented then four descriptors must be defined; two for `user mode' and two for `kernel mode' memory accesses.
The linear base addresses and lengths of the global and local descriptor tables are stored in two registers, the GDTR and the LDTR. These registers can only be changed when the processor is in its most privileged mode.
The Pentium recognises four privilege levels, or `rings' numbered 0-3. Level 0 is the most highly privileged level. The current privilege level is determined by the privilege bits in the current code segment selector.
Coarse-grained control over access to memory can be gained using bits in segment descriptors. These can describe segments as read/write, read only, or execute only, as well as having some other attributes such as `expand-down' and an `accessed' flag. The accessibility of segment descriptors is determined by the descriptor privilege level; this is compared with the requestor privilege level and the current privilege level when an attempt is made to load a selector into a segment register. If an invalid request is made then a protection exception is generated.
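For a data-segment load this comparison follows the usual IA-32 rule: the load succeeds only if the descriptor privilege level is numerically greater than or equal to both the current and requestor privilege levels (larger numbers mean less privilege). A minimal sketch, with an invented helper name:

```cpp
#include <algorithm>
#include <cassert>

// Data-segment selector load check, as described above: access is
// permitted only if DPL >= max(CPL, RPL). The function name is
// illustrative, not a processor or Nemesis interface.
bool data_segment_load_ok(int cpl, int rpl, int dpl) {
    return dpl >= std::max(cpl, rpl);
}
```

So ring 3 code can load a DPL 3 selector but not a DPL 0 one, while ring 0 code can load either.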
Finer grained control over access is managed using the page tables. Each page table entry has two bits which control access to the page; one bit restricts access based on the current privilege level, and the other is a write-protect flag.
The current privilege level also controls access to IO ports, and the ability to use some registers and instructions.
The Pentium insists on the concept of the current task. A data structure called the `task state segment' (TSS) holds information about the task. Task state segments are accessed through entries in the global descriptor table.
The TSS holds enough information to be able to restore a task. Part of it may be written to by the processor; this part holds the general purpose registers, the current segment selectors, the EFLAGS register, the instruction pointer and a field to link to the `previous' TSS. The other part is set up by the operating system, and holds a variety of information:
The TR register holds information about the current TSS. It can be loaded with a TSS descriptor using the LTR instruction. Internally the processor caches the linear address of the base of the TSS; this is not accessible to software.
A number of types of descriptor are valid in the interrupt descriptor table, but Nemesis only uses interrupt gate descriptors. When an interrupt occurs and an interrupt gate descriptor is found by the processor, interrupts are disabled, the stack is switched to the appropriate stack for the privilege level of the descriptor (always 0 in Nemesis), and the handler specified in the descriptor is called.
Registers in the Intel architecture can be divided into two main groups; those used by user-level code, and those used for system management. There is one register, the EFLAGS register, that has some bits that are used by user-level code, and some that can only be modified by privileged code.
The system registers are shown in Table 1, and the user registers are shown in Table 2. Note that it is possible to refer to parts of the four general purpose registers EAX-EDX by calling them AX, BX, etc. to access the low 16 bits, and AH, AL, BH, BL, etc. to access the upper and lower 8 bits of the low 16 bits. This is for compatibility with the 80286 and earlier processors.
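The aliasing can be illustrated with masks and shifts; the helper names below are invented for illustration:

```cpp
#include <cassert>
#include <cstdint>

// Sub-register aliasing as described above: AX is the low 16 bits of
// EAX; AH and AL are the high and low bytes of AX.
uint16_t ax_of(uint32_t eax) { return eax & 0xFFFF; }
uint8_t  ah_of(uint32_t eax) { return (eax >> 8) & 0xFF; }
uint8_t  al_of(uint32_t eax) { return eax & 0xFF; }
```

With EAX holding 0x12345678, AX is 0x5678, AH is 0x56 and AL is 0x78.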
From the point of view of a Nemesis port, the PC architecture has two interesting features:
Part of the I/O and memory spaces address devices on an ISA bus. While any devices on this bus may be add-in cards, there are several devices which are expected to be present, and are vital to the operation of the machine:
Information on the above devices is available in manufacturers' data sheets. It is also available in books on the PC architecture; one used during the development of Nemesis is [5].
A PC can be booted in many different ways. The usual methods involve loading a sector (512 bytes) from either floppy disk or hard disk into memory and running it. Alternatively control can be passed to code in a BIOS extension ROM on a plug-in card like a network card.
No matter how the initial code is loaded, control is passed to it with the processor in real-address mode. This is to retain compatibility with legacy operating systems like MS-DOS. The code is responsible for loading the rest of the boot loader using BIOS calls to access the boot device. The boot loader can then load the operating system image and start it.
Several adequate boot loaders have already been written, and are available under the GNU General Public License. Many of these were designed to load Linux, so the Nemesis operating system image file has been made compatible with Linux operating system images.
A Nemesis image has three sections, referred to as the boot sector, the setup code and the system image. If an image is written directly to a floppy disk then it will load itself and run when the floppy is booted. Alternatively, another loader program can be used to load the setup code and system image from other media.
If the image is being booted from floppy then the BIOS loads the first 512 bytes at 0x7c00 and jumps to it in real-address mode. This code copies itself to 0x90000, loads the setup code at 0x90200 and the system image at 0x100000.
If the image is being loaded by some other loader, that loader reads the setup code size from a well-known location in the boot sector, loads the setup code at 0x90200 and the system image at 0x100000. The setup code is then jumped to in real-address mode.
The setup code stores some values from the BIOS, like memory size and hard disk parameters, in well-known locations starting at 0x90000, sets up the two 8259A interrupt controllers, switches to protected mode and jumps to the start of the third section.
In the Computer Laboratory we originally used a simple network boot loader program to start Nemesis: the boot loader was loaded from floppy disk, and then used bootp and tftp to load a Nemesis image over the network. This process was rather slow, so now a pre-built Nemesis kernel is loaded from the hard disk of the test machine using LILO [1]. This kernel loads another Nemesis image using either NFS or TFTP and starts it using a chain system call that was added for this purpose.
There are four main components which require consideration when porting Nemesis to a new processor. These are initialisation, the NTSC interface (system calls), interrupts and timer code. These will now be described.
When the 32-bit protected mode code is entered, the processor is not in a suitable state to run Nemesis. The initialisation code in the NTSC sets up a GDT with seven entries (three code segment descriptors, three corresponding data segment descriptors, and a TSS descriptor). The TSS is initialised minimally; only the ring 0 stack segment selector and base address fields are used. The IDT is initialised with descriptors for all of the processor internal exceptions, hardware interrupts, and system calls. Finally, the generic `Primal' routine is called in user mode to continue initialisation.
Currently the processor is left in physical address mode when Primal is started; it is up to the Intel-specific memory management code in user space to enable virtual addressing. This may change in the future, when new memory management code is integrated with the Pentium port.
Console output from the NTSC is provided using a trivial serial driver that accesses a UART in polled mode. Use of this serial driver involves a busy wait in the NTSC with interrupts disabled, and so is only used when it is the only means by which information can be output.
It is possible to access the NTSC console output code from user mode using a system call. This is useful in two situations: firstly, during system startup before the serial driver has been initialised; secondly, during domain initialisation before the domain has had a chance to establish IDC connections.
When the video BIOS has finished initialisation the graphics chipset is left in a character text mode with the start of screen memory at a well-known address. It is possible to use the display without any further initialisation. The current NTSC puts a banner at the top of the screen to enable people physically at the console to see which image the machine is running.
During initialisation, the two 8259 interrupt controllers are programmed to map the 16 possible interrupts to vectors 32-47. The handlers for those interrupts are all very similar; they call the k_irq() routine with the interrupt number as an argument.
k_irq() performs a few sanity checks (making sure that an interrupt didn't occur while interrupts were supposed to be disabled, for example), masks the interrupt in the 8259 and finally acknowledges it. This prevents the interrupt from occurring again until the appropriate driver has had a chance to deal with the device. The interrupt is looked up in a table, and the appropriate interrupt stub is called, if one has been registered. The stub is passed the address of the k_event() routine and a pointer to its private data. k_event() can be used to send an event to the appropriate device driver domain.
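A rough sketch of this dispatch path follows, with invented C++ types standing in for the NTSC's structures, and the hardware-specific 8259 masking and acknowledgement reduced to a comment:

```cpp
#include <array>
#include <cassert>

// Illustrative sketch of k_irq()'s dispatch as described above; the
// types and table layout are assumptions, not the NTSC definitions.
using event_fn = void (*)(void*);
struct stub_t {
    void (*fn)(event_fn, void*); // registered interrupt stub
    void* priv;                  // stub's private data
};
std::array<stub_t, 16> irq_table{}; // IRQs 0-15, mapped to vectors 32-47
int events_sent = 0;

void k_event(void*) { ++events_sent; } // toy stand-in for event delivery

void my_stub(event_fn ev, void* priv) { ev(priv); } // example driver stub

void k_irq(int irq) {
    // (real code: sanity checks, mask the IRQ in the 8259, send EOI)
    stub_t& s = irq_table[irq];
    if (s.fn) s.fn(k_event, s.priv);  // call the stub, if one is registered
}
```

Registering `my_stub` for IRQ 5 and raising IRQ 5 delivers one event; an unregistered IRQ is silently ignored.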
Some system calls can only be made by privileged domains. Access to these is controlled by the DPL field in the interrupt descriptor. Normal Nemesis domains run at ring 3; privileged domains spend part of their time running at ring 2, and must be in this state in order to make privileged system calls.
Interrupt gate descriptors are used in the IDT for system calls, so interrupts are disabled automatically during system calls and NTSC code is run on the NTSC stack. Almost all of the system call stubs call the save_context routine to store the processor context that the processor has left in registers and on the stack in the appropriate context slot. They then call the k_syscall() routine with the system call number as an argument.
Eventually it is intended to implement some of the system calls directly in assembler, so that the call to C and, for some calls, the context save may be omitted.
The PC platform has a number of timers as standard. There is an 8254 programmable timer chip, and a DS1287a real-time clock chip that can be programmed to generate interrupts at a particular rate.
Initial work on Intel Nemesis programmed the real-time clock chip to generate interrupts at 8192Hz (its fastest possible rate) to keep the notion of `current time' up to date, and attempted to use the programmable timer as an interval timer. This failed because of interrupt priority problems; the programmable timer is wired to interrupt 0, the highest priority interrupt, and the real-time clock is wired to interrupt 8. The scheduler would occasionally get into a state where it asked for an interrupt after a very small interval of time. The interval timer interrupt would occur almost immediately, but because the scheduler's idea of `current time' had not changed the same small interval would be requested again. The continual processing of interval timer interrupts prevented ticker interrupts from being dealt with.
The first working implementation of the timer ignored the real-time clock chip, and programmed the other timer to generate interrupts at 8192Hz. Interval timing was performed in software, with a minimum interval of 122.07µs.
Starting with Pentium processors, Intel introduced the rdtsc instruction. This returns a 64-bit time stamp. If this instruction is present then more accurate timer code can be used. A calibration is performed at NTSC initialisation time to determine the number of picoseconds per single time stamp. The real-time clock chip is then programmed to generate interrupts at 2Hz. The handler for the real-time clock interrupt records the value of the time stamp counter at the time of the interrupt, and the current scheduler time. Whenever the current scheduler time needs to be known, it is calculated using the value stored at the last ticker interrupt and the current value of the time stamp counter. This enables the current scheduler time to be determined very accurately.
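The calculation can be sketched as follows; the function and parameter names are assumptions, not the actual NTSC interface:

```cpp
#include <cassert>
#include <cstdint>

// Current scheduler time, as described above: the time recorded at the
// last 2 Hz ticker interrupt plus the elapsed time-stamp counts scaled
// by the calibrated picoseconds-per-count value.
uint64_t now_ns(uint64_t tick_time_ns, uint64_t tsc_at_tick,
                uint64_t tsc_now, uint64_t ps_per_count) {
    uint64_t elapsed_ps = (tsc_now - tsc_at_tick) * ps_per_count;
    return tick_time_ns + elapsed_ps / 1000;  // picoseconds -> nanoseconds
}
```

For example, 200 counts at 5000 ps per count adds exactly 1000 ns to the time stored at the last tick.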
Using the low-frequency ticker and time stamp counter frees up the programmable timer, so this can now be used once more as an interval timer for the scheduler. We have found that the timer is unreliable if intervals below 1µs are requested, so this has been made the minimum possible interval in software.
There are various pieces of assembler code which need to be written for user space Nemesis code. These include the system call stubs, the thread startup code, and setjmp()/longjmp(). All of these were straightforward.
The current `ring' is determined by the privilege level of the current code segment selector. Nemesis defines three of these, which are identical apart from the privilege level. Levels 0, 2 and 3 are defined.
User space code usually runs in ring 3. However, if a domain has the kernel privilege (`k') flag set in the read-only part of its control block, it can use a system call to increase its privilege to ring 2. This enables the code in the domain to use privileged system calls and access any part of the virtual address space.
Instead we have defined a page of memory at a well-known virtual address to contain `public' NTSC data. A macro is provided to access data in this area. The following are some of the things included in the PIP:
User-level code uses the pervasives to fetch commonly-used pointers like the Event system closure and the root of the thread's namespace. The alternative would be to look these up in the namespace each time they were required, but then of course the pointer to the root of the namespace would have to be passed as a parameter to every procedure.
On most architectures, PVS() is implemented using compiler options to make it access a designated register directly. On Intel this is not sensible because there are very few general purpose registers available. Instead, the current Pervasives register value is stored in the read/write section of the DCB. The PVS() macro accesses this value by dereferencing the pointer to the current DCBRW that is stored in the PIP (see section 4.5).
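A minimal sketch of this double indirection follows, assuming invented structure names; the real PIP, DCBRW and Pervasives layouts are defined elsewhere in Nemesis:

```cpp
#include <cassert>

// Illustrative layout only: the PIP holds a pointer to the current
// domain's read/write DCB section, which in turn holds the current
// Pervasives pointer. PVS() dereferences through both.
struct Pervasives { int dummy; };
struct DCBRW { Pervasives* pvs; };
struct PIPPage { DCBRW* current_dcbrw; };

PIPPage pip;  // stand-in for the page at the well-known virtual address
#define PVS() (pip.current_dcbrw->pvs)
```

Because the PIP address is fixed, the macro costs two dependent loads rather than a dedicated register.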
The context save and restore code in the NTSC, and the implementation of setjmp/longjmp have been modified for Intel Nemesis to treat the Pervasives register as part of the current processor context.
Initial work on the Pentium port of Nemesis has been done with a one-to-one mapping between virtual and physical addresses. The processor's paging mechanism has been used only to provide memory protection.
Context switches and protection domain switches occur very often in Nemesis, so the implementation attempts to minimise the number of TLB flushes as much as possible. The processor's page table is initialised with the global permissions for each page. When a domain attempts an access to a page that requires more than the global permissions, a page fault occurs and the NTSC can alter the page table to allow the access. A list of all the pages modified in this fashion is kept, and when the protection domain is next switched the list is used to return the page table to its default state and flush only those TLB entries which are affected.
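The bookkeeping can be simulated as follows; the names and the flat page-table representation are illustrative, not the NTSC's actual data structures:

```cpp
#include <cassert>
#include <cstdint>
#include <unordered_map>
#include <vector>

// Simulation of the lazy permission-widening scheme described above.
std::unordered_map<uint32_t, int> pt;        // page -> current permissions
std::unordered_map<uint32_t, int> global_pt; // page -> global permissions
std::vector<uint32_t> widened;               // pages altered by faults
int tlb_flushes = 0;                         // per-entry flushes performed

void page_fault(uint32_t page, int needed) {
    pt[page] = needed;        // grant the faulting domain's extra rights
    widened.push_back(page);  // remember the change for the next PD switch
}

void switch_protection_domain() {
    for (uint32_t p : widened) {
        pt[p] = global_pt[p]; // restore the global permissions
        ++tlb_flushes;        // flush only this page's TLB entry
    }
    widened.clear();
}
```

Only the pages that actually faulted since the last switch are touched, so a protection domain switch avoids a full TLB flush.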
A floating point context on Intel is large relative to the standard Intel context, and takes a relatively long time to save and restore. Very little code in Nemesis uses the floating point unit, so it is useful to defer floating point context save and restore until it is known that it will be needed.
When a context switch is performed, a flag is set in CR0 which will make the processor generate a Device Unavailable exception whenever a floating point instruction is encountered. The NTSC traps this exception and performs the floating point context switch.
Once the NTSC has noticed that a domain is performing floating point operations, a flag is set in the domain's DCB. User space code like setjmp() and longjmp() can use this flag to decide whether to bother saving and restoring floating point state.
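The decision user-space code makes can be sketched like this, assuming an invented flag and context layout:

```cpp
#include <cassert>

// Illustrative only: setjmp()-style code checks the DCB flag the NTSC
// sets on the first FPU trap, and saves floating point state only when
// the domain is known to use the FPU.
struct Context { bool fpu_saved; /* ... integer registers ... */ };
bool domain_uses_fpu = false;  // stand-in for the flag in the DCB

void save_context(Context& c) {
    c.fpu_saved = false;
    if (domain_uses_fpu) {
        // an fsave/fxsave of the FPU state would go here
        c.fpu_saved = true;
    }
}
```

Domains that never touch the FPU thus never pay for the large floating point save.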
In Nemesis the processor features are read during NTSC initialisation, and are recorded in the PIP (section 4.5). Currently the main users of this information are the timer code (section 4.2.5), which changes behaviour depending on whether the rdtsc instruction exists, and the accounting code which also uses rdtsc. However, future user-space programs may read this information to detect the presence of architecture extensions like MMX.
The current status is that the core of the Nemesis system runs on Pentium-based machines. Memory protection is provided, but paging is not. The following device drivers exist and have been tested:
It was discovered early on in the port that the previous version of Nemesis made several assumptions about a 64-bit word size. The type system uses 64-bit values for typecodes, and the `Type.Any' type has a 64-bit pointer field. This caused problems with the compiler and linker, which could not extend a 32-bit value to 64 bits at build time.
The manipulation of time as a 64-bit quantity in the scheduler is inefficient. This has led to consideration of restructuring time within Nemesis so that the NTSC deals entirely in whatever the natural resolution of the machine is, and library code is provided for applications.
This would be a significant change and so may never actually be effected. Further evaluation will be performed when profiling is available later in the project.
The tools used for the Pentium port are GCC version 2.7.2, and GNU binutils version 2.7. The nembuild program used to create Nemesis kernel images is written using BFD 2.7.0.2. The intelbuild program used to join the three parts of the Nemesis image file together is derived from Linux. All of the tools are hosted on Intel Linux.
Machines based on the Pentium and other Intel architecture processors are important because they are commonly and cheaply available. Nemesis has been ported to these machines. Further work and evaluation of this port will continue throughout the Pegasus II project. In particular the memory management system for Nemesis is being designed with Intel processors in mind along with Alpha and ARM.
http://www.cl.cam.ac.uk/research/srg/netos/old-projects/pegasus/packages/2.1.2-pentium-port/report.html
You'll need the following:
- Eclipse
We're using the CDT, which is a plug-in to Eclipse, so of course you need Eclipse. The article uses Eclipse V3.2.
- Java Runtime Environment
We're building a C++ application, but we're using Eclipse. Eclipse is a Java application itself, so it needs a Java Runtime Environment (JRE). The article uses Eclipse V3.2, which requires a JRE of V1.4 or higher. If you want to also use Eclipse for Java development, you'll need a Java Development Kit (JDK).
- Eclipse C/C++ Development Toolkit (CDT)
This article is about the CDT, so you'll need it, of course. For instructions on installing the CDT on early versions of Eclipse, read "C/C++ Development with the Eclipse Platform" (developerWorks, 2003).
- Cygwin
If you're using Microsoft Windows®, you will find Cygwin — which provides a Linux®-like environment on Windows — helpful.
- GNU C/C++ Development Tools
The CDT uses the standard GNU C/C++ tools for compiling your code, building your project, and debugging the applications. These tools are GNU Compiler Collection (GCC) for C++ (g++), make, and the GNU Project Debugger (GDB). If you're a programmer using Linux or Mac OS X, there's a pretty good chance these tools are installed on your machine. The article contains instructions for setting up these tools for Windows.
Next, you'll want to choose Search for new features to install.
Figure 2. Search for new features
Figure 7. Select C/C++ perspective
Eclipse should now look something like Figure 8.
Figure 8. The C/C++ perspective
Eclipse organizes your code into projects, so we'll want to create a new project. Select File > New > Managed Make C++ Project.
Figure 9. New C++ project
Figure 10. New class
This should bring up the New Class wizard. We'll give our class a namespace lotto, and we'll call our class Lottery.
Figure 11. Lottery class
Listing 1. Lottery.h
Listing 2. Lottery.cpp
When you save the files, Eclipse builds your project automatically. Again, if you save the project, it should be compiled and you should see compilation messages in the console, as shown in Listing 3.
Listing 3. Compiler output in console
Figure 12. MegaLottery class
Figure 13. Choose base classes
We can enter the code for MegaLottery, as shown in Listings 4 and 5.
Listing 4. MegaLottery.h
Listing 5. MegaLottery.cpp
Listing 6. LotteryFactory.h
Listing 7. LotteryFactory.cpp
Listing 8. Main.cpp
- Learn about MinGW, the GNU C/C++ tools for Windows included with Cygwin.
- Download Cygwin, a Linux-like environment for Windows. It consists of two parts: a DLL that acts as a Linux API emulation layer providing substantial Linux API functionality, and a collection of tools that provide a Linux look and feel.
- The Eclipse C/C++ Development Toolkit (CDT) download information contains the latest information about the available versions of CDT.
http://www.ibm.com/developerworks/library/os-eclipse-stlcdt/
Read and write data in spreadsheet files, including .xls and .xlsx files.
Import spreadsheet data interactively using the Import Tool. Import or export spreadsheet data programmatically using the functions on this page. To compare primary import options for spreadsheet files, see Ways to Import Spreadsheets.
Select Spreadsheet Data Using Import Tool
This example shows how to import data from a spreadsheet into the workspace with the Import Tool.
Import a Worksheet or Range
This example shows how to import mixed numeric and text data from a spreadsheet into a table, using the readtable function.
Import All Worksheets from a File
This example shows how to import worksheets in an Excel file that contains only numeric data (no row or column headers, and no inner cells with text) into a structure array, using the importdata function.
Import and Export Dates to Excel Files
Microsoft Excel software can represent dates as text or numeric values.
Export to Excel Spreadsheets
This example shows how to export a numeric array and a cell array to a Microsoft Excel spreadsheet file, using the xlswrite function.
Import or Export a Sequence of Files
To import or export multiple files, create a control loop to process one file at a time.
Define Import Options for Tables
Typically, you can import tables using the readtable function.
Ways to Import Spreadsheets
You can import data from spreadsheet files into MATLAB® interactively, using the Import Tool, or programmatically, using an import function.
System Requirements for Importing Spreadsheets
If your system has Excel for Windows® installed, including the COM server (part of the typical installation of Excel):
http://uk.mathworks.com/help/matlab/spreadsheets.html?requestedDomain=uk.mathworks.com&nocookie=true
eow!! can u make a code for fibonacci series using for loop?? please..
What happen if I enter 100?
why does this code have errors in my turbo c?
hi can you create a program that generates N series of the fibonacci nos. with MAX of 50 using an array,, please..
#include <iostream>
using namespace std;
int main()
{
    int fib1 = 0;
    int fib2 = 1;
    int fib3;
    for (int i = 0; i < 10; ++i)
    {
        cout << fib1 << " ";
        fib3 = fib1 + fib2;
        fib1 = fib2;
        fib2 = fib3;
    }
    return 0;
}
Here is a simpler way and it deals with the first two numbers: -Migs
/* Given: n A non-negative integer.
Task: To find the nth Fibonacci number.
Return: This Fibonacci number in the function name.
*/
long fib(int n)
{
if ((n == 0) || (n == 1)) // stopping cases
return 1;
else // recursive case
return fib(n - 1) + fib(n - 2);
}
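For the earlier question about generating N Fibonacci numbers into an array with a MAX of 50, something along these lines would work (just a sketch; the function name and the long long type are my choices, since the 50th number overflows an int):

```cpp
#include <cassert>

// Fill an array with the first n Fibonacci numbers (0, 1, 1, 2, ...),
// capping n at MAX = 50. Returns how many numbers were stored.
const int MAX = 50;

int fill_fib(long long fib[], int n)
{
    if (n > MAX) n = MAX;
    for (int i = 0; i < n; ++i)
        fib[i] = (i < 2) ? i : fib[i - 1] + fib[i - 2];
    return n;
}
```

Asking for more than 50 simply gives you the first 50.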
Thanks eXceed69, looking at your code helped me figure out what I did wrong in my code
http://www.dreamincode.net/code/snippet790.htm
template<typename T>
class Dial : public Encoder {
private:
  T m_value;
  T m_min;
  T m_max;
  T m_step;
  ...
  Dial(Board::InterruptPin clk, Board::InterruptPin dt, Mode mode,
       T initial, T min, T max, T step) :
    Encoder(clk, dt, mode),
    m_value(initial),
    m_min(min),
    m_max(max),
    m_step(step)
  {}
};
...
Rotary::Dial<int> dial(Board::PCI6, Board::PCI7, Rotary::Encoder::FULL_CYCLE,
                       -100, -100, 10, 1);
Great! Thanks! The accelerated dial is nice to have when you need to enter/change a frequency for example. ...Idea N.6: dialing of a specific digit within a number. ...
The accelerated dial is such a great idea so I added a first attempt. It is also a template class with value data type and threshold parameter. Please try it out.
en.cycle - step increment size
.. 1
.. 1
.. 1
threshold kicks
1. 100
2. 200
3. 400
..
8. 256000
9. 256000
10. 256000
threshold off
11. 1
12. 1
..
// NRF24L01+ Wireless communication using default pins(SPI, D9, D10, D2)
NRF24L01P nrf(0xc05a0001);

// Luminance and temperature sensor based on analog pins(A2, A3)
#include "Cosa/Pins.hh"

namespace LTB {
  const Socket::addr_t dest = { 0xc05a0002, 7000 };
  Socket socket(&nrf, 6000);
  AnalogPin luminance(Board::A2);
  AnalogPin temperature(Board::A3);
  uint16_t nr = 0;

  struct msg_t {
    uint16_t nr;
    uint16_t voltage;
    uint16_t luminance;
    uint16_t temperature;
  };

  void send_update()
  {
    msg_t msg;
    msg.nr = nr++;
    msg.luminance = luminance.sample();
    msg.temperature = temperature.sample();
    msg.voltage = AnalogPin::bandgap(1100);
    socket.send(&msg, sizeof(msg), dest);
  }
};
To demonstrate how Socket::Device is going to be used the NRF24L01+ device driver has been rewritten. With 32-bit addresses and 16-bit port numbers (as IP) it is possible to send datagrams point-to-point between any number of nodes (limited only by number of sockets in receiving node). The current implementation and examples for the NRF24L01+ device driver shows datagrams only. The next step is to introduce client-server connection-oriented sockets.
...do you intend to support streaming also? The 32-byte buffer is much too small for me. I'm used to the RFM12B with its 66 byte (or more?) packet buffer and would like to send & receive 128 byte packets (max). In other words: I need some sort of packet streaming
Hi MarsWarrior. Thanks for your interest in this project. The Cosa network sub-system is work in progress. The goal is to hide as much of possible of the low level link layer from the application. And support a number of different Socket::Device's driver ranging from RF315/433, NRF24, Zigbee, BlueTooth to Ethernet (W5100). The NRF24L01+ device driver is the first one out.
For datagrams (connectionless communication) there will be an upper limit as this has to do with the amount of memory available for buffering. For the NRF24L01+ socket driver the limit is right now the hardware buffer size, 32 byte payload (minus the datagram header which is 8 bytes = 24 byte payload). The device driver is "raw" send and single package buffered receive. There are several ways to approach datagram fragmentation. One approach would be to use the NRF24L01+ internal FIFO and allow datagrams that could fill this queue. The max size would then be 3X32 minus header information and some additional sequence information (a byte). This would work for small devices such as ATtinyX4/X5 which have very limited SRAM (512 byte). For devices with more SRAM the issue of fragmentation is easier to solve with an intermediate buffer (such as IOBuffer).
I find your approach of building a complete system / infrastructure simply interesting because it is a very clean approach! My approach is the opposite: re-use of existing libraries and modify them as little as possible and/or wrap them up a bit to create the required uniform interface/class. As I'm using Nil/Chibi as RTOS, most changes are about changing (blocking) delays() with the more RTOS friendly versions... Queues, Events & buffers are used to create very loosely coupled parts.
The receiving side is indeed fairly simple by using an intermediate buffer. For the sending side I rely on the NRF24L01+'s auto-retry/auto-ack feature, where I assume that packages always arrive, meaning I don't need difficult retry mechanisms and ACK/NACKing packages. And since that assumption is not always true, I wondered what your approach would be. Since you are using the socket paradigm I assume you will try to mimic UDP sockets, but I have no idea how those work...
I guess that you are doing serious applications while I am trying to build frameworks through many rewrite and refactoring iterations. This is an attempt to get a set of components to "play together" and find "glue" that makes it easy to integrate them.
[...] One of the more interesting design iterations in Cosa is the refactoring of the UART device driver. It started as a subset of the original Arduino code and ended up with a generic IOBuffer template class and a very small footprint UART device driver. It actually handles serial communication from ATtiny up to Arduino Mega's many ports. The UART became a simple interrupt handler and a delegation to the IOBuffers for input and output. But all this takes time and rewriting/refactoring to evolve. To be able to go forward with more efficient frameworks and tooling I really need to study different usage patterns and requirements. That's why feedback such as yours is so important.
For a UDP style of sockets, NRF24L01+ transmit packets are too small and fragmentation is needed. The low-level retransmission and auto-ack is not always sufficient, as the application still needs to handle what to do when the max number of retransmissions occurs. At 2.4 GHz there are bursts of noise that cause that (e.g. a microwave oven). Also, typical high-level retransmission will have timeouts in the range of 0.1-1 s, while NRF24L01+ retransmissions take less than 1 ms @ 2 Mbps. A totally different scale. For sensors with messages that fit into the payload (32 bytes) the NRF24 is perfect as is, but for larger payloads and streaming a link-level protocol is needed. I am working slowly upwards in this chain of support.
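To make the arithmetic concrete, here is a rough sketch (in Python, not Cosa C++; the header and sequence-byte sizes follow the numbers quoted in this thread) of how a datagram could be split into and rebuilt from NRF24L01+-sized fragments:

```python
# Hypothetical sketch, not Cosa code: split a datagram into NRF24L01+
# frames. The radio payload is 32 bytes; the 8-byte datagram header
# leaves 24 bytes, and one of those is spent on a fragment sequence number.
RADIO_PAYLOAD = 32
HEADER = 8
SEQ = 1
CHUNK = RADIO_PAYLOAD - HEADER - SEQ  # 23 data bytes per fragment

def fragment(data):
    """Return (seq, chunk) pairs small enough for one radio frame."""
    return [(seq, data[i:i + CHUNK])
            for seq, i in enumerate(range(0, len(data), CHUNK))]

def reassemble(frames):
    """Concatenate fragments in sequence order."""
    return b"".join(chunk for _, chunk in sorted(frames))
```

A 60-byte datagram would then travel as three frames, which is exactly the kind of burst the three-deep hardware FIFO mentioned above could hold.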
OWI owi(Board::D7);
DS18B20 outdoors(&owi);
DS18B20 indoors(&owi);
DS18B20 basement(&owi);
...
void loop()
{
  ...
  DS18B20::convert_request(&owi, 12, true);
  indoors.read_scratchpad();
  outdoors.read_scratchpad();
  basement.read_scratchpad();
  trace << PSTR("indoors = ") << indoors
        << PSTR(", outdoors = ") << outdoors
        << PSTR(", basement = ") << basement
        << endl;
  ...
}
indoors = 24.75, outdoors = 24.68, basement = 24.75
public class Solution {
    public List<List<String>> findLadders(String endWord, String beginWord, Set<String> wordList) {
        wordList.add(endWord);
        Queue<Node> queue = new LinkedList<>();
        Set<String> visited = new HashSet<>(), unvisited = new HashSet<>();
        unvisited.addAll(wordList);
        int level = 0, minDist = Integer.MAX_VALUE;
        List<List<String>> result = new ArrayList<>();
        queue.add(new Node(beginWord, null, 0));
        visited.add(beginWord);
        while (!queue.isEmpty()) {
            Node current = queue.remove();
            if (current.val.equals(endWord) && current.dist <= minDist) {
                minDist = current.dist;
                addPath(result, current);
                continue;
            }
            if (current.dist > minDist) {
                break;
            }
            if (current.dist > level) {
                unvisited.removeAll(visited);
                level = current.dist;
            }
            addNeighbours(queue, visited, unvisited, current);
        }
        return result;
    }

    private void addNeighbours(Queue<Node> queue, Set<String> visited, Set<String> unvisited, Node current) {
        char[] chars = current.val.toCharArray();
        for (int i = 0; i < chars.length; ++i) {
            for (char c = 'a'; c <= 'z'; ++c) {
                char tmp = chars[i];
                chars[i] = c;
                String nbr = new String(chars);
                if (unvisited.contains(nbr)) {
                    queue.add(new Node(nbr, current, current.dist + 1));
                    visited.add(nbr);
                }
                chars[i] = tmp;
            }
        }
    }

    private void addPath(List<List<String>> result, Node current) {
        List<String> path = new ArrayList<>(current.dist);
        while (current != null) {
            path.add(current.val);
            current = current.parent;
        }
        result.add(path);
    }

    private class Node {
        String val;
        Node parent;
        int dist;

        private Node(String val, Node parent, int dist) {
            this.val = val;
            this.parent = parent;
            this.dist = dist;
        }
    }
}
In general BFS, nodes are added to visited on each iteration, but you do that differently. What is the difference compared to the standard BFS?
if (current.dist > level) {
    unvisited.removeAll(visited);
    level = current.dist;
}
@darren5 The difference is that in this task we want to find all shortest paths. This means that we might need to reuse some nodes and make sure that we don't prune paths that are still valid. Consider the following example:
        sit
       /   \
  hit       sat - bat
       \   /
        hat
In this example the result is
[ [hit, sit, sat, bat], [hit, hat, sat, bat] ].
Let's say we go up from hit. From sit we can reach sat and the total path length will be 2. If we directly remove sat, we won't be able to reach it from hat. Thus, we need to make sure that we remove nodes when there is no shorter path that leads to them. That's why we have if (current.dist > level).
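The same delayed-removal idea can be sketched compactly. This is a Python sketch, not the Java solution above: visited words are removed from the pool one whole level at a time, so every shortest path through a shared word survives.

```python
# Sketch of level-delayed removal for finding ALL shortest word ladders.
def all_shortest_paths(begin, end, words):
    words = set(words) | {end}
    parents = {begin: []}          # word -> list of parents on shortest paths
    frontier = {begin}
    found = False
    while frontier and not found:
        words -= frontier          # delayed removal: drop the whole level
        level = {}
        for w in frontier:
            for i in range(len(w)):
                for c in "abcdefghijklmnopqrstuvwxyz":
                    cand = w[:i] + c + w[i + 1:]
                    if cand in words:
                        level.setdefault(cand, []).append(w)
                        found = found or cand == end
        parents.update(level)
        frontier = set(level)

    def build(w):
        # Expand parent links back into full begin..end paths.
        if w == begin:
            return [[begin]]
        return [p + [w] for par in parents.get(w, []) for p in build(par)]

    return build(end) if found else []
```

Because a word collects every parent discovered during its level, paths that merge on a shared word (like sat in the example) are all reported.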
The Adapter Pattern is the first of the structural patterns presented by the Gang of Four (GoF) in their book, Design Patterns – Elements of Reusable Object-Oriented Software.
The Adapter pattern is a structural design pattern that enables incompatible interfaces from disparate systems to exchange data and work together. It is extremely useful when integrating toolkits, libraries and other utilities.
Apple MacBook users will be very familiar with adapters, which they frequently use to plug in various devices and network connections.
This is essentially a physical implementation of the Adapter pattern. Head First Design Patterns provides a really good succinct definition of the Adapter Pattern.
The Adapter pattern allows you to provide an object instance to a client that has a dependency on an interface that your instance does not implement. An Adapter class is created that fulfils the expected interface of the client, but implements the methods of the interface by delegating to different methods of another object.
Adaptive Code – Agile coding with design patterns and SOLID principles
Two types of Adapters
There are typically two kinds of Adapter pattern implementations:
- Object Adapters
- Class Adapters
Class adapters
Typically you will only really encounter this type of adapter when using C++ or other languages that enable multiple inheritance.
Object adapters
It is basically the only adapter pattern available to C# developers, and it is the type discussed in this post.
The Adapter Pattern
Converts the interface of a class into another interface the client expects.
Adapter lets classes work together that couldn’t otherwise because of incompatible interfaces.
The object Adapter pattern uses composition to delegate from the methods of the interface to those of a contained encapsulated class. This is a more common implementation of the Adapter pattern.
The main advantage of this pattern is that it enables the use of another library in an application that has an incompatible interface by using an adapter that does the conversion.
When to use the Adapter Pattern?
There are a number of situations where making use of the Adapter pattern can be a great solution:
- A class needs to be reused that does not have an interface that a client requires.
- Allow a system to use classes of another system that is incompatible with it.
- Allow communication between a new and already existing system that is independent of each other.
- Sometimes a toolkit or class library cannot be used because its interface is incompatible with the interface required by an application.
Simple Adapter pattern implementation.
In its most simple form the Adapter Pattern can just be a simple wrapper class which implements an interface. However, the implementation within the class may use a different set of classes to deliver the functionality required.
You may have an ITransport interface that defines a Commute method.

public interface ITransport
{
    void Commute();
}
However, the only class you have available is a Bicycle class that has a Pedal method, which will work for what you need.

public class Bicycle
{
    public void Pedal()
    {
        Console.WriteLine("Pedaling");
    }
}
The snag is that the method or class that is going to use it can only use classes that implement the ITransport interface.
We can use the Adapter pattern here to create a class that implements the ITransport interface, but actually just wraps the Bicycle class.

public class Transport : ITransport
{
    private Bicycle _bike => new Bicycle();

    public void Commute()
    {
        _bike.Pedal();
    }
}
You can now use the Transport class in your application, because it implements the ITransport interface.

class Program
{
    static void Main(string[] args)
    {
        var transport = new Transport();
        transport.Commute();
    }
}
That is all there is to the Adapter pattern. Even in this most simplistic of implementations you can see the power of enabling classes that may seem incompatible with your application, yet you can still make use of them. The adapter only needs to do as much as is required to adapt a class to work.
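For comparison, here is the same object adapter sketched in Python; the class and method names simply mirror the C# example above, nothing more.

```python
# A Python analogue of the C# object adapter above.
class Bicycle:
    def pedal(self):
        return "Pedaling"

class Transport:
    """Adapter: exposes commute() by delegating to a wrapped Bicycle."""
    def __init__(self):
        self._bike = Bicycle()

    def commute(self):
        return self._bike.pedal()

print(Transport().commute())  # Pedaling
```

The adapter adds no behaviour of its own; it only translates the expected interface (commute) into the one the wrapped class actually provides (pedal).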
Conclusion
The Adapter Pattern is a very simple pattern, but it can be quite powerful and extremely useful, and is really a worthwhile pattern to be aware of. Many developers will most likely have used the Adapter pattern without being explicitly aware of it.
There are more advanced implementation details of the Adapter pattern, but the fundamentals of the pattern remain the same.
Introduction
Sometimes we want to change a component when some event of another component is triggered. We can achieve this by posting an event with some data to the component we want to change, instead of modifying it directly in the event listener of the other component.
This seems a little bit weird and useless, but it can be helpful in some situations, such as implementing in-place editing of a grid with a renderer under MVVM (to be described in another article later).
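Framework aside, the idea can be sketched in a few lines: the listener only posts an event carrying data onto a queue, and the target component is modified solely by the handler registered for it. This is a hypothetical Python sketch, not ZK code; the names simply echo the composer shown below.

```python
from collections import deque

# Minimal event-posting sketch: handlers are keyed by (event, target),
# and only the registered handler ever touches the target.
queue = deque()
handlers = {}

def listen(event, target, fn):
    handlers[(event, target)] = fn

def post_event(event, target, data):
    queue.append((event, target, data))

def process():
    # Drain the queue, dispatching each event to its handler.
    while queue:
        event, target, data = queue.popleft()
        handlers[(event, target)](target, data)

class Label:
    def __init__(self):
        self.value = ""

lb = Label()
listen("onValueChange", lb, lambda t, d: setattr(t, "value", d["value"]))

# The "textbox listener" only posts; process() later applies the change.
post_event("onValueChange", lb, {"value": "hello"})
process()
assert lb.value == "hello"
```

The benefit is the same loose coupling the article describes: whoever observes the change never needs a reference to the logic that applies it.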
Prerequisite
(must)
Basic MVC with SelectorComposer
The Composer
PasseventTestComposer.java
Simply post an event to the label when the value of the textbox is changed, and update the label's value in that event's listener.
package test.event.passeventtest;

import java.util.HashMap;
import java.util.Map;

import org.zkoss.zk.ui.event.Event;
import org.zkoss.zk.ui.event.Events;
import org.zkoss.zk.ui.event.InputEvent;
import org.zkoss.zk.ui.select.SelectorComposer;
import org.zkoss.zk.ui.select.annotation.Listen;
import org.zkoss.zk.ui.select.annotation.Wire;
import org.zkoss.zul.Label;

/**
 * Test passing an event with data from one component to another.
 *
 * This seems a little bit weird and useless here, but it can be
 * helpful in some situations, such as implementing in-place editing
 * of a grid with a renderer under MVVM (to be described in another
 * article later).
 *
 * @author benbai
 */
public class PasseventTestComposer extends SelectorComposer {

    @Wire
    Label lb;

    /**
     * Post an event with the new value to the label when the value
     * of the textbox is changed, instead of modifying the value of
     * the label directly.
     * @param event
     */
    @Listen("onChange=#tbx")
    public void onChange$tbx(InputEvent event) {
        Map data = new HashMap();
        data.put("value", event.getValue());
        Events.postEvent("onValueChange", lb, data);
    }

    /**
     * Really modify the value of the label here.
     * @param event
     */
    @Listen("onValueChange=#lb")
    public void onValueChange$lb(Event event) {
        Map data = (Map) event.getData();
        String value = (String) data.get("value");
        lb.setValue(value);
    }
}
The ZUL Page
pass_event_to_other_component.zul
<zk>
    <!-- Tested with ZK 6.0.2 -->
    <window apply="test.event.passeventtest.PasseventTestComposer">
        <label id="lb" value="label" />
        <textbox id="tbx" />
    </window>
</zk>
The Result
View demo on line
Reference: ZK Developer's Reference / Event Handling / Event Firing
Download
Files at github
pass_event_to_other_component.zul
PasseventTestComposer.java
pass_event_to_other_component.swf
How to pass a KeyStroke event to child pages: I have the hierarchy MainParent -> MainParentChild -> MainParentChildsChild. Now I want to pass the KeyStroke event to the MainParentChildsChild page, e.g. if the user presses Ctrl+Q, Ctrl+S or Ctrl+A I have to pass these keys to the child pages.
How do you include child page? Using ?
The pages are included like this..
<tabpanels>
<tabpanel style="color:#333399;">
<include src="dashboard.zul" />
</tabpanel>
What I want here is to send the keypress event code to the child page's ViewModel. As I added
ctrlKeys="^a^s^d#f8" onCtrlKey="@command('ctrlKeyClick',code=event.getKeyCode())"
in my parent ZUL, and now I want the KeyCode in my children's ViewModel. How can I do this?
Child page can access parent vm directly, please refer to the sample at zkfiddle
I think you misunderstood my problem. In my case I have to send data to the child ViewModel from the parent ViewModel. In your example you are passing the Ctrl event from child to parent, but I need the opposite. I have asked a question here..
Maybe you can try Global Command Binding:'s%20Reference/MVVM/Data%20Binding/Global%20Command%20Binding
Thanks, I have done this with GlobalCommand. Can you please tell me if there is any drawback to using GlobalCommand?
One more thing: I have to fire the Ctrl key event on the active tab, as my parent window can open lots of tabs. If I use a global command, how will I figure out which tab is active so that I can fire the command only on that tab?
If the page is cached and then refreshed, it may bind command multiple times.
You can try passing the information of the selected tab (e.g., index, label, etc.) and use it as a condition in the command function.
Here in my case each tab contains a whole page (a different tab means a different page), and the tabs are created dynamically from Java code. When a tab/page is selected and the user is doing something with the Ctrl keys, I have to fire save, refresh, delete, etc.
Are the tabs under parent VM or child VM?
I have home.zul, which contains a menu with items; when you click on any menu item it will open a tab (a new tab for each menu item). Each tab contains a ZUL page where the user can do certain operations via Ctrl keys, like Ctrl+S for saving, Ctrl+R for refresh, Ctrl+Q for query, etc. If I use a global command I have to write that global command in each tab's ViewModel (i.e. each child of home). Now suppose I open one tab: the Ctrl key fires once. If I open another tab, the Ctrl key fires twice, while I want only the active tab's ViewModel to fire an event on the Ctrl key.
Maybe you can try the form binding ('s%20Reference/MVVM/Data%20Binding/Form%20Binding), if this still not work in your case, you can try customize the tabbox as needed, e.g., like this sample at zkfiddle ()
Thanks Let me try your example
Here you added the Ctrl key event on the tab, while in my case I added the Ctrl key event on the window component so it is available for each tab, because all my tabs are added from Java code and it is too complex to add the Ctrl keys to the tab component.
I've tested it and the onCtrlKey event fired for each tab without any problem, basically the parent component (tabbox) will receive the events of its children (all tabs)
By the way, the sample above firing event to selected tab child from tabbox instead of using global command
Is it possible to change this line?
Events.postEvent("onActionRequest", tp.getFirstChild().getFellow("div"), data);
As you are doing id binding; I would rather give a command name or something else, because I have plenty of different components, and if I add a static id in each page I would have to change plenty of places. Can we have any other solution here?
You can also use tp.getFirstChild().getFirstChild() if the first element of each inner page is the element that apply child vm.
In my case it is very hard to do that. Can we do something like postCommand, where we give a command name and the command will fire in the child ViewModel?
Maybe you can try global command in this way:
1. Store the selected tab in parent vm, and pass it to each child vm with global command
2. Store any component (maybe the first one under child vm) in the child zul within child vm, and continuously get parent component until find a tab then check whether it is the selected tab when global command triggered.
Ok, but if we are calling a GlobalCommand from the parent ViewModel and creating a global command in each of the child ViewModels, then it will fire the global command in every ViewModel, because the name is the same in each child ViewModel.
That's right, then you can find the parent tab (from child vm) and detect whether it is the selected tab (in the global event from parent vm).
Is it possible to run bind.postCommand("methodName", map); here?
According to the javadoc, postCommand will post a command to the current binder; it cannot post a command from the parent VM to a child VM. You can try using an EventQueue directly as needed.
There's an idea coming to my mind: since requests are thread safe, we can assume the create event of the tab and the init of the child VM occur at the same time. In other words, if we maintain two lists, say tabList and vmList, we can assume the order of tabs in tabList will match the order of child VMs in vmList, i.e., we can do something as below:
1. find the order of selected tab in parent vm
2. get the corresponding child vm based on the order found before in vmList
3. call binder.postCommand of child vm
just a rough concept, not tested
Thanks again Ben... I got another idea and it looks like it is working fine for me. What I did: I made a singleton class and added a Component variable with get/set methods. Now in each of my ViewModels' afterCompose() I call the setter of the Component variable on the singleton class, passing the current ViewModel's Component object, and in my HomeViewModel's Ctrl key event method I added this code, so it calls the method from the selected tab.
Component ctrlkeyComp = idBinder.getCompObject();
if (ctrlkeyComp != null) {
    Binder bind = (Binder) ctrlkeyComp.getAttribute("binder");
    if (bind == null)
        return;
    bind.postCommand("doActionInChildVM", map);
}
Yeah this seems a good way to go.
There is one issue with this approach of using a singleton class. Let us suppose I have opened tabs A, B, C, D and I am on tab D, so the singleton class holds the Component of D. Now if I click on tab A, the methods of tab D will be called, which is an issue.
Is there a sample that can reproduce this issue?
I have asked one question here
Ben, do you have any idea about this?
Seems it is the spec of the modal window; replied.
Thanks Ben for your help
I want to know if there is a way to use Target to test users on Safari 11 browsers. I believe you can do this using Target's personalization. Is this assumption correct? If so, what are the steps to create this in the tool?
When you go to Audiences you will find many prebuilt browser selection options. If you need something not already built out (i.e., Safari 11), you can use a Profile Script to evaluate anything in the user-agent string. Something like this for your profile script will return 'true' or 'false':
var br1 = user.browser.match(/Mac OS X/);
var br2 = user.browser.match(/Version\/11\./);
return (br1 != null && br2 != null);
I don't know exactly what you should be looking for to identify Safari 11, but this should work as a template for you. Then you can create an audience rule that keys off the true/false value of this profile script. All profile scripts, once activated, can be selected in an audience rule under Visitor Profile, with the name you gave the profile script prefaced with "user."
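Before activating the profile script, you can sanity-check the two patterns against sample user-agent strings. Below is a Python sketch of the same logic; the UA strings are illustrative. One caveat: iOS Safari user agents also contain "like Mac OS X", so this check may match more than desktop Safari 11.

```python
import re

# Mirrors the profile script: true when the UA mentions both
# "Mac OS X" and "Version/11." (illustrative UA strings below).
def is_safari_11(ua):
    return bool(re.search(r"Mac OS X", ua) and re.search(r"Version/11\.", ua))

safari11 = ("Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13) "
            "AppleWebKit/604.3.5 (KHTML, like Gecko) "
            "Version/11.0.1 Safari/604.3.5")
chrome = ("Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13) "
          "AppleWebKit/537.36 (KHTML, like Gecko) "
          "Chrome/62.0.3202.94 Safari/537.36")

assert is_safari_11(safari11)
assert not is_safari_11(chrome)
```

Chrome on macOS contains "Mac OS X" but not "Version/11.", which is why the second pattern is needed at all.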
What if you just want a simple map without all the GIS stuff? In this post, I’ll show you how to make a county-specific choropleth map using only free tools.
Here’s what we’re after. It’s an unemployment map from 2009.
Step 0. System requirements
This tutorial was written with Python 2.5 and Beautiful Soup 3. If you’re using a more recent version of either, you might have to modify the code. See comments below for tips.
Step 1. Prepare county-specific data
The first step of every visualization is to get the data. You can’t do anything without it. In this example we’re going to use county-level unemployment data from the Bureau of Labor Statistics. However, you have to go through FTP to get the most recent numbers, so to save some time, download the comma-separated (CSV) file here.
It was originally an Excel file. All I did was remove the headers and save as a CSV.
Step 2. Get the blank map
Luckily, we don’t have to start from scratch. We can get a blank USA counties map from Wikimedia Commons. The page links to the map in four sizes in PNG format and then one as SVG (
USA_Counties_with_FIPS_and_names.svg‎). We want the SVG one. Download the SVG file onto your computer and save it as
counties.svg.
Blank US counties map in SVG format
The important thing here, if you’re not familiar with SVG (which stands for scalable vector graphics), is that it’s actually an XML file. It’s text with tags, and you can edit it in a text editor like you would a HTML file. The browser or image viewer reads the XML. The XML tells the browser what to show.
Anyways, we’ve downloaded our SVG map. Let’s move on to the next step.
Step 3. Open the SVG file in a text editor
I want to make sure we’re clear on what we’re editing. Like I said in Step 2, our SVG map is simply an XML file. We’re not doing any photoshop or image-editing. We’re editing an XML file. Open up the SVG file in a text editor so that we can see what we’re dealing with.
You should see something like this:
SVG is just XML that you can change in a text editor.
We don’t care so much about the beginning of the SVG file, other than the
width and
height variables, but we’ll get back to that later.
Scroll down some more, and we’ll get into the meat of the map:
The path tags contain the geographies of each county.
Each path is a county. The long run of numbers are the coordinates for the county's boundary lines. We're not going to fuss with those numbers.
We only care about the beginning and very end of the path tag. We're going to change the style attribute, namely the fill color. We want the darkness of the fill to correspond to the unemployment rate in each given county.
We could change each one manually, but there are over 3,000 counties. That would take too long. Instead we’ll use Beautiful Soup, an XML parsing Python library, to change colors accordingly.
Each path also has an id, which is actually something called a FIPS code. FIPS stands for Federal Information Processing Standard. Every county has a unique FIPS code, and it's how we are going to associate each path with our unemployment data.
Step 4. Create Python script
Open a blank file in the same directory as the SVG map and unemployment data. Save it as color_map.py.
Step 5. Import necessary modules
Our script is going to do a few things. The first is to read in our CSV file of unemployment data, so we'll import the csv module in Python. We're also going to use Beautiful Soup later, so let's import that too.

import csv
from BeautifulSoup import BeautifulSoup
Step 6. Read in unemployment data with Python
Now let’s read in the data.
# Read in unemployment rates
unemployment = {}
reader = csv.reader(open('unemployment09.csv'), delimiter=",")
for row in reader:
    try:
        full_fips = row[1] + row[2]
        rate = float( row[8].strip() )
        unemployment[full_fips] = rate
    except:
        pass
We read in the data with csv.reader() and then iterate through each row in the CSV file. The FIPS code is split up in the CSV by state code (second column) and then county code (third column). We put the two together for the full FIPS county code, making a five digit number.
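One caveat worth checking: the state and county codes are meant to be fixed-width strings ("01", "001"), and a spreadsheet round-trip can strip their leading zeros. If that has happened to your copy of the CSV, pad the codes back before concatenating, for example:

```python
# If leading zeros were stripped, zero-pad the codes; otherwise keys
# like "1001" will never match the five-digit FIPS ids in the map.
def full_fips(state_code, county_code):
    return state_code.strip().zfill(2) + county_code.strip().zfill(3)

assert full_fips("1", "1") == "01001"     # Autauga County, AL
assert full_fips("06", "037") == "06037"  # Los Angeles County, CA
```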
Rate is the ninth column. We convert it to a float since it's initially a string when we read it from the CSV.
The rate is then stored in the unemployment dictionary with the full_fips as key.
Cool. The data is in. Now let’s load the SVG map, which remember, is an XML file.
Step 7. Load county map
Loading the map is straightforward. It’s just one line of code.
# Load the SVG map
svg = open('counties.svg', 'r').read()
The entire string is stored in svg.
Step 8. Parse it with Beautiful Soup
Loading svg into Beautiful Soup is also straightforward.
# Load into Beautiful Soup
soup = BeautifulSoup(svg, selfClosingTags=['defs','sodipodi:namedview'])
Step 9. Find all the counties in the SVG
Beautiful Soup has a nifty findAll() function that we can use to find all the counties in our SVG file.

# Find counties
paths = soup.findAll('path')
All paths are stored in the paths array.
Step 10. Decide what colors to use for map
There are plenty of color schemes to choose from, but if you don’t want to think about it, give the ColorBrewer a whirl. It’s a tool to help you decide your colors. For this particular map, I chose the PurpleRed scheme with six data classes.
ColorBrewer interface for easy, straightforward way to pick colors
In the bottom, left-hand corner, are our color codes. Select the hexadecimal option (HEX), and then create an array of those hexadecimal colors.
# Map colors
colors = ["#F1EEF6", "#D4B9DA", "#C994C7", "#DF65B0", "#DD1C77", "#980043"]
Step 11. Prepare style for paths
We’re getting close to the climax. Like I said earlier, we’re going to change the
style attribute for each path in the SVG. We’re just interested in fill color, but to make things easier we’re going to replace the entire
style instead of parsing to replace only the color.
# County style 'font-size:12px;fill-rule:nonzero;stroke:#FFFFFF;stroke-opacity:1; stroke-width:0.1;stroke-miterlimit:4;stroke-dasharray:none;stroke-linecap:butt; marker-start:none;stroke-linejoin:bevel;fill:'
Everything is the same as the previous style except we moved fill to the end and left the value blank. We're going to fill that in in just a second. We also changed stroke to #FFFFFF to make county borders white. We didn't have to leave that value blank, because we want all borders to be white while fill depends on unemployment rate.
Step 12. Change color of counties
We’re ready to change colors now! Loop through all the paths, find the unemployment rate from the
unemployment dictionary, and then select color class accordingly. Here’s the code:
# Color the counties based on unemployment rate for p in paths: if p['id'] not in ["State_Lines", "separator"]: # pass
Notice the if statement. I don't want to change the style of state lines or the line that separates Hawaii and Alaska from the rest of the states.
I also hard-coded the conditions to decide the color class because I knew beforehand what the distribution is like. If, however, you didn't know the distribution, you could use something like this: float(len(colors)-1) * float(rate - min_value) / float(max_value - min_value).
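That formula, with clamping added so the boundary values stay inside the palette, could look like this (the 1.2-30.1 min/max comes from the range of this dataset, as noted in the comments below):

```python
# Linear interpolation of a rate onto the 0..5 palette indices,
# clamped so the extremes still map to valid colors.
def color_class(rate, min_value, max_value, n_colors=6):
    i = int(float(n_colors - 1) * (rate - min_value) / (max_value - min_value))
    return max(0, min(n_colors - 1, i))

assert color_class(1.2, 1.2, 30.1) == 0   # lightest color
assert color_class(30.1, 1.2, 30.1) == 5  # darkest color
```

Note that equal-width interpolation and the hard-coded thresholds above will produce different maps when the data are skewed, which is why knowing the distribution matters.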
Step 13. Output modified map
Almost done. We just need to output the newly colored SVG map.
# Output map
print soup.prettify()
Save your Python script. For the sake of completeness, here’s what your Python script should now look like:
### color_map.py

import csv
from BeautifulSoup import BeautifulSoup

# Read in unemployment rates
unemployment = {}
min_value = 100; max_value = 0
reader = csv.reader(open('unemployment09.csv'), delimiter=",")
for row in reader:
    try:
        full_fips = row[1] + row[2]
        rate = float( row[8].strip() )
        unemployment[full_fips] = rate
    except:
        pass

# Load the SVG map
svg = open('counties.svg', 'r').read()

# Load into Beautiful Soup
soup = BeautifulSoup(svg, selfClosingTags=['defs','sodipodi:namedview'])

# Find counties
paths = soup.findAll('path')

# Map colors
colors = ["#F1EEF6", "#D4B9DA", "#C994C7", "#DF65B0", "#DD1C77", "#980043"]

# County style
path_style = 'font-size:12px;fill-rule:nonzero;stroke:#FFFFFF;stroke-opacity:1;stroke-width:0.1;stroke-miterlimit:4;stroke-dasharray:none;stroke-linecap:butt;marker-start:none;stroke-linejoin:bevel;fill:'

# Color the counties based on unemployment rate
for p in paths:
    if p['id'] not in ["State_Lines", "separator"]:
        try:
            rate = unemployment[p['id']]
        except:
            continue
        if rate > 10:
            color_class = 5
        elif rate > 8:
            color_class = 4
        elif rate > 6:
            color_class = 3
        elif rate > 4:
            color_class = 2
        elif rate > 2:
            color_class = 1
        else:
            color_class = 0
        color = colors[color_class]
        p['style'] = path_style + color

# Output map
print soup.prettify()
Step 14. Run script and save new map
Now all we have to do is run our script and save the output.
Running script in OS X Terminal
We’re done! Open your SVG in FIrefox or Safari, and you should see a nicely colored map similar to the one below.
Oh wait, there’s one teeny little thing. The state borders are still dark grey. We can make those white by editing the the SVG file manually.
We open our new SVG in a text editor, and change the stroke to
#FFFFFF from
#221e1f around line 15780. Do something similar on line 15785 for the separator. Okay. Now we’re done.
Where to Go From Here
While this tutorial was focused on unemployment data, I tried to keep it general enough so that you could apply it to other datasets. All you need are data with FIPS codes, and it should be fairly straightforward to hack the above script.
You can also load the SVG into Adobe Illustrator or your favorite open source vector art software and edit the map from there, which is what I did for the final graphic.
So go ahead. Give it a try. Have fun.
For more examples, guidance, and all-around data goodness like this, order Visualize This, the FlowingData book on visualization, design, and statistics.
This is a superb demonstration that there exist ample raw materials lying around for us to use, and that data don’t care how they’re rendered. If your objective is to illustrate a point, results can be minutes away.
If you’re doing work on the web you would be well served by considering the new cartographer.js library:
yeah, i posted about that last week:
it only works at the state level right now though.
Doh, of course, that’s where I learned about Chris’ latest project!
Nice approach, thanks for sharing :)
Thanks Nathan for the step-by-step instructions
for those interested in world map choropleths (thes?), the kind folks at Nagoya University also propose a python script:
Would you post the original site where you got the unemployment data excel file please?
@Matt: The Bureau of Labor Statistics provides state-by-state unemployment data as well as comprehensive data on the nation’s workforce.
Got it. For the viewers at home, here’s the hosted table for 2008…
or see the page linking to it here…
and scroll down to the heading “COUNTY DATA”. Looks like they have tables going back to 1990. Anyone want to make an animated version of this map showing the changes from 1990 to present?
Actually, I don’t see any county unemployment statistics for 2009, anyone see where that info is located?
Matt, here are the instructions I got from BLS (I had to email them) on how to get up-to-date county data via FTP:
Very nice article, but I think the counties map you want to link to is USA_Counties_with_FIPS_and_names.svg. USA_Counties_with_names.svg does not contain the FIPS data.
thanks, scott for pointing that out. fixed.
What type of classification does ColorBrewer use? It could drastically change the appearance of the map depending on the distribution of the data
ColorBrewer doesn’t deal with the classification; he’s classified the data in the code in step 12, in this case dividing it by equal intervals.
I see now, thanks! Probably would have behooved me to actually look at the code.
It’s not really accurate to call BeautifulSoup “the XML parsing Python library”, it’s only one of many python libraries that can parse XML.
right. i just meant that it’s, well, an XML parsing python library.
Really cool; thanks!
Would be nice to see the final result allow interactive comparison of different years; e.g. use a single map. How would one do that? Use some JS to switch the color values depending on year?
How about some R code?! It would be much more succinct.
I tried, and then I got fed up with undocumented details that I should know about. Care to point to some guides?
Here’s my attempt –. A better county data source would result in more succinct code.
Also, what’s with your choice of colour scale? The data range from 1.2 to 30.1.
thanks, i’ll check that out.
i based my color scale on the BLS maps:
I took a crack at it too, first using maps and then switching over to ggplot2, which I’m quickly learning to love…
Hi, very cool tutorial. For preparing the data, you could take a look at a web API I created: Elev.at (). It converts an XLS or Delimited File (e.g. CSV) into XML in real-time. It can also return results in JSON/P for consumption in JavaScript.
Following along, we construct an unemployment dictionary using five digit numbers as keys. But the SVG file uses strings (e.g. AK_North_Slope) as keys.
rate = unemployment[p['id']] thus never finds a match.
stephen, use this SVG file instead (just updated link in the post):
sorry, i linked to the wrong SVG originally.
There we go! And great tutorial BTW.
Awesome!
Great tutorial. More posts like this please!
Ugh, Python + BeautifulSoup? How would anyone guess that that is an XML Parser? In Perl we have XML::Parser (SVG::Parser too, I wonder what the Python equivalent is called).
At least Kudos for using free tools until you then built a flash site with it.
flash? there are ways to do that for free too. that’s for another post though.
Wondering if the flash comment is regarding the drag and zoom capability of your final map. I'd like to know how you did that. It would be a really great follow up post. This one's definitely a break-through post, and thanks very much. I'll be emailing this one around my office.
If you want to do the same with Google Earth .KML files check out my project at:
Thanks for the tutorial, it's really cool and helps beginners like myself try Python for visualisation.
One question, I liked the font you used in the unemployment map for the text – can you post what it was as it seemed to really complement the image?
eoin, it was nothing fancy. georgia for the copy and avenir for the header.
This is a much less dramatic map if you spread the map colors evenly between the range of values.
Awesome post!
If you decide to do another tutorial, could it be using Processing :)?
Thanks for the great blog.
Here’s a version of Nathan’s map in Processing:
See the sourcecode and for details of a couple of changes from the files posted here.
@Tom: That’s awesome. Only starting to learn Processing now, this gives me hopes :)
For web-only use, wouldn’t it be simpler to create a separate CSS file adding the coloring styles to the SVG? You could probably do the processing in Excel, frankly, and export a text file from there, rather than writing a custom script to change every path in the original SVG file.
It wouldn’t be the most elegant CSS, but what I have in mind is
#02220 {fill:medium}
#02260 {fill:darker}
…
Basically calculate the fill color, match it up with a FIPS id, and use CSS to style the path by referring to its id.
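The CSS idea above can be sketched in a few lines of Python. The FIPS ids and rates here are made-up sample values, and the palette is a hypothetical ColorBrewer-style set of purples with equal-interval thresholds, standing in for whatever classification the tutorial uses:

```python
# Sketch: compute a fill color per FIPS id and emit one CSS rule per county.
# The data, palette, and thresholds below are illustrative stand-ins.
unemployment = {"02220": 4.1, "02260": 9.7}   # FIPS id -> unemployment rate
colors = ["#F1EEF6", "#D4B9DA", "#C994C7", "#DF65B0", "#DD1C77", "#980043"]

def fill_for(rate):
    # Equal-interval buckets: <=2, <=4, <=6, <=8, <=10, then everything above.
    thresholds = [2, 4, 6, 8, 10]
    for color, limit in zip(colors, thresholds):
        if rate <= limit:
            return color
    return colors[-1]

# One rule per county, styling each <path> by its id attribute:
css = "\n".join("#%s {fill:%s}" % (fips, fill_for(rate))
                for fips, rate in sorted(unemployment.items()))
print(css)
```

The resulting stylesheet can then be linked from (or embedded in) the SVG, leaving the original path data untouched.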
As someone who’s never used Python, I found this intriguing yet ultimately confusing—especially Step 4, which assumes readers will know how to start writing a script etc. You don’t have to include a Python tutorial, but a little more explanation of how you get from Step 3 to 4 would be nice. I spent over an hour trying to get the script up and running. Is this how-to for pros only?
Step 4 is really just creating a new blank file. From there just sequentially add in the text from each step to that file. At the end of the process your file should be very similar (or identical) to the sample of the complete script at the end of the article.
I did that and kept getting error messages when I ran the script.
If the goal is to make thematic maps with free tools, it's surprising R wasn't mentioned. Several examples of creating the same chart in R (using shape files instead of SVG maps) are here:
As a bonus, R provides an environment for not just plotting but also analyzing the unemployment data.
Great tutorial, Nathan!
For those of you using BI tools like Pentaho, I’ve created a proof of concept of the solution above using Pentaho data integration, which is also free. Pentaho is a little less intimidating than scripting in Python, since it has a Visio-style GUI. My solution required a bit of XML knowledge, but the actual formulas I used in the solution were quite rudimentary.
You could easily tweak the Pentaho solution to load the data from Excel, a database or any other type of data source.
(Yeah, I know my CSS is a little ugly, I’m working on it!)
Fascinating tutorial, Nathan… but after spending about 8 hours, I find some gaps. Of course, I find gaps… because I’m Python (and most other code-illiterate). Sticking point for me seems to be figuring out WHERE and HOW to install BeautifulSoup in a place where Python will see it.
Can you explain that step?
Paul, start Python, import sys, and look at sys.path, which gives you a list of folders, with complete paths. One (or more) of these folders are called “site-packages”. Copy BeautifulSoup.py to there.
On my machine, it’s “/Library/Python/2.5/site-packages”.
New to code and confused still. How do I “look” at sys.path? Is that $echo or list or what?
You need to start an interactive Python session, and how you do it depends on your platform. When you get the >>> prompt, type “import sys” and enter, then type “sys.path” to see the list of folders.
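The inspection step described above can also be run as a tiny script rather than an interactive session:

```python
import sys

# Print every folder Python searches when importing a module.
# Copying BeautifulSoup.py into any "site-packages" entry listed
# here makes "from BeautifulSoup import BeautifulSoup" resolve.
for folder in sys.path:
    print(folder)
```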
Here’s my map:
It's a movie of twenty years of data, 1990-2009. Rather than group counties together and color each group exactly the same, I used the same color (dark red #800000) but made the opacity proportional to the rate (specifically, fill-opacity = min(rate*5, 1), so a 20% rate is fully opaque and 0% is transparent; the background is white). And I thought it looked best with zero-width boundaries.
20 frames; 0.5 sec/frame.
The biggest PITA was saving each of the 20 SVG files in Illustrator, to PNG so GraphicConverter could use them.
Any suggestions for how to do this based on congressional districts rather than counties? I have CD data that this would work great for.
The government supplies lat/long coordinates for congressional districts here:
That data should be compatible with an R-based methods using ggplot, similar to what Hadley and I have detailed above. You’d have to do a little work to line up your data with the coordinate data, but it shouldn’t be too bad.
Unfortunately I’m traveling until tomorrow and can’t really take a look at the data myself right now. Pls let me know if this gives you some direction, happy to help further if I can.
I'm having a strange problem that I hope somebody can help me with. The script runs OK, but in addition to modifying the path tags as expected, it's also changing the sodipodi:namedview and defs id="defs9561" tags. I'm stuck on this too.
I installed BeautifulSoup as in the comments above.
I used the script above except I added
import csv
from BeautifulSoup import BeautifulSoup
(at the beginning)
and rejoined the line that had been copied over as three lines back into one line.
Now when I try "python colorize_svg.py > unemployment2009.svg", it makes the svg file, but the image part is blank (but the right size). The svg file has plenty of code in it, but it appears to not display anything. The fill colors have been changed as expected, and I don't see any errors, but something isn't working.
Any ideas?
Now that I see Russel’s post, it looks like I’m having the same problem. I’m running Mac OS 10.6, Python 2.6.1, and BeautifulSoup 3.0.7a
I may be a bit slow, but I think I’ve figured this out. Beautiful Soup is doing a lot more than just parsing the svg file. The prettify command tells it to fix anything it thinks is wrong with the file, and it doesn’t seem to like the embedded closing tags in the sodipodi:namedview and defs id tags, so it strips them out and adds explicit closing tags at the end of the file, which breaks it.
The easy solution is to edit the original svg file to add explicit closing tags where they should have been in the first place. This keeps Beautiful Soup happy and works fine. I’ll post a link to my corrected version in a bit.
Thanks, Russell, that works great (and sorry I spelled your name wrong earlier) :)
Here’s a link to my modified counties.svg file. It should fix the problem if your svg output file doesn’t render but everything else seems to be OK:
Thanks Russell, that .svg file of yours just helped me finally solve the puzzle and get the whole thing to work :)
I ran into this too, and solved it by running the output of BeautifulSoup.prettify() through the following function:
import re

# Patterns for the stray closing tags that BeautifulSoup appends at the
# end of the file (the exact regexes were mangled by the blog software;
# these match the two offending tags described above):
reSodipodi = re.compile(r"</sodipodi:namedview>")
reDefs = re.compile(r"</defs>")

def CleanSoup(s):
    'Remove the problematic tags written by BeautifulSoup.'
    s = reSodipodi.sub('', s)
    s = reDefs.sub('', s)
    return s
This trashes both the beginning and the end tags, which are unneeded to display the SVG.
Nice tutorial Nathan!
(Lots of calls for R solutions I see, though none of the Revolutions challengers included Alaska or Hawaii in their solution.)
I’m personally interested in (a) keeping things dynamic and (b) making things web-native.
With that in mind, I ported some of your code over to javascript, hacked the CSV into JSON/P-ish form and used SVGWeb to render the county shapes:
I haven’t used SVG in a long time so this was a nice refresher. SVGWeb works well: it doesn’t mess with Safari or Firefox but I tested this in IE8 too and got comparable results* as promised.
*The only tweak I made was to set stroke-opacity to 0.1 instead of stroke-weight, because the flash renderer used by SVGWeb didn’t like fractional line weight.
PS I noticed two errors in the SVG file from Wikipedia. I’m trying to figure out how to correct it there, in the meantime here’s the one I’m using if you want to avoid the blank (grey) counties:
Hey Tom, when I was running through the exercise to render the map using Pentaho’s tools I found that the Inkscape generated XML is a little less than standards compliant.
You might want to open up the SVG in Inkscape and save it as a “Plain SVG” which will strip out a lot of junk attributes.
oh wow, that’s awesome. thanks, tom
I kept getting errors until someone pointed out that these commands have to be in the colorize_svg.py script.
import csv
from BeautifulSoup import BeautifulSoup
They’re missing from the “complete Python script” referenced at the end.
Very basic question here – how do I run the script (step 14) in Windows?
My file path is C:\Users\Owner\Desktop\thematicmaps\colorize_svg.py
I was reading about making python scripts ‘double-clickable’ and it was a bit over my head, but when I double-click to open it, it opens python quickly and then closes. I was told to use CMD but I’m a newbie and don’t know what to type to run the script with the proper output. Also – I’m using Python 2.6.4 Windows installer (Windows binary — does not include source) and I put Beautiful Soup in the Site Packages folder correctly as well.
Start –> Run –> Type in ‘cmd’
cd over to the directory you mention above
colorize_svg.py > unemployment2009.svg
A new file called “unemployment2009.svg” will appear in the same directory.
Drag the file to your browser. Et voila!
Thank you! I did get an error though –
“C:\Users\Owner\Desktop\thematicmaps\colorize_svg.py”, line 31
path_style = ‘font-size:12px;fill-rule:nonzero;stroke:#FFFFFF;stroke-opacity:1;
^
SyntaxError: EOL while scanning string literal
When you copy that line over, it turns into three lines

path_style = 'font-size:12px;fill-rule:nonzero;stroke:#FFFFFF;stroke-opacity:1;
stroke-width:0.1;stroke-miterlimit:4;stroke-dasharray:none;stroke-linecap:butt;
marker-start:none;stroke-linejoin:bevel;fill:'

but should really be one :)

path_style = 'font-size:12px;fill-rule:nonzero;stroke:#FFFFFF;stroke-opacity:1;stroke-width:0.1;stroke-miterlimit:4;stroke-dasharray:none;stroke-linecap:butt;marker-start:none;stroke-linejoin:bevel;fill:'
Thanks Mollie that did the trick, I appreciate you taking the time to look at that!
Nathan, awesome script! But, help! Got all the way through the end, ran it in Firefox and this is what I got:
XML Parsing Error:
Location:
Line Number 18, Column 101:
I don’t have a skins-1.5 directory, but somehow the code refers to one? What’s up?
Hmm, comment fail. Anyway, line 18 in that file refers to this href:
href=”/skins-1.5/common/shared.css?243z2″
Never mind – I used the file that Tom provided above as the base and everything works now.
Thanks!
Thanks for the great tutorial! I’m sure I’ll make good use of it. When I have some free time on my hands I’m going to try to figure out how to do something similar with world maps. It should be pretty straight forward now that I understand this tutorial.
Also, the comments on this post are very helpful, both in pointing out other ways to make similar maps and in clarifying how to make this one this way.
No more filling in maps one region at a time in Photoshop. Hooray!
Wow! This looks awesome. I’m going to have to come back later when I have some time to walk through it. Thanks for posting this!
Thanks for this useful tutorial. I tested interactive mapping with SVG a while ago for a paper, and produced some classification algorithms (and proportional symbols sizing too) in JS which could be interesting.
The paper :
The new SVG version :
Thanks for a very interesting tutorial — Weird problem:
import csv
from library/frameworks/Python.framework/Versions/3.1/lib/python3.1/site-packages/BeautifulSoup-3.1.01/BeautifulSoup import BeautifulSoup
Returns a syntax error. Any ideas? Thanks ever so much!
Hi
When I run the program, I get the following, which is Greek to me :-)
Traceback (most recent call last):
File “/Users/MaryJane/Desktop/PythonFiles/colorize_svg.py”, line 4, in
from BeautifulSoup import BeautifulSoup
File “/Library/Frameworks/Python.framework/Versions/3.1/lib/python3.1/BeautifulSoup.py”, line 427
raise AttributeError, “‘%s’ object has no attribute ‘%s'” % (self.__class__.__name__, attr)
^
SyntaxError: invalid syntax
Python 3 is a significant departure from Python 2.x, and many third-party libraries, evidently including BeautifulSoup, are broken and need to be rewritten by their developer. You’re better off using Python 2.6 for this.
Not knowing anything about python, I copied and pasted the code in step 13, following ### colorize_svg.py into a new document that I titled colorize_svg.py. I also removed any line breaks in the county style section, which seemed to give me problems initially.
When I run the script, it does generate a file titled unemployment2009.svg, but all I see is a blank white document. When I open the document in a text editor, there is definitely information in it, though I cannot yet decipher it (being the layman that I am).
I am running python 2.5.1 and have beautiful soup 3.0.8 in the same folder as all of my unemployment/colorize files.
Any ideas?
A couple of tags in the SVG file get mangled by BeautifulSoup. See earlier comments for solutions.
I just used this method to create a map for a project I’m working on. Thanks for the tutorial!
FIPS is now INCITS,
see
Hi Nathan
Excellent tutorial. I am a little behind the curve, but I have taken your tutorial and converted into a Processing App for the time series 2005 to September 2009.
You might have to increase your Java Heap Space to run it, there is a lot of data to load! Thanks again, I can’t wait to work through the next one!
Regards
Anthony
The link might help:
%matplotlib inline
import matplotlib.pyplot as plt

# Mandatory imports...
import numpy as np
import torch
from torch import tensor
from torch.nn import Parameter, ParameterList
from torch.autograd import grad

# Custom modules:
from model import Model                       # Optimization blackbox
from display import plot, train_and_display
from torch_complex import rot                 # Rotations in 2D
We've seen how to work with unlabeled segmentation masks encoded as measures $\alpha$ and $\beta$: the key idea here is to use a data fidelity term which is well-defined up to resampling.
def scal( α, f ) :
    "Scalar product between two vectors."
    return torch.dot( α.view(-1), f.view(-1) )
Most often, we work with measures that have been rigidly aligned with each other:
class IsometricRegistration(Model) :
    "Find the optimal translation and rotation."
    def __init__(self, α) :
        "Defines the parameters of a rigid deformation of α."
        super(Model, self).__init__()
        self.α, self.x = α[0].detach(), α[1].detach()
        self.θ = Parameter(tensor( 0. ))       # Angle
        self.τ = Parameter(tensor( [0.,0.] ))  # Position
    def __call__(self, t=1.) :
        # At the moment, PyTorch does not support complex numbers...
        x_t = rot(self.x, t*self.θ) + t*self.τ
        return self.α, x_t
    def cost(self, target) :
        "Returns a cost to optimize."
        return fidelity( self(), target)
from sampling import load_measure
α_0 = load_measure("data/hippo_a.png")
β   = load_measure("data/hippo_b.png")
print("Number of atoms : {} and {}.".format(len(α_0[0]), len(β[0])))

# In practice, we often quotient out affine deformations:
isom = IsometricRegistration(α_0)
isom.fit(β)
α = isom()
α = α[0].detach().requires_grad_(), α[1].detach().requires_grad_()

# Let's display everyone:
plt.figure(figsize=(10,10))
plot(β,   "blue",   alpha=.7)
plot(α_0, "purple", alpha=.2)
plot(α,   "red",    alpha=.7)
plt.axis("equal")
plt.axis([0,1,0,1])
plt.show()
Number of atoms : 252 and 182. It 1:0.028. It 2:0.009. It 3:0.008. It 4:0.008. It 5:0.008. It 6:0.008. It 7:0.008. It 8:0.008. It 9:0.007. It 10:0.007. It 11:0.006. It 12:0.006. It 13:0.006. It 14:0.006. It 15:0.007. It 16:0.006. b'CONVERGENCE: REL_REDUCTION_OF_F_<=_FACTR*EPSMCH'
Let us now present some weakly parametrized models that let us study the variability between $\alpha$ and $\beta$ in a continuous way. In the previous notebook, we've seen that the "free particles" setting could be implemented as follows - with control variables $(p_i)\in\mathbb{R}^{N\times 2}$:
class L2Registration(Model) :
    "Find the optimal mass positions."
    def __init__(self, α) :
        "Defines the parameters of a free deformation of α."
        super(Model, self).__init__()
        self.α, self.x = α[0].detach(), α[1].detach()
        self.p = Parameter(torch.zeros_like(self.x))
    def __call__(self, t=1.) :
        "Applies the model on the source point cloud."
        x_t = self.x + t*self.p
        return self.α, x_t
    def cost(self, target) :
        "Returns a cost to optimize."
        return fidelity( self(), target)

l2_reg = train_and_display(L2Registration, α, β)
It 1:0.006. It 2:0.005. It 3:0.001. It 4:0.001. It 5:0.000. It 6:0.000. It 7:0.000. It 8:0.000. It 9:0.000. It 10:0.000. It 11:0.000. It 12:0.000. It 13:0.000. It 14:0.000. It 15:0.000. It 16:0.000. It 17:0.000. It 18:0.000. It 19:0.000. It 20:0.000. It 21:0.000. It 22:0.000. It 23:0.000. It 24:0.000. It 25:0.000. It 26:0.000. It 27:0.000. It 28:0.000. It 29:0.000. It 30:0.000. It 31:0.000. It 32:0.000. It 33:0.000. It 34:0.000. It 35:0.000. It 36:0.000. b'CONVERGENCE: REL_REDUCTION_OF_F_<=_FACTR*EPSMCH'
Is this a desirable result?
Smoothing. Crucially, the "free particles" setting does not enforce any kind of smoothness on the deformation, and can induce tearings in the registration. Using a kernel norm as a fidelity, we thus observe left-out particles that lag behind because of the fidelity's vanishing gradients. Going further, we may alleviate this problem with Wasserstein-like fidelities... But even then, we may then observe mass splittings instead of, say, rotations when trying to register two star-shaped measures - try it with an "X" and a "+"!
To alleviate this problem, a simple idea is to smooth the displacement field. That is, to use a vector field
$$ \begin{align} v(x)~=~\sum_{i=1}^N k(x-x_i)\,p_i ~=~ \big(k\star \sum_{i=1}^N p_i\,\delta_{x_i}\big)(x) \end{align} $$
to move around our Dirac masses, with $k$ a blurring kernel function - e.g. a Gaussian.
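This smoothed displacement field can be sketched in a few lines; here using plain NumPy rather than the notebook's PyTorch helpers, with a Gaussian kernel of arbitrary bandwidth and made-up point data:

```python
import numpy as np

def gaussian_kernel(x, y, sigma=0.1):
    "K[i,j] = exp(-|x_i - y_j|^2 / (2 sigma^2)), a Gaussian blurring kernel."
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

# Toy 2D point cloud and momenta:
rng = np.random.default_rng(0)
x = rng.random((5, 2))        # positions x_i
p = rng.normal(size=(5, 2))   # momenta   p_i

K = gaussian_kernel(x, x)     # (5, 5) kernel matrix K_xx
v = K @ p                     # v(x_i) = sum_j k(x_i - x_j) p_j
```

Nearby points now receive nearly identical displacements, which is exactly the smoothness that the "free particles" model lacked.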
Regularization. Keep in mind that if the Fourier transform of $k$ is positive, any sampled vector field $v$ may be generated through such linear combinations. To enforce some kind of prior, we thus have to introduce an additional regularization term to our cost function.
From a theoretical point of view, using a positive kernel, the best penalty term is the dual kernel norm
$$ \begin{align} \big\|\sum_{i=1}^N p_i\,\delta_{x_i}\big\|_k^2 ~=~ \big\langle \sum_{i=1}^N p_i\,\delta_{x_i}, k\star \sum_{i=1}^N p_i\,\delta_{x_i} \big\rangle ~=~\sum_{i,j=1}^N \langle p_i,p_j\rangle\,k(x_i-x_j) \end{align} $$
which can be rewritten as a Sobolev-like penalty on the vector field $v$: $$ \begin{align} \big\|\sum_{i=1}^N p_i\,\delta_{x_i}\big\|_k^2 ~=~\iint_{\omega\in\mathbb{R}^2} \frac{|\widehat{v}(\omega)|^2}{\widehat{k}(\omega)}\,\text{d}\omega ~=~ \|v\|_{k}^{*2}. \end{align} $$
As we strive to optimize a cost that reads
$$ \begin{align} \text{Cost}(p) ~~=~~ \lambda_1\,\text{Fidelity}\big(\sum \alpha_i \delta_{x_i+v_i}, \beta\big) ~+~ \lambda_2\,\big\|\sum p_i\,\delta_{x_i}\big\|_k^2, \end{align} $$
the algorithm will find a compromise between accuracy and stretching.
Kernel dot product. In practice, we use a sampled interpolator
$$ \begin{align} \Phi^k_p~:~ x=(x_i)\in\mathbb{R}^{N\times2} ~\mapsto~ x~+~v~=~ x~+~ K_{xx} p, \end{align} $$
where $K$ is the kernel matrix of the $x_i$'s and $p$ is the momentum field encoded as an N-by-2 tensor.
Exercise 1: Implement a (linear) kernel registration module.
run -i nt_solutions/measures_2/exo1
It 1:6.390. It 2:10.851. It 3:2.041. It 4:1.065. It 5:0.707. It 6:0.452. It 7:0.335. It 8:0.287. It 9:0.232. It 10:0.177. It 11:0.151. It 12:0.139. It 13:0.132. It 14:0.126. It 15:0.122. It 16:0.119. It 17:0.115. It 18:0.114. It 19:0.113. It 20:0.113. It 21:0.112. It 22:0.111. It 23:0.109. It 24:0.109. It 25:0.108. It 26:0.108. It 27:0.108. It 28:0.108. It 29:0.107. It 30:0.107. It 31:0.107. It 32:0.107. It 33:0.107. It 34:0.106. It 35:0.106. It 36:0.106. It 37:0.106. It 38:0.105. It 39:0.105. It 40:0.105. It 41:0.105. It 42:0.105. It 43:0.105. It 44:0.105. It 45:0.104. It 46:0.104. It 47:0.105. It 48:0.104. It 49:0.103. It 50:0.103. It 51:0.103. It 52:0.103. It 53:0.102. It 54:0.102. It 55:0.102. It 56:0.102. It 57:0.102. It 58:0.102. It 59:0.102. It 60:0.102. It 61:0.102. It 62:0.102. It 63:0.102. It 64:0.101. It 65:0.101. It 66:0.101. It 67:0.101. It 68:0.101. It 69:0.101. It 70:0.101. It 71:0.101. It 72:0.101. It 73:0.101. It 74:0.101. It 75:0.101. It 76:0.101. It 77:0.100. It 78:0.100. It 79:0.100. It 80:0.100. It 81:0.100. It 82:0.100. It 83:0.100. It 84:0.100. It 85:0.100. It 86:0.100. It 87:0.100. It 88:0.100. It 89:0.100. It 90:0.100. It 91:0.100. It 92:0.100. It 93:0.100. It 94:0.100. It 95:0.100. It 96:0.100. It 97:0.100. It 98:0.100. It 99:0.100. It100:0.100. It101:0.100. It102:0.100. It103:0.100. It104:0.100. b'CONVERGENCE: REL_REDUCTION_OF_F_<=_FACTR*EPSMCH'
Smooth linear deformations have been extensively studied in the 90's, especially with Bookstein's Thin Plate Spline kernel:
$$ \begin{align} \|v\|_{k}^{*2} ~~=~~ \iint_{\mathbb{R}^2} \|\partial^2_{xx} v(x)\|^2~ \text{d}x \end{align} $$
which allows us to retrieve affine deformations for free.
Limits of the linear model. Assuming that the model $\Phi(\alpha)$ is close enough to $\beta$, we may use the matching momentum $p$ to characterize the deformation $\alpha\rightarrow\beta$. Its $k$-norm can be computed through
$$ \begin{align} \big\|\sum_{i=1}^N p_i\,\delta_{x_i}\big\|_k ~~=~~ \sqrt{\langle p, K_{xx}p\rangle}_{\mathbb{R}^{N\times2}} \end{align} $$
and can be used as a "shape distance" $\text{d}_k(\alpha\rightarrow \beta)$ that penalizes tearings. Going further, we may compute the Fréchet mean of a population and perform kernel PCA on the momenta that link a mean shape to the observed samples.
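As a quick sanity check of the formula above, the k-norm is a one-liner (NumPy sketch, with the sum taken over both points and coordinates):

```python
import numpy as np

def k_norm(p, K):
    "||sum_i p_i δ_{x_i}||_k = sqrt(<p, K p>), summed over points and coordinates."
    return float(np.sqrt(np.einsum("id,ij,jd->", p, K, p)))
```

With K equal to the identity matrix, this reduces to the Frobenius norm of the momentum array p.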
Path distance in a space of measures. Unfortunately though, such an approach has a major drawback: The "kernel distance" is not symmetric. As it only makes use of the kernel matrix $K_{xx}$ computed on the source point cloud, it induces a bias which may be detrimental to statistical studies.
An elegant solution to this problem is to understand the kernel cost
$$ \langle p, K_{xx} p\rangle ~=~ \langle v, K_{xx}^{-1} v\rangle $$
as a Riemannian, infinitesimal metric that penalizes small deformations. We may then look for trajectories
$$ \begin{align} \alpha(\cdot)~:~t\in[0,1] ~\mapsto~ \alpha(t)~=~\sum_{i=1}^N \alpha_i\,\delta_{x_i(t)} \end{align} $$
such that $\alpha(0)=\alpha$ and minimize a cost
$$ \begin{align} \text{Cost}(\alpha(\cdot)) ~~=~~ \lambda_1\,\text{Fidelity}(\alpha(1),\beta) ~+~ \lambda_2\,\int_0^1 \underbrace{\big\langle \dot{x}(t), K_{x(t),x(t)}^{-1}\,\dot{x}(t)\big\rangle}_{\|\dot{x}(t)\|_{x(t)}^2}\,\text{d}t \end{align} $$
with a regularization matrix that evolves along the path.
Momentum field. In practice, inverting the smoothing matrix is an ill-posed problem. We may thus parameterize the problem through the momentum field $(p_i(t))\in\mathbb{R}^{N\times 2}$ and write
$$ \begin{align} \text{Cost}(\alpha(\cdot)) ~~=~~ \lambda_1\,\text{Fidelity}(\alpha(1),\beta) ~+~ \lambda_2\,\int_0^1 \big\langle p(t), K_{x(t),x(t)}\,p(t)\big\rangle\,\text{d}t, \end{align} $$
keeping in mind that $x(0)$ is given by $\alpha$ and that $\dot{x}=v=K_{x,x}p$.
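Before attempting the exercise, one possible discretization can be sketched as follows (NumPy, explicit Euler steps, Gaussian kernel with an arbitrary bandwidth; the (T, N, 2) layout chosen for the momenta is an assumption, not the notebook's actual solution):

```python
import numpy as np

def gaussian_kernel(x, y, sigma=0.1):
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def shoot(x0, p_path, dt=0.1):
    """Integrate x'(t) = K_{x,x} p(t) with explicit Euler steps.

    p_path : (T, N, 2) array, one momentum field per timestep.
    Returns the final positions x(1) and the accumulated
    regularization  sum_t dt * <p(t), K_{x(t),x(t)} p(t)>."""
    x, reg = x0.copy(), 0.0
    for p in p_path:
        K = gaussian_kernel(x, x)
        reg += dt * float(np.einsum("id,ij,jd->", p, K, p))
        x = x + dt * (K @ p)   # x_{t+dt} = x_t + dt * v(t)
    return x, reg
```

The fidelity term would then be evaluated on the final positions, and both terms optimized jointly over the whole momentum path.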
Exercise 2: Implement a path registration method, sampling the interval $[0,1]$ with 10 timesteps.
run -i nt_solutions/measures_2/exo2
It 1:6.390. It 2:2.067. It 3:1.256. It 4:0.893. It 5:0.482. It 6:0.326. It 7:0.258. It 8:0.232. It 9:0.175. It 10:0.156. It 11:0.146. It 12:0.139. It 13:0.133. It 14:0.122. It 15:0.122. It 16:0.119. It 17:0.116. It 18:0.114. It 19:0.111. It 20:0.111. It 21:0.110. It 22:0.109. It 23:0.108. It 24:0.108. It 25:0.107. It 26:0.106. It 27:0.105. It 28:0.104. It 29:0.103. It 30:0.103. It 31:0.102. It 32:0.102. It 33:0.102. It 34:0.101. It 35:0.101. It 36:0.100. It 37:0.100. It 38:0.100. It 39:0.100. It 40:0.100. It 41:0.099. It 42:0.099. It 43:0.099. It 44:0.098. It 45:0.098. It 46:0.098. It 47:0.097. It 48:0.097. It 49:0.097. It 50:0.097. It 51:0.097. It 52:0.096. It 53:0.096. It 54:0.096. It 55:0.096. It 56:0.096. It 57:0.096. It 58:0.096. It 59:0.096. It 60:0.096. It 61:0.096. It 62:0.096. It 63:0.096. It 64:0.096. It 65:0.096. It 66:0.095. It 67:0.095. It 68:0.095. It 69:0.095. It 70:0.095. It 71:0.095. It 72:0.095. It 73:0.095. It 74:0.095. It 75:0.095. It 76:0.095. It 77:0.095. It 78:0.095. It 79:0.095. It 80:0.095. It 81:0.095. It 82:0.095. It 83:0.095. It 84:0.095. It 85:0.095. It 86:0.095. It 87:0.095. It 88:0.095. It 89:0.095. It 90:0.095. It 91:0.095. It 92:0.095. It 93:0.095. It 94:0.095. It 95:0.095. It 96:0.094. It 97:0.094. It 98:0.094. It 99:0.094. It100:0.094. It101:0.094. It102:0.094. It103:0.094. It104:0.094. It105:0.094. It106:0.094. It107:0.094. It108:0.094. It109:0.094. It110:0.094. It111:0.094. It112:0.094. It113:0.094. It114:0.094. It115:0.094. It116:0.094. It117:0.094. It118:0.094. It119:0.094. It120:0.094. It121:0.094. b'CONVERGENCE: REL_REDUCTION_OF_F_<=_FACTR*EPSMCH'
This is the fourth post in the Little Pitfalls series where I explore these issues; the previous Little Pitfall post can be found here.
Today we are going to look at a potential pitfall that can bite developers who expect the default behavior of declaring the same method (with same signature) in a subclass to perform an override.
In particular, if the developer came from the C++ world, this may run counter to their expectations. While the C# compiler does a good job of warning you of this event, it is not an error that will break your build, so it’s worth noting and watching out for.
Note: even though I’m just covering methods in this post, properties can also be overridden and hidden with the same potential pitfall.
When you have a base-class you are going to inherit from, there are two basic choices for “replacing” the functionality of a base-class method in the sub-class: hiding and overriding.
Let's look at overriding first because it is the behavior many people tend to expect. To override behavior from a base-class method, the method must be marked virtual or abstract in the base-class. The virtual keyword indicates the method may be overridden in the subclass and allows you to define a default implementation of the method, and the abstract keyword says that the method must be overridden in a concrete subclass and that there is no default implementation of the method.
For example, let’s take the classes below:
1: public class A {
2: // Must be marked virtual (if has body) or abstract (if no body)
3: public virtual void WhoAmI() {
4: Console.WriteLine("I am an A.");
5: }
6: }
7:
8: public class B : A {
9: // must be marked override to override original behavior
10: public override void WhoAmI() {
11: Console.WriteLine("I am a B.");
12: }
13: }
In this example, B.WhoAmI() overrides the implementation of A.WhoAmI(). When overriding, the decision of what method to call is made at runtime, so if you have an object of type B held in a reference of type A, the B.WhoAmI() will still get called because it looks up the actual type of the object the reference refers to at runtime, and not the type of the reference itself, to determine which version of the method to call:
1: public static void Main()
2: {
3: B myB = new B();
4: A myBasA = myB;
5:
6: myB.WhoAmI(); // I am a B
7: myBasA.WhoAmI(); // I am a B
8: }
Note that in the code above, even though reference myBasA is typed as A, the actual object being referred to is of type B, thus B.WhoAmI() will be called at runtime.
Hiding, however, takes a different approach. In hiding what we do is create a new method with the exact same name and signature, but we (should) mark it as new. In this case B’s implementation hides A’s implementation:
2: // Method to hide can be virtual or non-virtual (but not abstract)
3: public void WhoAmI() {
9: // SHOULD use new to explicitly state intention to hide original.
10: public new void WhoAmI() {
So now, with hiding, the method replaces the definition for class B, which sounds the same on the surface, but in the case of hiding, the method to call is determined at compile-time based on the type of the reference itself. This means that the results of main from before are now:
myBasA.WhoAmI(); // I am an A
Notice that even though the object in both cases being referred to is type B, the version of WhoAmI() that is called depends solely on the type of the reference, not on the type of the object being referred to. Thus it will be A.WhoAmI() that will be called here.
Both hiding and overriding are valid and useful ways to replace base class functionality, and when you’d use each depends on your design and situation.
So all that discussion was mainly academic, right? If so, where does the problem lie? Well, the main thing to watch out for is that you aren’t required to use the override or new keyword when “replacing” a method in a subclass. Both of those keywords are purely optional, even if the original method was marked as virtual or abstract.
Consider this code example:
// In class A: method is marked virtual, which signals intent to be overridden
public virtual void WhoAmI() { ... }

// In class B: the person who designed the sub-class didn't use 'new' or 'override' explicitly...
public void WhoAmI() { ... }
So the question is: does B.WhoAmI() hide or override A.WhoAmI()? The base-class implementation was clearly marked virtual. In C++, if the base-class method is marked virtual, then the sub-class method is an override and you need not repeat the virtual keyword.
This is not true in C#, however: the default behavior is to implicitly hide (not override) if no keyword explicitly says otherwise. With the code above, myB.WhoAmI() still prints "I am a B", but myBasA.WhoAmI() now prints "I am an A", because B.WhoAmI() merely hides the virtual base method.
This can trip up C++ developers who don’t know about this default behavior difference between the two languages.
To be fair, C# does give a compiler warning if you are not explicit, asking you politely to explicitly specify either override or new, but it doesn’t require you to do so:
'B.WhoAmI()' hides inherited member 'A.WhoAmI()'. Use the new keyword if hiding was intended.
Note: Remember, don’t ignore your warnings! As they say, a warning is an error waiting to happen. If you really want motivation to clean up warnings in your code, go into your project settings and enable “Treat warnings as errors” on the Build tab.
If your class implements an interface, and you want that interface behavior to be overridable, make sure you mark the base-class methods that implement the interface as virtual or abstract. For example, suppose we have an AbstractMessageConsumer base class and concrete subclasses that implement the IDisposable interface.
Because it’s highly possible if we’re using a factory pattern that we’ll be referring to concrete implementations of a message consumer as an AbstractMessageConsumer, we’d want to implement IDisposable in such a way that the Dispose() will call the appropriate method based on the concrete class and not just the one defined in AbstractMessageConsumer.
So we may do something like this:
public abstract class AbstractMessageConsumer : IDisposable
{
    public virtual void Dispose()
    {
        // dispose of any resources in the abstract base here...
    }
}

public sealed class TopicMessageConsumer : AbstractMessageConsumer
{
    public override void Dispose()
    {
        // dispose of any resources just in the topic message consumer here...

        // then dispose of the base by invoking the base class definition.
        base.Dispose();
    }
}
By defining Dispose() as virtual in AbstractMessageConsumer, we allow the actual definition of Dispose() to be used to be resolved at run-time, thus we will be assured that the correct version will be called.
So, what if we hadn’t done this, and would have instead defined our classes like this:
public abstract class AbstractMessageConsumer : IDisposable
{
    // note not virtual...
    public void Dispose() { ... }
}

public sealed class TopicMessageConsumer : AbstractMessageConsumer
{
    public void Dispose() { ... }
}
Then if we would have tried to call Dispose() on an AbstractMessageConsumer reference to a TopicMessageConsumer, we would have gotten the wrong Dispose()!
AbstractMessageConsumer mc = new TopicMessageConsumer();

// Because TopicMessageConsumer only hides Dispose(), AbstractMessageConsumer's
// Dispose() method is the one called here.
mc.Dispose();
So remember, if you implement an interface and want that implemented behavior to be overridable in a sub-class, make sure you mark your interface implementation methods (and/or properties) as virtual or abstract and then correctly override them in a sub-class.
Note: If you have no intention of allowing your class to be inherited from, consider making the class sealed to prevent accidental hiding problems from occurring.
Hiding has some very handy uses and is a valuable part of the C# toolbox. That said, it can be confusing for a developer who doesn’t know that the implicit behavior in C# is to hide and not to override.
To help make sure you always get the correct behavior you want, you can:
- Mark base-class methods you intend to be overridden as virtual or abstract, and explicitly use override in the subclass.
- Explicitly use new when hiding really is what you intend.
- Treat warnings as errors so an implicit hide never slips through.
- Mark classes that are not designed for inheritance as sealed.
Posted Thursday, July 21, 2011.
Source: http://geekswithblogs.net/BlackRabbitCoder/archive/2011/07/21/c.net-little-pitfalls-the-default-is-to-hide-not-override.aspx
Testing with meta.c (attached)
Test program created numerous files using various file open() flags and truncated each to 10 MiB. The test then wrote a 512-byte block beginning at offset 0 to each file. After a 10-second delay, the next 512-byte block was written, and so forth.
During the testing, both regular file write() and memcpy() to an mmapped file were tested, as well as combinations of fsync(), fdatasync(), and no file sync.
While the test program was running (after 10+ hours), customer initiated their snapshot + backup process. The metadata of the various files (in particular, the 'mtime') was then compared.
The result of the testing indicated that the file open() flags had no bearing on whether mtime was updated. Both fsync() and fdatasync() were confirmed to behave as expected (fsync forced both data write-out and 'mtime' update, while fdatasync() only caused write-out of the data).
When the file was written to through an mmap, fdatasync() was called, and fsync() was not called, the file's 'mtime' was never updated. In these cases, the mtime remained unchanged since the file creation time. File contents appeared to be correct in all cases.
In comparison, when the test program ran on a local ext4 filesystem (with the default data=ordered), the combination of mmap and not calling either fsync() or fdatasync() resulted in files where the 'mtime' would periodically lag behind the write time (and the mtime of the other files), but never by more than 20 seconds. (Note: after running 'sync', all metadata was written to disk, and the maximum mtime difference was less than 10 milliseconds).
It appears that gfs does not update the on-disk mtime in file writes when: an mmap() is used, fdatasync() is called, and fsync() is not called
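The write pattern at issue (a store through a shared mapping, then msync rather than fsync) can be sketched in standalone Python; this is a minimal illustration, not the attached meta.c, and the file path is made up for the demo. Python's mmap.flush() issues msync() with MS_SYNC on the mapped region.

```python
import mmap
import os
import tempfile

# Create a small file and write to it through a shared mapping,
# mimicking the mmap-based writes from the test program.
path = os.path.join(tempfile.mkdtemp(), "mmap_demo.bin")
with open(path, "w+b") as f:
    f.truncate(4096)
    before = os.stat(path).st_mtime
    mm = mmap.mmap(f.fileno(), 4096, flags=mmap.MAP_SHARED)
    mm[0:5] = b"hello"  # store via page fault, not write()
    mm.flush()          # msync(MS_SYNC): the case where mtime should be synced
    mm.close()

after = os.stat(path).st_mtime
# The data itself is visible through a normal read either way; the bug
# was only about the on-disk mtime, which on gfs stayed at creation time.
with open(path, "rb") as fh:
    assert fh.read(5) == b"hello"
assert after >= before
```

On a local filesystem both assertions pass; on the affected gfs, `after` would still equal the file's creation mtime on disk even though the data was written out.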
Created attachment 864329 [details]
test program used to demonstrate the issue
Well I'm not sure exactly what the problem is that you are reporting here. Why is it not expected that mtime would update during an mmap write if fsync has not been called?
The man page for fdatasync says: "... For example, changes to st_atime or st_mtime (respectively, time of last access and time of last modification; see stat(2)) do not require flushing because they are not necessary for a subsequent data read to be handled correctly. On the other hand, a change to the file size (st_size, as made by say ftruncate(2)), would require a metadata flush."
I can't see the attachment in comment #1, as it seems to be corrupt or marked as the wrong type of file.
Hi Steve,
*** Bug 1066178 has been marked as a duplicate of this bug. ***
*** Bug 1066179 has been marked as a duplicate of this bug. ***
Thanks for the link in comment #4; however, as far as I can tell the behaviour is as expected per the man page information for fdatasync(), unless I'm missing something here?
The in-memory mtime is in fact updated. The fact that the in-memory value is never propagated to disk creates a lot of confusion in the case of SAN-level snapshots taken after the last close(2). I think DRBD will be affected in the same way.
I think it would be rather nice to have `gfs_tool freeze' synchronize the in-memory state with the on-disk state. This single change would make me happy.
Ok, so the issue is not fdatasync then; it's not syncing before a freeze. I'll update the bug description accordingly.
Please do not close this bug. It should not be closed as WONTFIX
Actually, I don't see any evidence that the mtime is getting updated at all for mmap writes in your test program. The file keeps its creation mtime no matter how long you leave it. Unmounting and remounting doesn't update it.
I'm going to do some more digging. Apparently the Single Unix Specification guarantee is that mtime will be updated at the very least by the next call to msync(), or if no call to msync() happens, by munmap(). Presumably, even if there isn't an explicit call to munmap(), the mtime should be updated when the file is closed.
Created attachment 878304 [details]
patch to make gfs write out mtime on mmap syncs
This patch modifies gfs_write_inode to make it actually write out the modified vfs inode attributes. This will cause mtime to get updated when the mmap file changes are synced back to disk.
This bit:
if (ip && sync)
gfs_log_flush_glock(ip->i_gl, 0);
seems odd, bearing in mind that ip is dereferenced higher up the function. However that seems to be in the existing code, but maybe worth checking for certain that ip can never be NULL here.
Still in the dev phase for 5.11, so let's ask for blocker for this, as it is something that we ought to ensure is fixed.
Created attachment 878667 [details]
Version of the patch that is posted to cluster-devel
This version is just like the former, without the debug printk, and with a sensible check that ip != NULL. Now, I can't see where ip could be NULL when calling this function, but the other gfs_super_ops functions also do this check, so I assume that there was at least a worry that this could happen, and I left the check in. 5.11 seems like a bad place to clean up this sort of thing.
Verified against kmod-gfs-0.1.34-20.el5 with the following simple python script:
from mmap import *
from time import sleep
from random import randint
maxsize = 2**25
f = open("mmapfile", "r+b")
mm = mmap(f.fileno(), maxsize, MAP_SHARED)
token = "hello"
while True:
    o = randint(0, maxsize - len(token))
    mm[o:o+len(token)] = token
    sleep(5)
Running `watch stat mmapfile` on each node shows that the mtime is updated across the cluster while the above script is running.
With kmod-gfs-0.1.34-18.el5, the mtime on other nodes never updates.
After I stopped my test case to verify this bug, my system panicked with the following assert.
GFS: fsid=nate:nate0.1: fatal: assertion "gfs_glock_is_locked_by_me(gl) && gfs_glock_is_held_excl(gl)" failed
GFS: fsid=nate:nate0.1: function = gfs_trans_add_gl
GFS: fsid=nate:nate0.1: file = /builddir/build/BUILD/gfs-kmod-0.1.34/_kmod_build_/src/gfs/trans.c, line = 234
GFS: fsid=nate:nate0.1: time = 1400873905
GFS: fsid=nate:nate0.1: about to withdraw from the cluster
----------- [cut here ] --------- [please bite here ] ---------
Kernel BUG at ...ld/BUILD/gfs-kmod-0.1.34/_kmod_build_/src/gfs/lm.c:112
invalid opcode: 0000 [1] SMP
last sysfs file: /fs/gfs/nate:nate0/lock_module/recover_done
CPU 0
Modules linked in: gfs(U) dm_log_clustered(U) lock_nolock lock_dlm gfs2 dlm gnbd(U) configfs lpfc scsi_transport_fc sg be2iscsi ib_iser rdma_cm ib_cm iw_cm ib_sa ib_mad ib_core ib_addr iscsi_tcp bnx2i cnic cxgb3i iptable_filter ip_tables x_tables sctp autofs4 hidp rfcomm l2cap bluetooth lockd sunrpc ipv6 xfrm_nalgo crypto_api uio libcxgbi floppy i2c_piix4 i2c_core pcspkr virtio_balloon virtio_net serio_raw tpm_tis tpm tpm_bios dm_raid45 dm_message dm_region_hash dm_mem_cache dm_snapshot dm_zero dm_mirror dm_log dm_mod ahci ata_piix libata sd_mod scsi_mod virtio_blk virtio_pci virtio_ring virtio ext3 jbd uhci_hcd ohci_hcd ehci_hcd
Pid: 30450, comm: python Tainted: G -------------------- 2.6.18-389.el5 #1
RIP: 0010:[<ffffffff8882f06f>] [<ffffffff8882f06f>] :gfs:gfs_lm_withdraw+0x97/0x103
RSP: 0018:ffff810030e65b88 EFLAGS: 00010202
RAX: 000000000000003e RBX: ffffc200001ce000 RCX: 0000000000000286
RDX: 00000000ffffffff RSI: 0000000000000000 RDI: ffffffff803270dc
RBP: ffffc200002069d4 R08: 000000000000000d R09: 00000000ffffffff
R10: 0000000000000000 R11: 000000000000000a R12: ffff81003b31fdc8
R13: ffffc200001ce000 R14: 0000000000000001 R15: ffff810030e65e98
FS: 00002b4a72e94170(0000) GS:ffffffff80436000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
CR2: 00000000152c3000 CR3: 0000000037549000 CR4: 00000000000006e0
Process python (pid: 30450, threadinfo ffff810030e64000, task ffff810039f657b0)
Stack: 0000003000000030 ffff810030e65ca0 ffff810030e65ba8 ffff810030e65c38
0000000300000000 ffffc20000206870 ffffc200002069d4 ffffffff8884e6f0
ffffc200002069d4 ffffffff88848c60 ffff810030e65cb8 000000000001725f
Call Trace:
[<ffffffff8002621e>] find_or_create_page+0x22/0x72
[<ffffffff8881b753>] :gfs:getbuf+0x172/0x181
[<ffffffff8881ba54>] :gfs:gfs_dreread+0x72/0xc6
[<ffffffff8884725c>] :gfs:gfs_assert_withdraw_i+0x30/0x3c
[<ffffffff88845e10>] :gfs:gfs_trans_add_gl+0x82/0xbe
[<ffffffff88845ef3>] :gfs:gfs_trans_add_bh+0xa7/0xd9
[<ffffffff8883d8ba>] :gfs:gfs_write_inode+0x10c/0x166
[<ffffffff80063117>] wait_for_completion+0x1f/0xa2
[<ffffffff8002ff43>] __writeback_single_inode+0x1dd/0x31c
[<ffffffff88825d51>] :gfs:gfs_glock_nq+0x3aa/0x3ea
[<ffffffff800f8ee8>] sync_inode+0x24/0x33
[<ffffffff88839f32>] :gfs:gfs_fsync+0x88/0xba
[<ffffffff8005afe4>] do_writepages+0x29/0x2f
[<ffffffff8005055a>] do_fsync+0x52/0xa4
[<ffffffff800d468e>] sys_msync+0xff/0x180
[<ffffffff8005d29e>] tracesys+0xd5/0xdf
Code: 0f 0b 68 3b ad 84 88 c2 70 00 eb fe 48 89 ee 48 c7 c7 7b ad
RIP [<ffffffff8882f06f>] :gfs:gfs_lm_withdraw+0x97/0x103
RSP <ffff810030e65b88>
<0>Kernel panic - not syncing: Fatal exception
Created attachment 900627 [details]
New version to fix the QA issues.
This patch is similar to my earlier version, but includes more checks in gfs_write_inode to make sure that it can start a transaction. It now makes sure that there isn't already a transaction in progress, and that if it already has a lock, that the lock is exclusive.
I also noticed that the original fix itself doesn't always work. The issue is that after the last holder of the inode glock is dropped, the vfs inode timestamps are overwritten by the gfs inode ones. During mmap, the mtime is updated in the vfs inode during the page fault. If a call to stat the inode, or a call to write out the vfs inode, doesn't attempt to lock it while the mmap call is still holding its glock, the updated vfs timestamp will get overwritten and lost. To solve this, I've changed gfs_inode_attr_in() to not overwrite the vfs timestamps unless the gfs ones are newer. The timestamps still always get synced to the gfs inode when the vfs inode is first created. I have verified that it is still possible to manually reset the inode mtime to an earlier time using the touch command, and I can't think of any other possible issue with doing this. However, I'll bug Steve about this tomorrow to see if he can think of any problem with it.
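The "don't overwrite unless newer" rule described above can be sketched in Python (hypothetical function and parameter names; the real logic is C inside gfs_inode_attr_in()):

```python
def merge_mtime(vfs_mtime, gfs_mtime, first_load):
    """Decide which mtime the in-memory (vfs) inode should keep.

    Mirrors the rule described in the patch: on first load the on-disk
    (gfs) timestamp always wins; afterwards it only wins if it is newer,
    so an mtime bumped during an mmap page fault is not lost when the
    on-disk attributes are re-read.
    """
    if first_load or gfs_mtime > vfs_mtime:
        return gfs_mtime
    return vfs_mtime

# An mtime bumped by a page fault (vfs=200) survives a re-read of the
# stale on-disk inode (gfs=100):
assert merge_mtime(200, 100, first_load=False) == 200
# ...but on first load the on-disk value is taken unconditionally:
assert merge_mtime(200, 100, first_load=True) == 100
```

Manually resetting the mtime to an earlier time (e.g. with touch) still works because that path goes through an explicit attribute set, not this merge.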
Applied the above patch.
Created attachment 902772 [details]
Yet another version. This one should do no harm, but it does much less good.
The problem with my previous patches is that they do incorrect lock ordering. In RHEL5 gfs, the inode glock must be grabbed before the inode is locked (it's the opposite in RHEL5 gfs2), and __sync_single_inode() locks the inode before it calls gfs_write_inode(). This means that gfs_write_inode() can't lock the inode glock. Sometimes __sync_single_inode() is called when the process has already locked the inode glock exclusively. In these cases gfs_write_inode is able to write out the inode.
Unfortunately, limiting gfs_write_inode() to only writing out the inode structure when it's called with the inode glock already held doesn't get the mtime updated when munmap is called, or when an mmapped file is closed. The only condition that will trigger the mtime getting updated on disk is when msync is called on the mmapped area with the MS_SYNC flag.
Right now, I can't see a way to do better than this. For this to work, we'd need to be able to write out the inode in a transaction from some other gfs function that is called at least as often as msync or munmap on an mmapped area, and that hasn't already locked the inode. I don't see any function that would work for this. It's possible that this is the best we can do in gfs.
Above patch applied. This will only fix the issue when msync is called with MS_SYNC.
I can confirm that mtime is updated on disk (and visible on other nodes) after msync is called. Otherwise, mtime is not updated when munmap is called, the file is closed, or fsfreeze is invoked.
Used: kmod-gfs-0.1.34-22.el5
Move to VERIFIED after TPS runs are complete.
Source: https://partner-bugzilla.redhat.com/show_bug.cgi?id=1066181
Output text
Hello everyone:
I've been trying to find a way to use the output text as drawn text, but I'm not really winning the battle, so I ended up copying the text. Here is my code:
#Set today's time
day = 29
month = 8
year = 2019
date = '%02d/%02d/%04d' % (day, month, year)
print("Today is " + str(date))

#Birthday function
def birthDay(name, bDay, bMonth, bYear):
    bDate = '%02d/%02d/%04d' % (bDay, bMonth, bYear)
    if month >= bMonth and day >= bDay:
        age = year - bYear
        print("I am " + name + " and I have " + str(age) + " years." + " My birthday is " + str(bDate))
    else:
        age2 = year - bYear - 1
        print("I am " + name + " and I have " + str(age2) + " years." + " My birthday is " + str(bDate))

#Examples
size(1000, 1000)
birthDay("L", 4, 9, 1992)
birthDay("P", 21, 1, 1993)
birthDay("E", 15, 9, 1971)
birthDay("J", 22, 3, 1961)

txt = """Today is 29/08/2019
I am L and I have 26 years. My birthday is 04/09/1992
I am P and I have 26 years. My birthday is 21/01/1993
I am E and I have 47 years. My birthday is 15/09/1971
I am J and I have 58 years. My birthday is 22/03/1961"""

#Image
x, y, w, h = 100, 100, 800, 800
fill(0.5, 1, 1)
rect(x, y, w, h)
font('.SFNSRounded-Black', 130)
fontSize(50)
fontVariations(GRAD=500, wght=1000)
stroke(1, 0, 0)
strokeWidth(3)
overflow = textBox(txt, (x, y, w, h), align="center")
print(overflow)
saveImage('~/Desktop/ages.png')
Does anyone know how to make it?
Thanks.
you can use Python’s string formatting syntax to create strings with variable parts:
name = 'John'
age = 17

# old syntax, still works
txt1 = "I am %s and I have %s years." % (name, age)

# new in py3: f-strings
txt2 = f"I am {name} and I have {age} years."
and here’s how you can repeat it for a list of names and collect the output into a single text:
persons = [
    ('Michael', 34),
    ('Maria', 15),
    ('Daniel', 56),
]

txt = ''
for name, age in persons:
    txt += f"I am {name} and I have {age} years.\n"

fontSize(56)
textBox(txt, (0, 0, width(), height()))
hope this helps!
Thanks again @gferreira it worked better that way
Now I'm trying to make a list of random objects and names, but I don't want them to be repeated. Do you know if there's a similar way to match the pairs without repetition?
Here’s the code:
#Canvas Size
w = 1000
h = 1000

def whiteCanvas():
    newPage(w, h)
    fill(1)
    rect(0, 0, w, h)

def blackCanvas():
    newPage(w, h)
    fill(0)
    rect(0, 0, w, h)

def objects(tCol):
    objLuck = randint(0, 3)
    objTxt = "My object is: "
    if objLuck == 1:
        txt = objTxt + "Fish"
    elif objLuck == 2:
        txt = objTxt + "Bullet"
    else:
        txt = objTxt + "Glass"
    fontSize(56)
    fill(tCol)
    text(txt, (width()/2, 500), align='center')

def students(tCol):
    stdLuck = randint(0, 3)
    stdTxt = "I am "
    if stdLuck == 1:
        txt = stdTxt + "Pedro"
    elif stdLuck == 2:
        txt = stdTxt + "Caro"
    else:
        txt = stdTxt + "Raúl"
    fontSize(56)
    fill(tCol)
    text(txt, (width()/2, 400), align='center')

def pairs():
    Luck = randint(0, 1)
    if Luck == 0:
        whiteCanvas()
        students(0)
        objects(0)
    else:
        blackCanvas()
        students(1)
        objects(1)

for i in range(3):
    pairs()
Thank you very much
you could shuffle each list separately, then combine them using
zip():
from random import shuffle

L1 = ['Michael', 'John', 'Graham']
L2 = ['spam', 'bacon', 'eggs']

shuffle(L1)
shuffle(L2)

L3 = list(zip(L1, L2))
print(L3)
there are other ways to do it…
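One such alternative (illustrative variable names, not from the thread): shuffle only one of the two lists using random.sample, which returns a new shuffled copy, and pair the items positionally with zip.

```python
import random

names = ['Michael', 'John', 'Graham']
things = ['spam', 'bacon', 'eggs']

# random.sample with k=len(...) returns a shuffled copy, leaving the
# original list untouched; zip then pairs items positionally, so every
# name and every thing is used exactly once, with no repeats.
pairs = list(zip(names, random.sample(things, k=len(things))))

assert sorted(n for n, _ in pairs) == sorted(names)
assert sorted(t for _, t in pairs) == sorted(things)
```

Shuffling just one list is enough for random pairing, since only the relative order of the two lists matters.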
This is totally better @gferreira, you're really kind, thank you. I've tried the following code and it's exactly what I needed. I was just wondering if there's a way to do it with less code; I've noticed that you get better results in fewer lines, and that's really nice. Thanks again.
#Canvas Size
w = 1000
h = 1000
yPos = 500

def whiteCanvas():
    newPage(w, h)
    fill(1)
    rect(0, 0, w, h)

def blackCanvas():
    newPage(w, h)
    fill(0)
    rect(0, 0, w, h)

from random import shuffle

L1 = ['Pedro', 'Jonás', 'Raúl', 'Celestina', 'Constanza']
L2 = ['Ojos', 'Latas', 'Lombrices', 'Cigarro', 'Guitarra']
shuffle(L1)
shuffle(L2)
L3 = list(zip(L1, L2))
print(L3)

def bgCol():
    Luck = randint(0, 1)
    if Luck == 0:
        whiteCanvas()
        fontSize(56)
        fill(0)
    else:
        blackCanvas()
        fontSize(56)
        fill(1)

def pairs():
    bgCol()
    txt = "Pair: " + str(L3[0])
    text(txt, (width()/2, yPos), align='center')
    bgCol()
    txt = "Pair: " + str(L3[1])
    text(txt, (width()/2, yPos), align='center')
    bgCol()
    txt = "Pair: " + str(L3[2])
    text(txt, (width()/2, yPos), align='center')
    bgCol()
    txt = "Pair: " + str(L3[3])
    text(txt, (width()/2, yPos), align='center')
    bgCol()
    txt = "Pair: " + str(L3[4])
    text(txt, (width()/2, yPos), align='center')

for i in range(1):
    pairs()

saveImage('~/Desktop/ruleta.png', multipage=True)
@eduairet you’re welcome, happy to help.
here is a more concise and more pythonic version of your code:
from random import shuffle

w, h = 1000, 1000

names = ['Pedro', 'Jonás', 'Raúl', 'Celestina', 'Constanza']
things = ['Ojos', 'Latas', 'Lombrices', 'Cigarro', 'Guitarra']

shuffle(names)
shuffle(things)
pairs = list(zip(names, things))

def makePage(pair):
    # flip a coin to choose colors
    coin = randint(0, 1)
    if coin:
        color1 = 0,
        color2 = 1,
    else:
        color1 = 1,
        color2 = 0,

    # draw page background
    newPage(w, h)
    fill(*color1)
    rect(0, 0, w, h)

    # draw text
    name, word = pair
    fontSize(56)
    fill(*color2)
    text(f"Pair: {name} & {word}", (w/2, h/2), align='center')

# make all pages
for pair in pairs:
    makePage(pair)

# save pdf to disk
saveImage('~/Desktop/ruleta.png', multipage=True)
if you don’t understand something, please just ask! cheers
@gferreira
probably not pythonic
but i was surprised this works
(if you keep things black and white)
coin = randint(0, 1)
color1 = coin,
color2 = not coin,
@gferreira thanks, it worked, I'm coming with some new exercises. Cheers.
Source: https://forum.drawbot.com/topic/188/output-text/5
The source code for the application is posted on CodePlex at the following link. I encourage the community to grab the code and contribute to it.
Source:
The sample application uses mathematical calculations to determine how to play each chord. As any guitar player knows, there are many ways to play each chord, and using the mathematical calculations may not yield the most comfortable or common finger positioning. I added some adjustments to the calculations to try to mitigate this, but of course nothing is as good as purely pre-determining the finger positions for each chord.
In a future version of this tool I will forgo the calculations and display multiple chord variations for all of the chords. But for the purpose of this article, it was more fun to show that the calculations can be used.
The application uses a CoolMenu control found on CodePlex in the Silverlight Contrib project (which is a great set of community contributions). The sample application also takes advantage of several key ingredients of a Silverlight application, including the following, amongst others:
- Data binding
- Media elements
- Style resources
- Control Templates
- File resources
- Transformations
- Visual states
- Dynamically generated controls
Demo Time
I’ll start by demonstrating the application pointing out what it does and how it can be interacted with by a user. Along the way I’ll make a reference to the different elements used to create the application, which will be explained following the demo.
Fret Board
Figure 1 – Guitar Chord Calculator
The sample application is shown in Figure 1 displaying the G major chord. The sample has a fret board on the top that displays the chord fingering. The blue circles indicate where a finger should hold a string. The note that will be played is noted inside of the blue circle. If the blue circle is shown on the nut at the far left, this indicates that the string is open.
Visual Aspects and Dynamic Creation
The blue circles on the fret board are actually button controls whose control templates have been replaced. Instead of displaying a standard button, the control templates contain an ellipse with a gradient background and text content. The buttons are created in .NET code and a style resource is applied to each button before being placed on the appropriate spot on the fret board. Each button also has a visual state where the button grows slightly during a mouseover event and shrinks slightly when the button is clicked. Every audio file is added as a resource to the project. Some transformations are used to skew the viewing angle of the fret board.
CoolMenu and Data Binding
Below the fret board, a list of the notes is displayed in a CoolMenu control from the Silverlight Contrib library on CodePlex (similar to a fish eye control). There is also a list of the types of chords (in this case Major or Minor) displayed in another CoolMenu control. Each of these controls is data bound to a List of notes and of chord types, respectively. The user can select the note and the chord type using the 2 CoolMenu controls. When the selection has been made, data binding is used to display the name of the chord at the bottom of the screen and the chord calculation logic executes to build the appropriate chord on the fret board. Data binding is also used to bind the appropriate chord’s audio file to a media element so the chord can be played.
Audio
There is an accompanying audio file for each of the 6 strings at each of the 5 frets plus the open string, for a total of 36 individual notes. If the user clicks a blue circle button on the fret board, the note for that specific string and fret is played through a MediaElement control.
When the user selects a note and chord type, the chord's corresponding audio file is data bound to a MediaElement control. When the user clicks the musical note button, the audio file is played. There is an audio file for each of the note and chord type combinations, for a total of 24 additional audio files.
When added together, the combined size of the audio files can become large. This increases the size of the Silverlight XAP file and can slow down the load time of the Silverlight application. One way to combat this is to use a tool to compress the audio files. The files in this sample project are compressed; however, keep in mind that this compression often causes quality loss in the audio files.
Transforms and User Controls
As you can see, there are many aspects that work in concert to make the application. Now that I’ve shown what that application does and pointed out its parts, I’ll dive into how each part works within the application. The fret board is a Grid panel (named fretBoardGrid) with 6 rows and 5 columns. The fretBoardGrid is skewed slightly to give it a bit more visual perspective using the following transformation. This is most easily set by selecting the Grid in Expression Blend and setting the SkewTransform property’s X angle to -5, as shown in Figure 2.
Figure 2 – Skew Transform for the Fret Board
The guitar strings are represented by a user control named GuitarString and the fret bars are represented by a user control named Fret. Both of these user controls are contained in the controls folder in the Silverlight project. User controls are ideal for situations where you want to re-use the same control. This works well for the guitar string and fret bars since they are each used multiple times. The GuitarString control uses a Path to create the appearance of a guitar string. The Path has a stroke thickness of 5 and is colored, as shown below.
An instance of the GuitarString control is placed in each of the rows of the fretBoardGrid. Each string is the same width, but on a guitar each string gets progressively thinner from the low E to the high E string. One way to make the strings get smaller is to use a scale transform on each string. For example, the following code shows a ScaleTransform that changes the Y scale of the guitar string to 10% of its initial state. The other strings all use slightly less of a scale to create the appearance of the strings.
The fret bars are represented by the Fret user control. The Fret control contains a vertical gray rectangle, represented by a path. (A Rectangle control could also have been used, if desired. Either control is fine in this case.)
The same principles that are used to display the GuitarString controls are also applied to the Fret controls. The first column of the fretBoardGrid needs to show 2 Fret controls. The first fret will be a little thicker than the others, as it represents the nut of the guitar. The thickness is adjusted using a ScaleTransform to double its size (see the code sample below). The second Fret control will be aligned to the far right of the column to represent the first fret on the guitar. The subsequent Fret controls are placed in the rest of the columns, 1 per column, aligned to the right.
Example 1 – Laying the frets on the neck of the guitar
All of this, plus the coloring and shading, gives the appearance of a guitar fret board, as shown in Figure 3.
Figure 3 – Final appearance of the fret board
Data Bound Menus
Two CoolMenu controls are displayed below the fret board to allow the user to select the chord to display and play. The CoolMenu is a control included as part of the Silverlight Contrib on CodePlex. It is based on the ItemsControl, which is ideal for binding and displaying a list of items. The XAML for the first CoolMenu (named noteMenu) is shown in Example 2. This control must be downloaded from CodePlex and referenced by the Silverlight project before it can be added to a user control. This will add the namespace reference in the UserControl tag in the XAML, as follows:
Figure 4 – The Note and Chord menus
The noteMenu CoolMenu is simple to set up. The code shown in Example 2 shows some basic layout settings for the menu, and that the ItemsSource property is set to use data binding via the DataContext. Since this control inherits from the ItemsControl, it can also use a DataTemplate to define the items in the CoolMenu. In this case each item is represented simply by an Image, which gets its source from data binding.
Example 2 – Designing the CoolMenu controls
The noteMenu is bound to a List<Note>. The Note class represents a single note and includes the name of the note and the image file associated with it. The NoteData class (shown in Example 3) stores a List<Note>: 1 for each of the 12 notes. Each note image is stored in an images folder in the Silverlight project and has its Build Action set to Resource, as shown in Figure 5. Once the NoteData class is created, it can be bound to the noteMenu as shown below:
This list of data is static, so there is no need to use the INotifyPropertyChanged interface or the INotifyCollectionChanged interface. These interfaces are ideal when the contents of a list or the properties of an object might change, and the changes need to be displayed in the UI. For the noteMenu and the chordTypeMenu this is not required, since these lists do not change.
Example 3 – Defining the List<Note> to bind to the noteMenu
Figure 5 – Making the image file a resource
The chordTypeMenu follows the same design and pattern as the noteMenu. It uses a CoolMenu control and binds to a List<ChordType> through the ChordTypeData class. You can refer to the full source code for the exact implementation, as it is set up the same way as the noteMenu.
Chord Bindings
When a user selects a note or a chord type, a corresponding event handler fires for each of the CoolMenu controls (as shown in Example 4). The selected item (note or chord type) is retrieved from the menu and set on the field _currentChord, which is an instance of the ChordAudio class. The _currentChord contains a property for the Note, the ChordType, and the audio file location for the currently selected chord. The ChordAudio class implements the INotifyPropertyChanged interface and fires the PropertyChanged event whenever any of the properties are updated. This tells the UI to get fresh values from the class instance whenever the audio file, note or chord type changes. The ShowChord method performs the calculations that determine which notes should be played to achieve the chord.
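The article does not reproduce the ChordAudio listing here; a minimal sketch of how such a class might implement INotifyPropertyChanged (property names follow the prose, but the types and details are assumptions) is:

```csharp
using System.ComponentModel;

// Sketch: a ChordAudio-style class whose property setters raise
// PropertyChanged so bound UI elements re-read the values.
public class ChordAudio : INotifyPropertyChanged
{
    private string _note;
    private string _chordType;
    private string _audioFile;

    public event PropertyChangedEventHandler PropertyChanged;

    public string Note
    {
        get { return _note; }
        set { _note = value; OnPropertyChanged("Note"); }
    }

    public string ChordType
    {
        get { return _chordType; }
        set { _chordType = value; OnPropertyChanged("ChordType"); }
    }

    public string AudioFile
    {
        get { return _audioFile; }
        set { _audioFile = value; OnPropertyChanged("AudioFile"); }
    }

    private void OnPropertyChanged(string name)
    {
        var handler = PropertyChanged;
        if (handler != null)
            handler(this, new PropertyChangedEventArgs(name));
    }
}
```

Because each setter raises PropertyChanged, any binding to _currentChord refreshes as soon as the user picks a different note or chord type.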
Example 4 – Setting the Chord’s Binding in Event Handlers
The _currentChord field is bound to the SelectedChord StackPanel which displays the selected chord’s name and binds the audio file to the chordPlayer MediaElement. Using data binding allows the UI to update itself and be prepared to play the currently selected chord immediately after the user selects a different note or chord type.
Example 5 – Binding the Selected Chord
Audio Media Elements
The chordPlayer MediaElement shown in Example 5 is a control with no visible interface. It will play the audio file based on its binding, which in this case is one of the MP3s associated with each of the chords. Each audio file is stored in the Silverlight project in the sounds folder. Like the note images, the audio files also have their Build Action set to Resource, so they can be used as resources in the project. The chordPlayer has its Volume property set to 1 (though this could be made adjustable by the user, perhaps with a slider control). The Source of the chordPlayer is set to use data binding to get the audio file's location. The AutoPlay property is set to an initial state of False, so the audio file will not play automatically when it is bound to the MediaElement. The application should not play the chord automatically; instead it should play only when the user clicks the btnPlayChord button (the square blue musical note).
The event handler for the btnPlayChord button first checks whether the audio file is set. It then turns on AutoPlay, sets the position of the audio file to the beginning so it will play from the start, and then plays the audio. The AutoPlay property is then set back to false, so that when the bindings change later the audio file will not play automatically. The code for this is shown below.
Chord Calculations
The last aspect of the sample demonstrates how to dynamically generate the button controls on the fret board to show the user where to put their fingers to create a specific chord. The logic for displaying the chord is controlled by the ShowChord method. First the logic must determine where to put the “finger spots” (the places the user must put their fingers).
As mentioned before, this sample calculates the finger spots instead of using predetermined locations. The logic for this calculation can be seen in the source code and is based on standard music theory, which is outside the scope of this article.
The ShowChord method fills a chart (_scaleInfoChart) containing the list of notes in a scale starting with the note for the chord. For example, if the G major chord is selected, the note chart would start with G and continue with the rest of the scale like this: G G# A A# B C C# D D# E F F#.
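The scale-chart idea can be sketched as a rotation of the chromatic scale so that it begins at the chord's root. This is an illustrative reconstruction (the article's _scaleInfoChart implementation is not shown, so the helper name and shape here are assumptions):

```csharp
using System;
using System.Collections.Generic;

// Sketch: build the chromatic "scale chart" starting from the chord's
// root note, as the ShowChord method is described as doing.
public static class ScaleChart
{
    private static readonly string[] Chromatic =
        { "A", "A#", "B", "C", "C#", "D", "D#", "E", "F", "F#", "G", "G#" };

    public static List<string> Build(string root)
    {
        int start = Array.IndexOf(Chromatic, root);
        if (start < 0)
            throw new ArgumentException("Unknown note: " + root);

        var chart = new List<string>();
        // Rotate the chromatic scale so it begins at the root.
        for (int i = 0; i < Chromatic.Length; i++)
            chart.Add(Chromatic[(start + i) % Chromatic.Length]);
        return chart;
    }
}
```

For the G major example above, Build("G") yields the chart G G# A A# B C C# D D# E F F#, matching the article's description.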
Once the scale chart has been created, the GetChordNotes method determines which notes are needed to create the selected chord. It uses static data based on the scale to determine which notes in the scale are needed (again based on music theory). The notes for the chord are returned in a list. Then, in the FillNoteMappingList method (shown in Example 6), the strings and frets of the guitar are searched to find the notes required to produce the chord.
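The chord-tone selection can be illustrated with the major triad, which by standard music theory is the root plus the notes 4 and 7 semitones above it. The article's GetChordNotes uses static per-chord-type data; the method below is a hypothetical stand-in showing the same idea for one chord type:

```csharp
using System.Collections.Generic;

// Sketch: select chord tones from the 12-entry scale chart by interval.
// A major triad is the root (index 0), major third (index 4) and
// perfect fifth (index 7).
public static class ChordNotes
{
    public static List<string> MajorTriad(List<string> scaleChart)
    {
        return new List<string> { scaleChart[0], scaleChart[4], scaleChart[7] };
    }
}
```

Given the G-rooted chart from the article (G G# A A# B C C# D D# E F F#), this selects G, B and D, the notes of a G major chord.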
Example 6 – Mapping the Notes to the Fret Board
First, the FillNoteMappingList method loops through each string of the guitar. It grabs the information about each guitar string, such as the string number and the notes for that string when played open and at each fret. This information is used to see if the notes on that string match any of the notes we are looking for in the chord. The method then loops through each fret (starting with the open string) in search of matching notes. When a match is found, an instance of the NoteMapping class is created, and the string number, fret number, note and audio file for that note are stored in the NoteMapping instance. This process continues for each string on the guitar.
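The string-and-fret search can be sketched as follows. This is not the article's FillNoteMappingList code; the NoteMapping shape, the helper name, and the audio-file handling are simplifications and assumptions for illustration:

```csharp
using System;
using System.Collections.Generic;

// Sketch: walk each string's frets and record where a chord tone occurs.
public class NoteMapping
{
    public int StringNumber;   // 1-based string index
    public int Fret;           // 0 = open string
    public string NoteName;
}

public static class FretSearch
{
    private static readonly string[] Chromatic =
        { "A", "A#", "B", "C", "C#", "D", "D#", "E", "F", "F#", "G", "G#" };

    // openNotes: the open-string pitch of each string, e.g. E A D G B E.
    public static List<NoteMapping> Find(string[] openNotes,
        ISet<string> chordNotes, int maxFret)
    {
        var mappings = new List<NoteMapping>();
        for (int s = 0; s < openNotes.Length; s++)
        {
            int open = Array.IndexOf(Chromatic, openNotes[s]);
            for (int fret = 0; fret <= maxFret; fret++)
            {
                // Each fret raises the pitch by one semitone.
                string note = Chromatic[(open + fret) % 12];
                if (chordNotes.Contains(note))
                    mappings.Add(new NoteMapping
                        { StringNumber = s + 1, Fret = fret, NoteName = note });
            }
        }
        return mappings;
    }
}
```

For the low E string and a chord containing G, this finds the match at fret 3 (E, F, F#, G), which is where the finger spot would be drawn.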
I added a few additional methods after this process that eliminate some of these mappings based on logic to make them more reasonable to play (for example, I limit the chords to requiring 4 fingers). This logic is imperfect, and can certainly be expanded.
Dynamically Generating Controls
Once the note mappings have been determined, the buttons must be placed on the fret board. The DisplayNoteMapping method handles this job. It loops through the mappings and creates a new Button control (shown in Example 7). The new Button is created using the style resource (FingerButtonStyle), whose control template replaces the standard button with a blue circle. The Button does not itself have a Row property, but because it is contained within a Grid it can use the Grid.Row attached property. The code must set Grid.Row on the button to make it show up on the proper string. The attached property is set by calling the SetValue method and passing in the property and the value.
Example 7 – Generating and Displaying the Chord Buttons
Now that the string has been set (via the row), the fret must be set (via the column). If the string is not played open, the code sets the attached Grid.Column property using SetValue. If the string is played open, the button is shifted to the left a bit to make it appear centered on the nut of the guitar. Once the button is created, it is added to the Grid's Children collection; this is necessary to make it display within the Grid.
When the new button is clicked, it should play the audio file for the individual note. This requires that an event handler be assigned to the Click event for each new button. The handler calls the PlayNote method, which accepts the name of the audio file to play. The DisplayNoteMapping method uses a lambda expression to set the handler.
Summary
This application demonstrates several aspects of Silverlight development: visual effects, transforms, style resources, file resources, media elements, overriding control templates, data binding, and dynamically generating and displaying controls in code. When combined, these features can produce a compelling application built with Silverlight through Expression Blend and Visual Studio.
https://www.red-gate.com/simple-talk/dotnet/.net-framework/using-silverlight-to-build-a-guitar-chord-calculator/
Whiteknight's Blog — Andrew Whitworth (Whiteknight), wknight8111@gmail.com

Big Ball of Mud Refactor to Testability (2015-12-27)

We had a small utility application which took in messages off the service bus and imported the data into ElasticSearch for querying. The task I was given, relatively fuzzy at that, was to improve the application. In doing so, it was expected that I would come up with some patterns and eventually some guidance for how to employ and properly utilize ElasticSearch in the future.

Consider this something of a case study of my thesis from my previous post on unit testing. If we set up things properly with unit testing in mind, we can focus our attention on classes which need to be tested and can avoid testing things which do not.

The application was a classic ball of mud design. JSON was read off the service bus, and Newtonsoft.Json was used to deserialize the JSON into a dynamic object. Several large methods searched for various fields on this object, massaged or moved them (for search/faceting purposes), and then the whole thing was dumped into ElasticSearch. All the important parts were implemented in basically three large, unruly classes.
I can't show any of the code here, but suffice it to say that this was the land that SOLID forgot.

Here's a quick overview of the steps I used to start to transform this application into a reasonable one which we can apply tests to and start to make guarantees about:

1. Use refactoring tools to clean the code without changing any behavior.
2. Encapsulate calls to external resources into dedicated adaptor classes.
3. Create a proper, strongly-typed Domain Model.
4. Separate like concepts into dedicated utility classes.
5. Create service layers to map incoming commands to methods on our Domain and Utilities.
6. Add unit tests and integration tests.

Why Not Test First?

People familiar with TDD or who are very pro-testing might suggest that we write tests first so that we can know what the behavior is, so that when we make changes we have assurance that the behavior has not changed. The problem is that with a Big Ball of Mud system, there typically aren't the kinds of abstraction boundaries or clean interfaces that make testing work correctly. You end up either locking the old, bad interfaces in place with premature unit tests, or else you end up having to re-write your tests when you improve the interface. If the test is changing along with your code, there's no real benefit to testing the old version of the code in the first place. That is, if the test is changing, you don't have any assurance that the overall behavior of the system didn't change. You're better off just making your changes, doing some ad hoc manual testing to prove that things are the way you want them, and then adding your unit tests to prove the point.

Refactor

I use Resharper, but any refactoring tools will do. What we want to do in this first step is to clean the code, making certain that we don't change behavior.
We want to make it easier to read and understand, because we can't do anything else if we can't understand it. Here are some examples of things you can do to improve the quality of code quickly with a refactoring tool and some simple editing:

1. Invert conditionals to decrease nesting.
2. Extract cohesive chunks of code into new methods with descriptive names.
3. Rename existing methods, properties, fields and variables to more clearly describe what is happening.
4. Remove dead code.
5. Remove misleading documentation (you can add good documentation, but getting rid of bad documentation is more important in my opinion).
6. Add some comments. Bits of code which implement a single idea, but maybe aren't big enough or cohesive enough to turn into a separate method, can get a comment saying what is happening. These comments then form an outline or checklist of what the method needs to do.

If you are interested in learning more about common and important refactoring techniques which can help to make code more readable, understandable and maintainable, get yourself a copy of Martin Fowler's seminal "Refactoring".

When you're done your basic cleanup, if you've done it correctly, the program should continue to run like normal with no changes in behavior.

Encapsulate External Resources

For this application, we had three external resources to interact with: a service bus to receive commands from, ElasticSearch (via Nest) and a database for looking up some values and updating our domain model before indexing into ElasticSearch.

The bus was serviced by a Consumer class, which read messages off the bus, parsed the command, and dispatched the request to the appropriate methods in our ball of mud. It looked something like this:

```csharp
switch (cmd.CommandType)
{
    case CommandType.DoTheThing:
        ...
    default:
        // log that we have an unrecognized command
}
```

Switch statements in code are often a sign that we can use polymorphism instead (again, read "Refactoring"). We can switch this to use a Command pattern (which we will call a "Handler" here to not conflict with the Command Message pattern coming in off the bus), and move the switch statement into a HandlerFactory. Now we have:

```csharp
Handler handler = new HandlerFactory().CreateForCommand(cmd);
handler.Handle(cmd);
```

Inside our HandlerFactory we can replace our switch block with a lookup table to respect the Open/Closed Principle:

```csharp
public class HandlerFactory
{
    public static Dictionary<CommandType, Type> _types = InitializeTypes();

    public static Dictionary<CommandType, Type> InitializeTypes()
    {
        return new Dictionary<CommandType, Type>
        {
            // default type registrations here
        };
    }

    public static void AddNewHandlerType(CommandType cmdType, Type handlerType)
    {
        // Validate that it's the correct kind of thing
        _types.Add(cmdType, handlerType);
    }

    public Handler CreateForCommand(MyCommand cmd)
    {
        if (!_types.ContainsKey(cmd.CommandType))
            return new NullHandler(); // Null Object Pattern
        return (Handler)Activator.CreateInstance(_types[cmd.CommandType]);
    }
}
```

Now the front of our application is neatly separated from the rest of it. With a small amount of additional refactoring to use dependency injection in our BusMessageConsumer, we could test that messages with different command types do indeed go into the HandlerFactory, which returns a Handler object of the correct type. But if you take a closer look, you'll see that our Consumer class doesn't really do any work: it calls a method on HandlerFactory with the argument object unmolested, and it then calls a method on the return value with the argument, again unmolested. There's no real point to doing deep testing here, so we can skip it for now.

The Handler objects represent a mapper from our request domain (MyCommand) into our problem domain, and it's here where we want to start unit-testing in earnest. I added tests that HandlerFactory returns the correct Handler subtypes given different inputs, and I started planning tests for the various Handler subclasses as well. We aren't ready to actually test those Handlers just yet, though; we still need to decouple the other side of the business logic from the data stores (ElasticSearch and the DB).

The database can be wrapped up in your choice of abstraction. I chose to use a simple gateway for queries, but you could use a Repository or Active Record or any other data access strategy that you think would fit. The book "Patterns of Enterprise Application Architecture", again by Martin Fowler (I love that guy!), discusses these and more options in some detail.

ElasticSearch is written in Java, so if you are writing Java code you can spool up an embedded instance of it for testing and not worry so much about encapsulating it. In C#, not so much. It is far too onerous to spool up a fresh ElasticSearch instance for every test or even every test session, especially when your CI routine is trying to do this on some remote build server in a brain-dead automated way. That is, you can write an Integration Test suite which does this, but that is likely going to require help and support from your Infrastructure team or DevOps or however your organization delineates that kind of work, and it might not be worth your effort. I think it's far better in this case to encapsulate ElasticSearch behind an air-tight interface and trust that Nest and ElasticSearch are doing what they are advertised to do (a certain amount of testing, be it manual or automated, should definitely be done at first and at any time the versions of these tools are upgraded, to verify that they do, indeed, do what you expect).

Notice that, taking this approach, we are limiting our use of certain features. Nest has features where you can generate your own JSON, or generate your own identifier path strings, or even generate strings of commands in the Elastic DSL. If we are trying to rely on our abstractions and use things like compile-time type checks and externally-tested code to make our case, we can't break that encapsulation with hand-rolled strings of commands that we aren't providing ourselves with a way to test.

One thing I've found over the years is that a Builder pattern works very well to abstract the query-building interfaces of various data stores and ORMs.
Using a Builder, you can still have an air-tight abstraction, but the abstraction can grow over time as you need, and each new addition to the Builder API gets a readable, descriptive name for what it is trying to accomplish. Consider an interface like this, filling in the details of your particular ORM or storage technology:

```csharp
SearchQuery query = new SearchQueryBuilder()
    .MatchingSearchTerm(term)
    .WithVisibilityFor(CurrentUser)
    .WithType(type, subtype)
    ....
    .Build();
SearchResult result = query.Execute();
```

Using a Builder pattern like this to make an extensible abstraction over your data storage technology (this only works for queries; you'll need a different strategy for INSERT/UPDATE/DELETE, like a Command pattern), and setting up your system to make proper use of DI, will allow you to mock out your store and finally start isolating your business logic for testing purposes.

Create a Domain Model

The dynamic type was added in .NET 4.0 with Visual Studio 2010.
It gives us some of the flexibility enjoyed by dynamically-typed programming languages and allows enough flexibility to put together prototype code very quickly.

The code we had in this application looked something like this:

```csharp
dynamic model = Newtonsoft.Json.JsonConvert.DeserializeObject(jsonString);
```

While this is decently fast for making prototype code, over time the strain on the system became huge. The reasons are two-fold. First, there was no documentation anywhere about what was and was not supposed to be in the dynamic object. This led to huge amounts of data being indexed into ElasticSearch which didn't need to be there, because we didn't have an inventory of the fields we wanted or any kind of mechanism to exclude fields we didn't want. The JSON was turned into a dynamic, and whatever was in that object is what ended up in the index. Notice that a DTO-style object has some built-in "documentation" in the form of field names and types. Just this small amount of extra structure would be head-and-shoulders above where we were, even if no other comments or documentation were added ("self-documenting" code rarely has all the information you want, but sometimes can have the basics which you need). Second, because there was no obvious and enforced structure to the data, there was no obvious structure to the code which worked on that data. The system had an organically-grown set of methods which had grown absurdly large, with loops inside conditionals inside loops.
The data being dynamic, and there being no obvious rules or expectations, meant that every property access needed a null check followed by a check that the data was of the correct type. Much of this validation work could have been handed off to the parser, if the original developers had taken the time to give the parser a little bit of the type metadata it needed to do the job.

We actually need two objects here: one representing the JSON request coming in from the bus, and the other representing the storage type that we index into ElasticSearch. Once we have these two model types, we can create a mapper object to convert from one to the other. This is the heart of our system.

With a proper Domain Model in place, even if it was a little bit anemic in this case, we can start doing some of our real testing: test that the JSON messages coming in off the bus deserialize properly into our Request Models. Test validation routines on our Request Models. Test that the Mapper properly maps various Request Model objects into Index Models. Test that our data layer properly hands our Index Models off to ElasticSearch (or, test that we get all the way down to our mock that is standing in for ElasticSearch).

All Together

When you're starting out with a piece of software which is a little bit more sane than what I had to deal with, the steps you use to make improvements might look something like this:

1. Test, baseline
2. Cleanup and refactor
3. Test
4. Make changes
5. Test
6. Go back to #2 and repeat

When you're working with a Big Ball of Mud which doesn't readily accept testing, you need to abbreviate a little bit.
This is what I did:

1. Cleanups and small-scale refactors to simplify
2. Abstract external resources
3. Test
4. Refactor
5. Test
6. Go back to #4 and continue until you're ready to start changing functionality

This isn't an ideal workflow, but then again the world of software is rarely an ideal place. Sometimes you need to do a little bit of work without the safety of a test harness to save you, because the place you're at just doesn't have room for a test harness. In these cases your first action should be to get into a testable situation and then continue with a more normal and mature workflow.

ConsoleImage (2015-08-21)

I saw a blog post the other day about printing images to the Linux Terminal in 9 lines of Ruby. The results were quite interesting, so I thought to myself that it might be a fun little exercise to duplicate this utility in C#. After all, I thought, if you can do it in 9 lines of Ruby you must be able to do it in a dozen or so lines of C#. I was wrong.

There are two things that make the Ruby version so short: the availability of libraries to do the hard work and the capabilities of the Linux Terminal.

The modern Linux terminal is a pretty complicated beast. Most GUIs support 256 colors and have a lot of features, like fonts, that make a utility like this quite nice and easy to use. The Windows Command Prompt, on the other hand, isn't in the same ballpark as the Linux Terminal. It isn't even in the same league. We run into some problems immediately because the CMD prompt only supports 16 colors, where each color is 4 bits wide: one bit each for red, green and blue, and a bit for light and dark. One consequence of this is that there are 4 colors on the greyscale: Black, DarkGray (Black + bright), Gray and White (Gray + bright). For every other color there are just two, a dark and a bright.
The six hues that you can use are Magenta, Red, Yellow, Green, Cyan, and Blue, with two shades of each. Trying to convert a full RGB color image into this palette, which still clocks in at much worse than 9 lines of code, produces miserable results. Here's a quick and dirty example:

```csharp
var bmp = LoadAndResizeImage(args[0]);
for (int i = 0; i < bmp.Size.Height; i++)
{
    Console.SetCursorPosition(0, i);
    for (int j = 0; j < bmp.Size.Width; j++)
    {
        ConsoleColor cc = ConvertColor(bmp.GetPixel(j, i));
        Console.BackgroundColor = cc;
        Console.Write(' ');
    }
}

public static ConsoleColor ConvertColor(Color c)
{
    int cc = (c.R > 128 | c.G > 128 | c.B > 128) ? 8 : 0; // Bright bit set, if any colors are bright
    cc |= (c.R > 64) ? 4 : 0; // R
    cc |= (c.G > 64) ? 2 : 0; // G
    cc |= (c.B > 64) ? 1 : 0; // B
    return (System.ConsoleColor)cc;
}
```

For each "pixel" of the image, we do a straight-forward conversion to the closest palette color, set that color as the background, and print a space. This is the simple version, but it clearly doesn't do what we need. Frankly, the images produced look terrible. For your viewing pleasure, here's a rendering of some peaches on a tree:

[Image: Ugly Peaches (/images/ConsoleImage/peaches1.png)]

A eureka moment comes when we realize that we can have a second foreground color, apply it to some kind of printable character, and print that character on top of our colored background. This is sort of like how traditional "ASCII Art" works, by printing ASCII characters to represent shades. In normal ASCII Art, those shades are usually just black-on-white for a grayscale effect. But this isn't all we are limited to. By picking the right characters, we can produce a very low-resolution blending effect. The next question is, how do we do this blending?
Extended ASCII (code page 437, the classic IBM PC/OEM code page) provides 5 characters which are worth looking at: 0x20 (space, 0% coverage), 0xB0 (25% coverage block), 0xB1 (50% coverage block), 0xB2 (75% coverage block) and 0xDB (100% coverage block). An astute observer will realize that any color used as a foreground with 100% coverage produces exactly the same effect as that same color used as a background with 0% coverage. Keeping that detail in mind, we have only 3 “shades” that we can use to blend between individual pairs of colors. Here are some generated “pixels” using blending, showing progression of pure colors from dark to light, a “color wheel” showing dark-to-light and blending between neighboring colors, and an example of how we convert these “pixels” to RGB values:</p> <p><img src="/images/ConsoleImage/scales1.png" alt="Color Scales" /></p> <p>It’s worth noting that we could have more colors, but many of the additional combinations of non-adjacent colors (Red-Green, Blue-Yellow, etc) don’t add value; they calculate down to RGB values which are functionally identical to other, more attractive, blends. That is, a Red-Green combo looks brownish, but isn’t a better brown than the dark yellow blends. Many of the diagonal combinations of Bright blended with Dark (bright cyan blended with dark blue, etc) don’t produce colors that are usable or unique either.</p> <p>Since there are relatively few of these “pixels” worth generating, we can create and cache the whole list up front.
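The coverage arithmetic behind those generated “pixels” can be sketched in a few lines. This is an illustration only, not code from the library described here; the <code>Blend</code> helper name and the flat 25/50/75% coverage values are my assumptions:

```csharp
using System;

// Hypothetical helper (not from the article's library): a shade glyph covers
// `coverage` of the cell with the foreground color and leaves the rest showing
// the background color, so the effective RGB is a linear mix of the two.
(int R, int G, int B) Blend((int R, int G, int B) fg, (int R, int G, int B) bg, double coverage)
{
    int Mix(int f, int b) => (int)Math.Round(f * coverage + b * (1 - coverage));
    return (Mix(fg.R, bg.R), Mix(fg.G, bg.G), Mix(fg.B, bg.B));
}

// Red foreground at 50% coverage (the 0xB1 glyph) over a blue background.
var pixel = Blend((255, 0, 0), (0, 0, 255), 0.50);
Console.WriteLine(pixel);   // (128, 0, 128) -- a purple blend
```

With 16 foreground colors, 16 background colors and only 3 usable shades, this mixing yields the small, cacheable palette of effective RGB values described above.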
Then when we want to match a color from an image, we can calculate distances and find the color pixel with the shortest distance to the target color.</p> <div class="highlight"><pre><code class="language-csharp" data-lang="csharp"><span class="k">public</span> <span class="k">static</span> <span class="kt">double</span> <span class="nf">DistanceTo</span><span class="p">(</span><span class="k">this</span> <span class="n">Color</span> <span class="n">c</span><span class="p">,</span> <span class="n">Color</span> <span class="n">p</span><span class="p">)</span> <span class="p">{</span> <span class="k">return</span> <span class="n">Math</span><span class="p">.</span><span class="n">Sqrt</span><span class="p">(</span><span class="n">Sqr</span><span class="p">(</span><span class="n">c</span><span class="p">.</span><span class="n">R</span> <span class="p">-</span> <span class="n">p</span><span class="p">.</span><span class="n">R</span><span class="p">)</span> <span class="p">+</span> <span class="n">Sqr</span><span class="p">(</span><span class="n">c</span><span class="p">.</span><span class="n">G</span> <span class="p">-</span> <span class="n">p</span><span class="p">.</span><span class="n">G</span><span class="p">)</span> <span class="p">+</span> <span class="n">Sqr</span><span class="p">(</span><span class="n">c</span><span class="p">.</span><span class="n">B</span> <span class="p">-</span> <span class="n">p</span><span class="p">.</span><span class="n">B</span><span class="p">));</span> <span class="p">}</span> <span class="k">private</span> <span class="k">static</span> <span class="kt">int</span> <span class="nf">Sqr</span><span class="p">(</span><span class="kt">int</span> <span class="n">x</span><span class="p">)</span> <span class="p">{</span> <span class="k">return</span> <span class="n">x</span> <span class="p">*</span> <span class="n">x</span><span class="p">;</span> <span class="p">}</span></code></pre></div> <p>If we add in a rounding step to chop off the low-order bits
(at this resolution, it rarely matters) and add in some caching, rendering performance is actually not terrible. Here’s an example of those same peaches, rendered with color blends:</p> <p><img src="/images/ConsoleImage/peaches2.png" alt="Pretty Peaches" /></p> <p>Maybe not quite as good as the Ruby-On-Linux version, but pretty impressive considering the limitations of DOS. Using library calls, it’s just about the same length as the Ruby version:</p> <div class="highlight"><pre><code class="language-csharp" data-lang="csharp"><span class="k">using</span> <span class="nn">System</span><span class="p">;</span> <span class="k">using</span> <span class="nn">System.Drawing</span><span class="p">;</span> <span class="k">namespace</span> <span class="nn">ConsoleImage.Viewer</span> <span class="p">{</span> <span class="k">public</span> <span class="k">class</span> <span class="nc">Program</span> <span class="p">{</span> <span class="k">public</span> <span class="k">static</span> <span class="k">void</span> <span class="nf">Main</span><span class="p">(</span><span class="kt">string</span><span class="p">[]</span> <span class="n">args</span><span class="p">)</span> <span class="p">{</span> <span class="kt">var</span> <span class="n">bitmap</span> <span class="p">=</span> <span class="p">(</span><span class="n">Bitmap</span><span class="p">)</span><span class="n">System</span><span class="p">.</span><span class="n">Drawing</span><span class="p">.</span><span class="n">Image</span><span class="p">.</span><span class="n">FromFile</span><span class="p">(</span><span class="n">args</span><span class="p">[</span><span class="m">0</span><span class="p">]);</span> <span class="n">ConsoleImage</span><span class="p">.</span><span class="n">Draw</span><span class="p">(</span><span class="n">bitmap</span><span class="p">);</span> <span class="p">}</span> <span class="p">}</span> <span class="p">}</span></code></pre></div> <p>I have <a href="">started a little library</a> to play with this idea of rendering images to the windows console. It does a fair bit already in terms of resizing images, cropping images, animating GIFs and rendering images at various points in the console window. I don’t want it to do too much more than that, though.
The world clearly has no need for DOS-based image editing or anything crazy.</p> <h1>Thin Controllers and Proper ViewModels</h1> <p><em>2015-06-23</em></p> <p>Everything on the web is MVC now (or MVP, or MVVM, which are separated more by nuance and discipline than by actual structure) and for good reason: It’s a natural and straight-forward way to separate the concerns of display, data and logic. I find that getting trapped in this MVC structure, and trying to use it for more than it is intended for, can lead to big trouble.</p> <h2 id="the-mvc-trap">The MVC Trap</h2> <p>When you create a new project in something like ASP.NET MVC, a few folders are already created for you: <code>/Models</code>, <code>/Views</code> and <code>/Controllers</code>. With the Rails-inspired naming conventions enabled by default, it’s very easy to create a <code>FooController</code> whose method names correspond to file names in the <code>/Views/Foo/</code> folder. Various tools run with this. You can often click or right-click on a controller method and automatically be taken to the corresponding view file, with the file being created for you on the fly if needed. This is easy, so easy that many developers get lured into a trap.</p> <p>When you have two roads in front of you, and you’re not quite sure where they lead to off in the distance, it’s very easy to pick whichever looks easier at the start. When the MVC system and the associated tools are creating methods for you on your controller, and your controller is attached by convention to automatically-created views, the easy road tells us that this is our structure and this is where our logic goes. So we start writing code, putting display logic into our View and putting some data-munging logic into our Model, and dumping most of the rest of the logic into our Controller.
The end result, those goal posts off in the distance we didn’t quite see at first, is that our controllers are fat, our views are polluted, and our models are bloated with unrelated business logic. Then, when somebody says we need to make an alternate view, because we need to display much of the same logic in a different format for a specific subset of clients, everything goes to hell. We curse our tools, stomp our feet, and search the googles for the next shiny thing that promises to make all our cares go away.</p> <p>When you look at Microsoft tech demos, especially those for new ASP.NET MVC releases, you see people wanting to reuse data models to prevent code duplication. You create a POCO class and attach it to the EntityFramework DbContext, then you use the Visual Studio templating tools to create a controller for your entity class with automatically-generated views for the CRUD operations. Everything works end-to-end, with we the programmers needing to write very little code compared to how much is generated for us, and we call ourselves geniuses for leveraging code generation and avoiding the pitfalls of the layered architectures of our forefathers. Why did we have those things in the first place? Who needs layers? But now our models are serving many masters. We start fleshing our entity classes out into a full Domain Model with methods for business logic, but we need some validation methods down in the data code and we need some formatting methods to help with displaying data in the View. Now we have a bunch of methods with warnings on them like “Don’t call this method without an active DbContext!” or “Don’t call this method until the user has been validated!”. 
Then, when the business guys want us to associate multiple colors with every product instead of just one, and the storage format needs to change to use another table with a foreign key, now we have to rewrite our entire view because the same model that represents the database also represents the Domain Model and the View Model. Far from making our lives simpler, we are now living in hell where every little change in one domain forces massive rewrites in the others.</p> <p>Then the networking guys are telling us that the Shipping Cost calculations are too expensive, and we want to break that logic out into a separate service so we can offload the calculations from the webserver and distribute them onto some helper servers outside the DMZ. How do we possibly accomplish that? All our logic is in our controllers (and our Views, and our entity models) and it’s impossible to tease it all out without just rewriting the whole damned thing.</p> <p>When Stefan Tilkov tells us <a href="">Don’t start with a monolith</a>, this is what he is talking about. If you go down this path, your mess will never be detangled and you’ll end up throwing everything out when the requirements change too much.</p> <h2 id="mvc-three-separate-single-responsibilities">MVC: Three Separate, Single Responsibilities</h2> <p>The solution to a lot of these problems is kind of simple, if you keep it in mind from the beginning: SRP. <strong>The Single Responsibility Principle</strong> tells us that <a href="/2015/03/14/srp.html">classes should do one thing and serve one master</a>. So, what are the single responsibilities of each of the MVC components?</p> <ul> <li><strong>Model</strong>: To hold data needed for the view, in a format that the View requires. The Model is a servant of the View, and only changes when the View needs it to change.</li> <li><strong>View</strong>: To allow the user to view and interact with the data. 
This only changes when the needs of the user changes.</li> <li><strong>Controller</strong>: To mediate between the user domain and the business domain. This only changes when the API requirements of the Application Service Layer changes.</li> </ul> <p>Boom. Three parts, each with a single purpose and a single master. But this almost raises more questions than it answers: Where does all that logic go, which used to live in our fat controller methods, and in our polluted View, and in our bloated Model class?</p> <h3 id="viewmodel-responsibilities">ViewModel Responsibilities</h3> <p>First and foremost, if our model is a proper “View Model” and only serves the View, that means we’re going to need to have another class which represents the database storage format and yes, we’re probably also going to want another more-or-less duplicate to serve as our domain model. We have three classes here because the data serves three masters, and lives in three different realms and needs to satisfy three different sets of requirements. You’re going to need to do some mapping, but a conventions-based solution like AutoMapper will get you 90% of the way there with minimal effort. Maybe AutoMapper isn’t what you use long-term, but it can be a good start nonetheless. If you isolate the automapper dependency and abstract it behind some kind of <code>IMapper<TFrom, TTo></code> interface, you can always go back and fix it later with minimal effort.</p> <p>Your three models are:</p> <ul> <li><strong>Your Data Model</strong> which serves the needs of our data storage mechanism. Properties on these classes probably have a one-to-one correspondence with columns in your database, and methods on these classes have to do with low-level validation and translating data storage formats into something usable by the program.</li> <li><strong>Your Domain Model</strong> which holds your business logic. 
The Domain Model is set up to meet the needs of your business, divorced from any concerns of data storage or data display.</li> <li><strong>Your ViewModel</strong> which serves the needs of your UI. The ViewModel holds information needed by the View, and provides data manipulation and front-line validation methods required by the View.</li> </ul> <h3 id="view-responsibilities">View Responsibilities</h3> <p>The View now only deals with display concerns, and we don’t need to worry that it’s getting its fingers down into the business logic or the data storage logic. All the logic bloat which used to live in your View now probably lives in your ViewModel, because the ViewModel is the data-holding servant of the View.</p> <p>The View is conceptually very simple because it is only concerned with presentation. Holding and organizing data to be displayed is the role of the ViewModel. In this sense the View is very dumb: Take data out of the ViewModel. Show it as-is to the user. Repeat with the next piece of data. When we talk about Views being dumb, we are obviously only talking about it from the point-of-view of the server. From the client perspective, JavaScript in the View might have just as much complicated logic, if not more, than the entire rest of your application combined.</p> <h3 id="controller-responsibilities">Controller Responsibilities</h3> <p>The Controller is now a simple mediator. It takes requests from the user, translates those into domain requests, receives back a domain response, and translates that into a response for the user. Any other logic which used to live in your Controller now lives in your Domain Model or in an Application Service Layer of some sort. Not only is it easier to do things like unit test the controller in this setup, but if the implementation is thin enough and if the Service Layer is tested already, you might not need to bother testing your Controller at all! 
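As an aside, the mapping seam suggested earlier (the <code>IMapper<TFrom, TTo></code> idea) is small enough to sketch. The names here are invented for illustration, and a delegate stands in where a real codebase would declare the interface and perhaps back it with AutoMapper:

```csharp
using System;
using System.Globalization;

// Hypothetical Domain Model -> ViewModel mapping hidden behind a delegate seam;
// a real project would declare an IMapper<TFrom, TTo> interface here, so the
// mapping implementation can be swapped later without touching the controller.
Func<(string First, string Last, decimal Price), (string DisplayName, string DisplayPrice)> toViewModel =
    domain => ($"{domain.Last}, {domain.First}",
               domain.Price.ToString("0.00", CultureInfo.InvariantCulture));

var vm = toViewModel(("Ada", "Lovelace", 12.5m));
Console.WriteLine(vm.DisplayName);    // Lovelace, Ada
Console.WriteLine(vm.DisplayPrice);   // 12.50
```

The display formatting lives in the mapping, not in the View and not in the Domain Model, which is the whole point of having a separate ViewModel.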
Here’s a basic MVC controller method which illustrates what a thin controller method may look like:</p> <div class="highlight"><pre><code class="language-csharp" data-lang="csharp"><span class="k">public</span> <span class="n">ActionResult</span> <span class="nf">GetTheData</span><span class="p">(</span><span class="n">UserRequestModel</span> <span class="n">userRequest</span><span class="p">)</span> <span class="p">{</span> <span class="n">DomainRequest</span> <span class="n">request</span> <span class="p">=</span> <span class="n">_requestTranslator</span><span class="p">.</span><span class="n">TranslateToDomainRequest</span><span class="p">(</span><span class="n">userRequest</span><span class="p">);</span> <span class="n">DomainResponse</span> <span class="n">response</span> <span class="p">=</span> <span class="n">_dataService</span><span class="p">.</span><span class="n">Handle</span><span class="p">(</span><span class="n">request</span><span class="p">);</span> <span class="n">UserResponseModel</span> <span class="n">userResponse</span> <span class="p">=</span> <span class="n">_responseTranslator</span><span class="p">.</span><span class="n">TranslateFromDomainResponse</span><span class="p">(</span><span class="n">response</span><span class="p">);</span> <span class="k">return</span> <span class="nf">View</span><span class="p">(</span><span class="n">userResponse</span><span class="p">);</span> <span class="p">}</span></code></pre></div> <h4 id="how-low-can-you-go">How Low Can You Go?</h4> <p>It’s worth pointing out that we could probably make this controller method even smaller by recognizing that we have a repeatable pattern, and putting in some effort to abstract our translator behind a generic interface:</p> <div class="highlight"><pre><code class="language-csharp" data-lang="csharp"><span class="k">public</span> <span class="n">ActionResult</span> <span class="nf">GetTheData</span><span class="p">(</span><span class="n">UserRequestModel</span> <span class="n">userRequest</span><span class="p">)</span> <span class="p">{</span> <span class="n">UserResponseModel</span> <span class="n">userResponse</span> <span class="p">=</span> <span class="n">ControllerHelper</span><span class="p">.</span><span class="n">Dispatch</span><span class="p"><</span><span class="n">UserRequestModel</span><span class="p">,</span> <span class="n">DomainRequest</span><span class="p">,</span> <span class="n">DomainResponse</span><span class="p">,</span> <span class="n">UserResponseModel</span><span class="p">,</span> <span class="n">DataService</span><span class="p">>(</span><span class="n">userRequest</span><span class="p">);</span> <span class="k">return</span> <span class="nf">View</span><span class="p">(</span><span class="n">userResponse</span><span class="p">);</span> <span class="p">}</span></code></pre></div> <p>But then again, many people (and many code analysis tools) would likely be pretty unhappy about all those type parameters and by the extremely abstract nature of this <code>Dispatch</code> method, not to mention the implied reliance on a Service Locator to find our <code>IRequestTranslator<UserRequestModel, DomainRequest></code>, <code>IRequestTranslator<DomainResponse, UserResponseModel></code>, our validator, our service, and anything else we wanted in there. Controller methods should be thin, but there is some price to pay if you make it <em>too thin</em>.</p> <p>In either case, this is all the logic that should appear in your controller method. Validate the request. Translate the request. Dispatch the request. Receive the response, translate the response, return the response.</p> <h4 id="webapi-methods">WebAPI Methods</h4> <p>The implementation of a webservice using WebAPI is identical, just returning the translated response DTO instead of returning a View:</p> <div class="highlight"><pre><code class="language-csharp" data-lang="csharp"><span class="k">public</span> <span class="n">UserResponseDto</span> <span class="nf">GetTheData</span><span class="p">(</span><span class="n">UserRequestDto</span> <span class="n">userRequest</span><span class="p">)</span> <span class="p">{</span> <span class="n">DomainRequest</span> <span class="n">request</span> <span class="p">=</span> <span class="n">_requestTranslator</span><span class="p">.</span><span class="n">TranslateToDomainRequest</span><span class="p">(</span><span class="n">userRequest</span><span class="p">);</span> <span class="n">DomainResponse</span> <span class="n">response</span> <span class="p">=</span> <span class="n">_dataService</span><span class="p">.</span><span class="n">Handle</span><span class="p">(</span><span class="n">request</span><span class="p">);</span> <span class="n">UserResponseDto</span> <span class="n">userResponse</span> <span class="p">=</span> <span class="n">_responseTranslator</span><span class="p">.</span><span class="n">TranslateFromDomainResponse</span><span class="p">(</span><span class="n">response</span><span class="p">);</span> <span class="k">return</span> <span class="n">userResponse</span><span class="p">;</span> <span class="p">}</span></code></pre></div> <h2 id="whats-the-payoff">What’s The Payoff?</h2> <p>After doing all this work, making multiple models for the different layers of your application, moving logic out of the View into the ViewModel, and making our Controllers thin, what have we achieved? What has all our effort bought us?</p> <ol> <li>MVC Controllers are tied to the View and HTTP and the web. Pulling logic out of there makes your logic not depend on any of those things. This means all your application logic exists separately from the web, which means it can be used separately without any issue.</li> <li>Logic is easier to test, because now we can run fast, cheap <em>unit tests</em> on our Application Service and Domain Model instead of having to set up expensive <em>integration tests</em> on our controllers and views.</li> <li>The logic of the application service is now reusable in different formats. We can create new APIs, or new versioned APIs, or new front-ends with the same logic without any issues.</li> <li>We open our controllers up to Dependency Injection, which will allow us to substitute different implementations based on outside factors. This makes things like A/B testing, or custom per-client behaviors in a multi-tenancy system more doable.
(The dependency injector can peek at our HTTP headers and authentication tokens, and select service instances depending on what it finds there, etc).</li> </ol> <p>Then we can move the <code>DataService</code> into a service layer in a separate assembly, and reuse that logic elsewhere when the boss asks us for a separate view for some clients, or a report that feeds off the same data, or anything like that.</p> <h2 id="the-road-to-microservices-maybe">The Road To Microservices, Maybe</h2> <p>You’ll notice that a system like this has a vertical separation of concerns instead of the horizontal separation into layers of earlier systems. The Controller calls the Application Service, which interacts with the Domain Model and eventually creates requests into the database. Separating a system like this out into microservices is relatively easy because we already have problem-domain separation and we already have an API suitable for remote calls. That’s if Microservices are worth the hassle for your team, which they won’t necessarily be. It’s not one-size-fits-all, but it is an available option for people who find they do need it.</p> <p>When Martin Fowler says that we should <a href="">Start with a monolith first</a> and break out into microservices later as needed, he’s expecting us to already have a nice clean architecture like this. If you don’t have that, if you don’t have the discipline to properly structure your monolith with an eye towards clean boundaries and eventual decomposition, this isn’t the road for you. Either you need to decide that Microservices are not your final destination (a very difficult, nerve-wracking decision to make so early!) or you need to decide to go with Microservices from the beginning. Either you’re paying the Microservice premium too early, or you’ll be paying the massive cleanup and refactoring costs too late.
It’s a tough decision if you’re in this boat, but one that needs to be made.</p> <h1>When Not to Unit Test</h1> <p><em>2015-06-19</em></p> <p>I used to be the kind of guy who cared about <em>test coverage</em>. You know, that percentage value you get when you run your tests and count up how many lines of your code were actually executed versus how many lines of code you have total. Too little coverage causes your testing tool to spit out red warning messages, flashing indicators and sad face emoticons. The goal for any system is to reach 100%, no matter the cost.</p> <p>What I’ve learned in the intervening years is that not all code is created equal from the perspective of unit testing. Different types of code get tested in different ways, and some code doesn’t need to get tested at all. Learning when to test, and how to do it when necessary, is a big step down the road to software enlightenment.</p> <p>Let’s talk about a few different types of code, whether they are worth testing and, if so, how to do it.</p> <h2 id="external-dependencies">External Dependencies</h2> <p>As a general rule, you do not need to test your external dependencies. That is, you aren’t testing code you didn’t write. You need to trust that the original authors did their due diligence. If you can’t trust it, why would you be using it in your project?</p> <p>This is not to say that you should just use any old third party library sight unseen. This is not an excuse for you to shirk your due diligence. Put together a proof-of-concept, read user testimonials, and make sure the product you’re getting is worth having. But, once you decide that you trust the makers of the library enough to use their code in your project, it’s not your responsibility to unit-test their code. If their code has problems, you file bug reports.
Let them fix and test it.</p> <p>External dependencies are things like your platform standard libraries, your database, your network, your operating system, referenced binaries, device drivers, etc. Unless you’re running custom builds of these things, don’t bother with “sanity” tests to prove that they work as advertised. If you are running a custom build, you should have unit tests on that project, not in all the upstream ones. It’s a waste of your time to be testing code which you didn’t write, because the vendor has already done that.</p> <p>Consider the case where I have some data access code that does a little bit of data transformation, validation, and then stores the results into a database. Since I don’t need to test the database, I can inject a mock object or a stub into my test to examine my stuff in isolation.</p> <p>Consider also the case of a web app, running on a webserver, under some sort of MVC framework. I don’t need to test that the webserver accepts connections or that my MVC framework correctly routes requests to my controller. All I need to test is the actual logic which I’ve written to execute once the MVC framework has done all its routing magic.</p> <p>Consider this example controller method from ASP.NET MVC:</p> <div class="highlight"><pre><code class="language-csharp" data-lang="csharp"><span class="k">public</span> <span class="n">ActionResult</span> <span class="nf">DoWork</span><span class="p">()</span> <span class="p">{</span> <span class="c1">// Complex business logic here</span> <span class="k">return</span> <span class="nf">View</span><span class="p">();</span> <span class="p">}</span></code></pre></div> <p>To test our business logic in this configuration, we would need to instantiate a controller, execute our action method, and then verify the returned view. The majority of our test will be in testing components we didn’t write!
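The difference is easy to see in miniature. Here is a hypothetical sketch (invented names, with a delegate standing in for a repository interface) of testing extracted logic directly, with an in-memory stub so no webserver or database is involved:

```csharp
using System;
using System.Collections.Generic;

// The "database" dependency reduced to a delegate; a test injects a stub that
// records writes in memory instead of hitting real storage.
var saved = new List<string>();
Action<string> stubSave = record => saved.Add(record);

// The logic actually worth testing: validate and normalize before storing.
bool Store(string name, Action<string> save)
{
    if (string.IsNullOrWhiteSpace(name)) return false;
    save(name.Trim().ToUpperInvariant());
    return true;
}

Console.WriteLine(Store("  widget ", stubSave));   // True
Console.WriteLine(saved[0]);                       // WIDGET
Console.WriteLine(Store("   ", stubSave));         // False -- nothing saved
```

Because the logic takes its dependency as a parameter, the test exercises only the code we wrote; the stub replaces everything we didn't.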
If our business logic lived in a separate location, like a Service Layer or Domain Model, we could test those things directly and leave the UI out of it.</p> <p>Notice that I’m only talking about unit tests here. Integration tests, where you would be testing your entire integrated application, are a different beast entirely and shouldn’t be skipped in cases like this.</p> <h2 id="internal-dependencies">Internal Dependencies</h2> <p>Internal dependencies are things that are written in-house but which are separate from the current project. If your internal dependencies are already rigorously tested elsewhere, you don’t need to test them again in your current project.</p> <p>In other words, you should test things in the appropriate places. You do need tests, but you don’t need or want them spread out all over the world. Put tests where they belong and trust that tests exist and are passing when you attempt to reuse an existing library or product (if there are no tests, or if tests exist and don’t pass, that’s a bigger cultural issue which needs immediate and decisive resolution).</p> <h2 id="simple-gateways-adaptors-bridges-and-facades">Simple Gateways, Adaptors, Bridges, and Facades</h2> <p>Simple classes which provide access to other components but do not contain any meaningful logic of their own do not need to be tested.</p> <p>Consider the case where I am using EntityFramework as my ORM, and I write a simple <code>IRepository<T></code> wrapper type around the EntityFramework <code>DbSet<T></code> type.
<a href="/2015/03/21/repository.html">As I’ve discussed elsewhere</a>, <code>DbSet<T></code> <em>is an implementation of the Repository pattern already</em>, so your wrapper is just an Adaptor to make EF fit into your solution a little nicer (and also to provide an abstraction boundary, for various tangential purposes, such as simplified mocking for tests).</p> <p>When you have a situation like this, you don’t need to test your <code>IRepository<T></code> type, because it’s methods have a one-to-one relationship with the underlying EntityFramework methods and because EntityFramework doesn’t need to be tested by your application. If EF is well tested, and if you’ve done basic due diligence like checking that your <code>DbSet<T></code> reference isn’t null or disposed, you can be certain that your <code>IRepository<T></code> works correctly. Don’t waste time testing it if you know it’s right because it’s too simple to get wrong.</p> <p>Look at this method:</p> <div class="highlight"><pre><code class="language-csharp" data-<span class="k">public</span> <span class="k">void</span> <span class="n">WidgetResult</span> <span class="nf">DoWidgetThing</span><span class="p">(</span><span class="n">WidgetArgs</span> <span class="n">args</span><span class="p">)</span> <span class="p">{</span> <span class="k">return</span> <span class="n">_widgetMaster</span><span class="p">.</span><span class="n">DoWidgetThingInternal</span><span class="p">(</span><span class="n">args</span><span class="p">);</span> <span class="p">}</span></code></pre></div> <p>Under the dubious proposition that this code is worth having in the first place, it’s obvious that we don’t require unit tests here. If <code>WidgetMaster.DoWidgetThingInternal</code> is already properly tested, then what can possibly go wrong here? Well, <code>_widgetMaster</code> could be null, in which case this method would throw a null ref exception. 
If <code>_widgetMaster</code> is being injected through a constructor parameter and you know it’s not null, that’s a case that isn’t worth considering.</p> <p>Code which does nothing except make straight-forward, trivial calls to other methods which are themselves well-tested is likely not worth testing. Here’s another, slightly more complicated example lifted from an ASP.NET WebAPI setting:</p> <div class="highlight"><pre><code class="language-csharp" data-lang="csharp"><span class="k">public</span> <span class="n">ThingResultDto</span> <span class="nf">GetThings</span><span class="p">(</span><span class="n">ThingRequestDto</span> <span class="n">requestDto</span><span class="p">)</span> <span class="p">{</span> <span class="n">DomainRequest</span> <span class="n">request</span> <span class="p">=</span> <span class="n">_requestTranslator</span><span class="p">.</span><span class="n">TranslateToDomainRequest</span><span class="p">(</span><span class="n">requestDto</span><span class="p">);</span> <span class="n">DomainResult</span> <span class="n">result</span> <span class="p">=</span> <span class="n">_domainService</span><span class="p">.</span><span class="n">Handle</span><span class="p">(</span><span class="n">request</span><span class="p">);</span> <span class="n">ThingResultDto</span> <span class="n">resultDto</span> <span class="p">=</span> <span class="n">_resultTranslator</span><span class="p">.</span><span class="n">TranslateFromDomainResult</span><span class="p">(</span><span class="n">result</span><span class="p">);</span> <span class="k">return</span> <span class="n">resultDto</span><span class="p">;</span> <span class="p">}</span></code></pre></div> <p>This case is more complicated than before, but still very simple. We translate a user request from the API into a domain request object, pass that off to some kind of service layer class, then translate the domain result back into a DTO suitable for sending back over the wire.
If <code>RequestTranslator</code>, <code>DomainService</code> and <code>ResultTranslator</code> classes are already well tested, is there any value in also testing this <code>GetThings</code> method in your WebAPI Controller? Integration tests are always valuable, of course, but in this case unit tests seem superfluous. You don’t <em>need</em> to write a test here, because there is no non-trivial, untested logic. Method calls and variable assignments are details of your language, compiler or runtime, and these things are third party tools. You don’t need to test third party tools, as I’ve already said.</p> <p>I know you’re probably thinking “But what if somebody else on my team changes this method? What if the order of statements is rearranged, or if new logic is added to it”? To this I have a few replies:</p> <ol> <li>You can’t rearrange the order of statements in this method, because the type system and the compiler prevent that. You can’t use the variables before they’re defined, and you can’t swap a <code>ThingRequestDto</code> in place for a <code>DomainResult</code>, so those kinds of changes are impossible.</li> <li>By cultural convention, your team should know to keep things like MVC controllers thin, and move non-trivial logic into the Application Service Layer, or Domain Model, or wherever else. In that case, you shouldn’t worry about people adding testable logic to places that are intentionally kept simple.</li> <li>What happens normally, when you change logic in a method? You go to your test suite and ensure that either the tests continue to pass or that new tests are added to verify the new behavior. If you’re adding non-trivial logic to a simple method which is not previously tested, normal operating procedure tells us that we need to add a test. I’m not saying some things should never be tested, I’m saying that some things in certain conditions are not worth testing.
If something which hadn’t been worth testing suddenly is worth testing, test it.</li> </ol> <p>I’ll talk about this in more detail below, but it’s worth pointing out that the amount of tests you need is proportional to the value of the code. If the code does nothing of value, and cannot (according to the rules of your language, compiler and runtime) do anything other than what is represented, you don’t need tests.</p> <p>(Notice that if you add custom logic in your adaptor, such as data mapping, validation, consistency or other rules, you <em>will need to test that</em>).</p> <h2 id="orchestrators">Orchestrators</h2> <p>Orchestrators are classes which take complex tasks, break them up into logical subtasks, and delegate those subtasks to child objects for processing. This is classic Map-Reduce and things like it: Map a large aggregate task out to several smaller tasks, and then reduce the various result sets down into a single result.</p> <p>Consider the case of a Service Layer which interacts with your Domain Model and persists results using a Repository:</p> <div class="highlight"><pre><code class="language-csharp" data-lang="csharp"><span class="k">public</span> <span class="k">class</span> <span class="nc">FooService</span> <span class="p">{</span> <span class="k">private</span> <span class="k">readonly</span> <span class="n">IRepository</span><span class="p"><</span><span class="n">Foo</span><span class="p">></span> <span class="n">Repository</span><span class="p">;</span> <span class="k">public</span> <span class="nf">FooService</span><span class="p">(</span><span class="n">IRepository</span><span class="p"><</span><span class="n">Foo</span><span class="p">></span> <span class="n">repo</span><span class="p">)</span> <span class="p">{</span> <span class="n">Repository</span> <span class="p">=</span> <span class="n">repo</span><span class="p">;</span> <span class="p">}</span> <span class="k">public</span> <span class="k">void</span> <span class="nf">DoTheThing</span><span
class="p">(</span><span class="n">ThingArgs</span> <span class="n">args</span><span class="p">)</span> <span class="p">{</span> <span class="n">Foo</span> <span class="n">foo</span> <span class="p">=</span> <span class="n">Repository</span><span class="p">.</span><span class="n">Load</span><span class="p">(</span><span class="n">args</span><span class="p">.</span><span class="n">FooId</span><span class="p">);</span> <span class="k">if</span> <span class="p">(</span><span class="n">foo</span> <span class="p">==</span> <span class="k">null</span><span class="p">)</span> <span class="k">return</span><span class="p">;</span> <span class="n">foo</span><span class="p">.</span><span class="n">Thing</span><span class="p">(</span><span class="n">args</span><span class="p">.</span><span class="n">Value</span><span class="p">);</span> <span class="n">Repository</span><span class="p">.</span><span class="n">Update</span><span class="p">(</span><span class="n">foo</span><span class="p">);</span> <span class="p">}</span> <span class="p">}</span></code></pre></div> <p>This class breaks down the high-level domain task “Do the thing” into individual bits: Load the target object, perform an operation on it, and store the results of that operation back into the database. Taken together, this all sounds complicated. But when you break the task down into the individual operations and delegate those operations to the appropriate classes, it’s not a big task anymore.</p> <p>What you should notice is that this <code>DoTheThingAndSave</code> method doesn’t do much except call a sequence of methods on other classes, just like our WebAPI Controller method above. “But wait”, I can hear you saying already, “There’s an <code>if</code> statement in there! This method has testable logic!” Not really. The <code>if</code> is a simple null ref check. Test it if you want to, but I think the value proposition is dubious. 
Besides that <code>if</code>, this class doesn’t really have any logic to it beyond calling a sequence of methods in a required order. The compiler and type system guarantee that the methods can’t be called out of order, and that none of the calls can be omitted, without throwing an error about using an uninitialized variable.</p> <p>The loading and updating logic happens in the repository, which your tests are going to mock anyway. The null check is trivial, and then the rest of the method is a simple redirect to <code>foo.Thing</code>. If <code>Foo.Thing()</code> is already tested, there’s no reason to test <code>FooService.DoTheThing()</code> also.</p> <p>“But wait!” I can hear you yelling from over the interwebs “What if we add more logic to <code>FooService</code> later and these trivial behaviors you mention become more complex or even get broken?” You don’t need tests so long as your code is trivial. If you lose that property, you do need to add tests. But then, in the interest of being lazy, maybe we make sure that things which are trivial remain so. This is a cultural effort, and one that your team would need to know and understand. The easiest tests to write are the ones which don’t get written at all. The fastest tests to run are no tests. Make sure your team recognizes that keeping certain bits of logic trivial is in their own best interests. Again, if something can’t stay trivial, tests need to be added.</p> <p>Our <code>FooService.DoTheThing()</code> method loads data out of our data store into a business object, and then performs an operation on that business object. Where are we going to add complexity anyway? We need some kind of precondition validation on the ID? We can put that in the repository. We need another condition in there for an early exit on incomplete load? We can add a Specification object, which will be separately tested. 
We need to change which method or which series of methods we call on our <code>Foo</code> business object once it’s been loaded? Maybe we refactor the body of this method out into a handful of interchangeable Strategy objects, which are individually tested. What if we need to add more method calls at the end, after <code>foo.Thing()</code>? Well, we could compose many small methods on <code>foo</code> into a single orchestrating method on that class, or we could just add them all at the end. What will a test of <code>FooService.DoTheThing()</code> show us in this case? We’ll set up a mock repository to return a mock <code>Foo</code>, which will count method calls and verify ordering? We start to venture dangerously far down the path of over-specification and micromanagement, which are big reasons why mock objects get such a bad rap in some corners. We do not need tests to prove that the order of methods called in our code is what it is in our code when our code works correctly. See? It’s even absurd to try and describe what we’re trying to test!</p> <p>If you try your absolute best and still can’t find a way to keep this method simple in the face of changing requirements, then by all means add tests for it. You don’t need tests when things are trivially simple, but when you lose simplicity you lose the benefit of not needing tests.</p> <p>My point is this: our <code>FooService.DoTheThing()</code> method doesn’t need to be tested and can resist the need to be tested because all testable, non-trivial logic can and should be moved elsewhere. This class is a simple orchestrator, delegating the real work out to other classes.</p> <p>We absolutely do not need to test that our programming language can assign a variable correctly, that it can correctly test for null, or that it can correctly call a method. 
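</p> <p>As a sketch of the Strategy refactoring suggested above (the names <code>IFooStrategy</code> and <code>DefaultThingStrategy</code> are hypothetical, not part of the original example), the orchestrator can stay trivial while the “which methods do we call?” decision moves into a separately-tested class:</p> <div class="highlight"><pre><code class="language-csharp">// Hypothetical sketch: the decision logic lives in a Strategy object,
// which can be unit tested on its own, while FooService stays trivial.
public interface IFooStrategy
{
    void Apply(Foo foo, ThingArgs args);
}

public class DefaultThingStrategy : IFooStrategy
{
    public void Apply(Foo foo, ThingArgs args)
    {
        foo.Thing(args.Value);
    }
}

public class FooService
{
    private readonly IRepository&lt;Foo&gt; Repository;
    private readonly IFooStrategy Strategy;

    public FooService(IRepository&lt;Foo&gt; repo, IFooStrategy strategy)
    {
        Repository = repo;
        Strategy = strategy;
    }

    // Still a trivial orchestrator: load, delegate, save.
    public void DoTheThing(ThingArgs args)
    {
        Foo foo = Repository.Load(args.FooId);
        if (foo == null)
            return;
        Strategy.Apply(foo, args);
        Repository.Update(foo);
    }
}</code></pre></div> <p>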
If we are indeed using mocks or stubs for our Repository instance, we don’t need to test that our mock object library can correctly create mock objects, or that those mock objects return the values we’ve told them to create. All these things are already under test, safely filed away in our “External Dependencies” folder.</p> <p>It’s a heck of a lot less work to set up a unit test for a class which has no dependencies, few dependencies or extremely simple dependencies. Classes which bring together a more complicated set of dependencies, of which a Service Layer is a common example, are harder to create tests for, and therefore there is benefit in avoiding the need to test these things. If your orchestrator needs to provide more behavior, you can encapsulate that in another method on an existing dependency or in a new dependent type, and delegate out to that.</p> <p>On a side note, I occasionally hear people arguing about whether the Service Layer should be thin or thick. My money comes down on the side of being thin (and therefore trivial) for exactly these reasons. A Service Layer, in my mind, exists to bring together dependencies (you <em>are</em> properly using Dependency Inversion, right?) and delegate out tasks to them. Simpler classes are easier to reason about, and being able to reason about things means it’s easier to understand what they do and feel comfortable that they do the right things.</p> <p>On yet another side note, I see posts around the internet for “Are you a C expert?”. These posts invariably show code examples of things like weird instruction ordering, weird type-casting, weird out-of-bounds issues and other weird code and then end with some little snip about how you can’t really call yourself an expert if you can’t answer these questions about the language. In retort, I like to point out that knowing not to write code like that is a much more valuable skill than knowing exactly what bad things happen when you do. 
Not walking into the minefield in the first place is a much stronger sign of intelligence in my mind than walking in with some heuristic for how to avoid most of the mines. I’ll gladly call you an expert if you know what code to not write, even if you can’t tell me exactly <em>why I shouldn’t write it</em>. Think about this next time you’re designing code for testability. If your code is simple and follows best practices, you don’t need a super-expert to sort it out, and you don’t need big complicated tests (or maybe any tests at all!).</p> <p>After all that my thesis stands: Any class which does nothing but bring together dependencies and delegate to them can be made trivial, and trivial classes like this don’t need to be tested.</p> <h2 id="pure-data-objects">Pure Data Objects</h2> <p>Any object which is pure data and has no logic or only trivial logic (null checking constructor parameters) does not need to be tested. You need to trust your programming language when it says it will make public <code>get</code>/<code>set</code> property accessors work as advertised.</p> <p>Things like DTO patterns and Null Object patterns fall into this category. If there is no logic, there is no need for a test.</p> <h2 id="what-do-we-test">What Do We Test?</h2> <p>So far I’ve listed several things which do not need to be tested. What’s left? What do we test?</p> <p>Let’s start by considering a tree. Tree has a trunk. Trunk splits into branches. Branches into twigs and at the end of the twigs we have the fun, interesting stuff like leaves, flowers and fruit. The inside of your tree is just wood: trunk and branches. Leaves, flowers and fruit form a shell or canopy around the outside, where the action is. Leaves need to be on the outside of the structure to receive unobscured sunlight. Flowers form on the outer edge where pollinators are likely to fly by. Fruit forms where the flowers were. 
New growth, stems and shoots, typically come out where your leaves are, with buds and free-flowing metabolic energy. The inside of your tree is the structural stuff, the wood. The outer edge is where the interesting stuff happens and where new growth is added.</p> <p>Now let’s think about your program call graph like a tree. Your trunk is your entry point. From there you call methods on your Orchestrators (your branches) which in turn eventually call methods on your testable classes (leaves, flowers, fruit). (or, if you’re using Dependency Inversion, which you should be, you take references to child objects and call methods on them). The wood is just boring and structural. It works because it must. If the wood doesn’t work, there is no tree. If the wood works but the leaves don’t, what you have is a big waste of time and energy.</p> <p>You start with one method on one object, which calls more methods on other objects, which in turn calls even more methods on other objects, and so on and so forth. At the very end of our graph, the leaves, we have things which either call out to external tools or else do pure work and return the results. External tools are already tested, so what’s left for us to test is our own pure methods. Methods which we write, which perform logic and return.</p> <p>So think about your code as having the following three types of objects:</p> <ol> <li><strong>Branch Classes</strong> which do no work themselves, but instead delegate work to children, and then aggregate results back to the parent. (Orchestrators)</li> <li><strong>External Gateways</strong> which interact with external resources and libraries. 
These are basically orchestrators themselves, because they do no real work but instead reformat the request and send it on to the external resource like any other dependency.</li> <li><strong>Pure Classes</strong> which perform some kind of computation and return a result without calling external resources, or which make only trivial calls.</li> </ol> <p>In an ideal project, if you’ve really taken the time to structure your code in the way I’m suggesting, you only need to test your non-adaptor Pure Class leaves. Your Branch Classes just delegate tasks to children so those are trivial and don’t need tests. Your external resources are not going to be included in your unit tests so your External Gateways will probably be replaced with mocks or stubs and won’t be tested either. The only things you need to test are your Pure Classes, which might not necessarily be “pure” in the sense of being completely side-effect free, but they do work and return results without calling out to another class.</p> <p>If you can separate out your classes into these three piles, classes which only delegate but do no real work, classes which are simply abstraction boundaries around the outside world, and classes which only really do work and do not delegate, you can focus your testing on the last group and be confident that you do not need to worry about anything else.</p> SOA Language Selection 2015-05-24T00:00:00+00:00 <p>When you hear about Service Oriented Architecture in general, and Microservices specifically, you frequently hear about one of the common benefits: You can use the right tools for each individual job. Want to write a service with Node.js? Go for it. Want to write a service in Python or Ruby? Do it. Each individual service is so small, the thinking goes, that you can afford to experiment. Later, if you have to rewrite, it’s easier to do.</p> <p>Here’s a story of a team which I was not a part of. 
They were a .NET shop with plenty of JS experience from frontend work. They had a job, to write a new service for doing authentication over Active Directory. The developer on the project didn’t know how to do the job with C# or JS, but he did know of a Ruby Gem that solved the problem and exposed a friendly API. He petitioned for and received permission to write the service in Ruby, with Rails.</p> <p>A few months later, when that developer left for greener pastures (his resume now had “Ruby” listed prominently towards the top!) the team was stuck in a pickle: How do we maintain and even improve this service, which is written in a language that nobody else on the team knew? Nobody on the team knew Ruby, and certainly none of them knew Rails. In an enterprise-wide architecture with dozens of .NET-based services, this one little Ruby project stood out as a growing problem. Eventually, when the authentication system needed to be upgraded, this one little project needed to go through a complete rewrite. Nobody on the team was able to do the necessary upgrades in place.</p> <p>When people are talking about using the right tool for the job, one thing that is almost always missing from the calculations is the availability of resources. How often do we hear the following kinds of things?</p> <p>“We’ll deploy this service to a Linux server” when nobody on the Ops team has experience administering Linux.</p> <p>“We’ll use MongoDB for this application” when we have a dozen DBAs on our data team with experience in SQL Server, and none with experience in MongoDB.</p> <p>I’m not saying we can’t ever use unfamiliar technologies, or that you can’t, as a team, make a focused effort to develop new skills. What I am saying is that the availability of competent resources to develop, deploy, monitor and maintain the software needs to be taken into account. If there is a compelling technical reason to use something new, by all means go for it. 
But if you can’t demonstrate that the benefits of a new technology outweigh the core competency of your team to leverage existing technologies, maybe you shouldn’t go down that road. Being different for the sake of difference (or worse, so your developers can pad their resumes with new in-demand buzzwords) isn’t a reason to use a foreign technology.</p> <p>Yes, with a proper distributed architecture you <em>can</em> write different services in different languages with different backend technologies. I’m just saying that just because you can do something doesn’t mean you <em>should</em>. Make sure your team is able to support the project through its entire lifetime, keeping in mind that people will leave your team and need to be replaced, and all the other headaches of team management too.</p> Repository and EntityFramework 2015-03-21T00:00:00+00:00 <p>I’ve seen more than a couple of tutorials online talking about how to implement a Unit Of Work and a Repository pattern using Entity Framework. 
What you will inevitably see is something like this:</p> <div class="highlight"><pre><code class="language-csharp">public class UnitOfWork
{
    private readonly DbContext m_context;

    public UnitOfWork(DbContext context)
    {
        m_context = context;
    }

    public void SaveChanges()
    {
        m_context.SaveChanges();
    }

    public Repository&lt;T&gt; GetRepository&lt;T&gt;() where T : class
    {
        return new Repository&lt;T&gt;(m_context);
    }
}</code></pre></div> <p>It’s also worth mentioning that you could make a few changes and have a Repository which does not rely on a Unit Of Work, if you don’t need the kinds of transactional consistency that a UOW brings:</p> <div class="highlight"><pre><code class="language-csharp">public class Repository&lt;T&gt; where T : class
{
    private readonly DbContext m_context;

    public Repository(DbContext context)
    {
        m_context = context;
    }

    public T Get(int id)
    {
        return m_context.Set&lt;T&gt;().Find(id);
    }

    public void Add(T entity)
    {
        m_context.Set&lt;T&gt;().Add(entity);
        m_context.SaveChanges();
    }
}</code></pre></div> <p>In either case, this seems all well and good. We’ve implemented a Repository and possibly a Unit Of Work that both use the underlying <code>DbContext</code> and abstract it away so we can mock out our data store for unit tests. Win, right?</p> <p>What should be immediately obvious here is that our code doesn’t do anything. We haven’t written a Repository, we’ve written an <strong>Adaptor</strong>. EntityFramework already has a Repository for us. It doesn’t have the word “Repository” in the name, but it’s a repository nonetheless.</p> <p>In EntityFramework, <code>DbContext</code> <em>is a unit of work</em> (among other things) and <code>DbSet&lt;T&gt;</code> <em>is a repository</em> (among other things). These patterns are already properly employed, so all you need to write for your code is a thin adaptor to abstract away the DB details and be able to use a class with the word “Repository” in the name.</p> <p>When we are talking about classes like this which are very thin, it’s worth asking if they are required at all. That is, is it worth the modest amount of effort required to implement this adaptor in the first place? Should we spend time wrapping up <code>DbContext</code> in a custom <code>IUnitOfWork</code> and wrapping up our <code>DbSet&lt;T&gt;</code> in a custom <code>IRepository&lt;T&gt;</code>?</p> <p>Arguments could be made either way and, of course, it may just depend on the needs of your application and your style of writing code. If you don’t like the Repository pattern, you obviously don’t want to do this. 
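</p> <p>To make the mocking argument concrete, here is a hedged sketch of the kind of abstractions such an adaptor might expose (the interface names and members are illustrative, not prescribed by EntityFramework):</p> <div class="highlight"><pre><code class="language-csharp">// Illustrative interfaces only: consumers depend on these abstractions,
// and unit tests can supply in-memory fakes or mocks instead of a real
// DbContext hitting a real database.
public interface IRepository&lt;T&gt; where T : class
{
    T Get(int id);
    void Add(T entity);
    void Remove(T entity);
}

public interface IUnitOfWork
{
    IRepository&lt;T&gt; GetRepository&lt;T&gt;() where T : class;
    void SaveChanges();
}</code></pre></div> <p>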
But, for people on the fence, we can do a little bit of cost/benefit analysis.</p> <p><strong>Reasons Why</strong></p> <ol> <li>We can easily mock our <code>IRepository<T></code> and <code>IUnitOfWork</code> dependencies in our unit tests, giving us the ability to easily isolate tests from our database (making them faster and more reliable).</li> <li>We provide a much smaller, simpler interface on the EntityFramework types which are, arguably, bloated. Consider this something of an extension of the Interface Segregation Principle (ISP).</li> <li>It gives us isolation from EntityFramework, so we could potentially make changes to our ORM or our storage mechanism without having to make sweeping changes outside our data layer.</li> <li>You can more easily include these adaptor types in other existing machinery which operates on Repositories over different stores.</li> <li>We gain the ability to include our <code>IRepository<T></code> in our domain layer, and inject the implementation elsewhere. This is in accordance with things like the Clean Architecture or the Onion architecture, which both bring their own benefits. (EntityFramework does a very poor job of providing its own interfaces for these purposes, so if you want the architecture to work, you need to provide some kind of adaptor and IRepository is as good as any).</li> </ol> <p><strong>Reasons Why Not</strong></p> <ol> <li>It does look a little like needless indirection, especially when EntityFramework is already implementing the patterns we need.</li> <li>We can stub in <code>DbContext</code> and <code>DbSet<T></code> in our unit tests to avoid hitting the database, giving us isolation (though, admittedly, doing this is much more difficult than employing a Mock Object framework like Moq).</li> <li>The reality is that you aren’t going to switch to a new ORM, especially if you’re only employing the few features which can be cleanly exposed through the standard Repository interface. 
EntityFramework does those things no worse than any alternative, and the pain of switching ORM to get one with the same features implemented just as well isn’t worth doing. (if you’re employing more powerful, ORM-specific features, you probably aren’t using IRepository because that abstraction boundary is so limited).</li> </ol> <p>There’s no magic bullet argument in either case, so it all comes down to preference. Is the relatively small amount of code you need to write for this purpose worth the effort? You’ll have to make this decision for yourself.</p> <p>Personally, my style of writing code almost always benefits from having these Repository adaptors so I tend to provide them unless a simpler solution or a more flexible one is called for. For example, if my only database interactions are some complicated queries which don’t fit nicely into the classic Repository interface, I may skip that but instead write other wrappers around EntityFramework such as a Table Gateway. I would almost always want to have a wrapper of some sort, in any case, because I tend to structure programs very much in an Onion or Clean style.</p> The Single Responsibility Principle 2015-03-14T00:00:00+00:00 <p>The problem with the <strong><a href="">Single Responsibility Principle</a></strong> (SRP) is in defining exactly what is or is not a “responsibility”. As with so many things in programming and life, the definition is fluid and can change depending on context. Sometimes it also has to do with the level of abstraction that you use to discuss the class.</p> <h2 id="example-repository-pattern">Example: Repository Pattern</h2> <p>Consider the case of the Repository Pattern. 
As formulated by Martin Fowler in his timeless classic, Patterns of Enterprise Application Architecture (PoEAA), a repository typically contains some or all of the following parts:</p> <ol> <li>An interface to the database, to execute queries and return results</li> <li>A Data Mapper object, which maps results from the Database into objects usable by the software system.</li> <li>A Query Object or some kind of query generator which can take query parameters and translate those into queries on the Database</li> <li>Typically, some kind of Identity Map to prevent loading and mapping the same data object more than once.</li> </ol> <p>This seems like the repository has several different responsibilities: Working with the database, mapping database results, constructing search queries and possibly some caching tasks for performance reasons. That sounds like more than one responsibility.</p> <p>But, if we go up a level of abstraction and look at the interface the Repository provides instead of the pieces from which the Repository is assembled, we get the kind of definition that Martin Fowler used to describe it, which undeniably looks like a single unified thing:</p> <pre><code>Mediates between the domain and data mapping layers using a collection-like interface for accessing domain objects. </code></pre> <p>So it <em>mediates</em> between layers using a <em>collection-like interface</em>. That’s one responsibility and everything else is an implementation detail which might be delegated to other classes in other places.</p> <h2 id="context-matters">Context Matters</h2> <p>If we look at Repository in a different context, suddenly the same exact pattern becomes a clear violation of the SRP. Consider the case of a CQRS kind of architecture. In CQRS, as a refresher for readers who might not be familiar with the idea, we separate our commands (Insert, Update, Delete) from our queries (Load, Find, LoadAll). 
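</p> <p>As a quick sketch of that separation (the interface names here are hypothetical, chosen only to illustrate the shape of the split):</p> <div class="highlight"><pre><code class="language-csharp">// Hypothetical CQRS-style split: the command side and the query side get
// separate repository interfaces, so each can change independently.
public interface IFooCommandRepository
{
    void Insert(Foo foo);
    void Update(Foo foo);
    void Delete(int id);
}

public interface IFooQueryRepository
{
    Foo Load(int id);
    IEnumerable&lt;Foo&gt; Find(FooQuery criteria);
    IEnumerable&lt;Foo&gt; LoadAll();
}</code></pre></div> <p>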
Ideally, this separation would be from top to bottom in the application: Separate interfaces at the top-level (or, at least, at the Service Layer) all the way down to separate data stores. We might have one master database which accepts writes, and several read-only slave databases for our reads. Notice also that the schemas of the two databases might actually be different, to satisfy the different needs of the consumers. The read-only slaves may opt for a heavily denormalized storage structure which is easier to query.</p> <p>In this CQRS setup then, the Command stack would use a database interface which interfaces only with the master DB and the Query stack would use a separate database interface which interfaces with the read-only slaves. We could still use Repository patterns in both places, because we would still like to have a collection-like interface over the data layers. We would probably end up with two separate Repository types, because we have two separate responsibilities: One to do normal CRUD operations on the Master DB, and one to read data from the Slave DBs. Making a change to one side of the stack would not require us to alter logic in the other side. Even though our two repository types are nominally acting on the same conceptual data, we can’t combine them into a single class because they have two separate responsibilities.</p> <p>(Arguably, the query side of the stack might opt for something more streamlined than a repository because the normal Insert/Update/Delete methods that are part of a typical repository interface wouldn’t be used in this case.)</p> <h2 id="different-formulations-of-the-principle">Different Formulations of the Principle</h2> <p>One way to think about the SRP is this: <strong>Describe your class in plain language. 
If you use the words “and” or “or” in the description, you probably need to break it up into smaller classes.</strong> This can work but, as shown in our repository example above, the way we describe the class depends on the level at which it is viewed, and the context in which it lives. (Also, using “and” to show which two layers it mediates between doesn’t count, making this formulation even more problematic).</p> <p>I’ve heard the SRP described as <strong>The class should only have one reason to change</strong>, but then the wise-asses among you will immediately retort with “There is only one reason to change: if the code doesn’t meet the requirements!” Not meeting the requirements is the one and only reason why anything changes, therefore your entire application can be stuffed into one super big class. This is clearly not the intention of the SRP.</p> <p>(Other wise-asses among you may note that a certain reading of the Open/Closed Principle might preclude <em>any changes</em> to a class in favor of extending through new subclasses, meaning that the SRP is null and void. Don’t think like this.)</p> <p>I’ve also heard the SRP described as this: <strong>There should only be a single requirement which, when changed, will cause your class to change</strong>. But that seems kind of vague and seems to open the door to the idea that the granularity with which programmers write the code is dependent on the granularity of the requirements written by the analysts. A pretty big portion of your design suddenly is out of the hands of the people implementing it, and is put into the hands of people who aren’t considering the architecture of the code at all! In this case we have to start asking what the single point of truth is: Are the analysts secretly in charge of your application architecture, or are your programmers supposed to review and re-write all your requirements? 
Either way, there are clear problems here.</p> <p>One other way to describe the SRP is <strong>Do one thing and do it well</strong>, which is decent enough but immediately begs the question “what is a ‘thing’?”. Taking this to the logical extreme can lead to the idea of lots of little classes with one method each:</p> <div class="highlight"><pre><code class="language-csharp">public class ManagerReport
{
    public void Execute()
    {
        var data = new ManagerReportDataLoader().Execute();
        var document = new ManagerReportFormatter(data).Execute();
        new ManagerReportPrinter(document).Execute();
    }
}</code></pre></div> <p>…and this is clearly not what we want either. 
I’m not saying that we can’t or shouldn’t use the Command pattern where appropriate, only that if we’re writing this kind of code we can drop the pretense of object-orientation entirely and use a simpler, procedural language instead.</p> <h2 id="srp-according-to-uncle-bob">SRP According to Uncle Bob</h2> <p>Robert C Martin, who originally formulated the SRP in his work <em>Agile Software Development, Principles, Patterns, and Practices</em>, has said that “Responsibilities are People” in the sense that they tend to map to people with individual roles. For example, an accountant person might need certain accounting reports, where a manager might need certain reports about people and resources on the team. Even where the logic for these two reports might overlap, we would still keep them implemented as separate classes, because the <em>people who need them</em> are different and therefore the <em>reasons why those report classes may change</em> are also different. If we are aggressive about sharing code between the Accounting and Managerial reports and then one of those two people requests a change to one but not the other, we end up in quite a mess. Two pieces of code which may look the same, but which serve different masters, are repeated and therefore cannot and should not be combined. At least, not directly. 
As an example, let’s consider two classes, a ManagerReport and an AccountantReport, which both have a method that iterates over a list of Employees and totals up their salaries:</p> <div class="highlight"><pre><code class="language-csharp">public class ManagerReport
{
    public double TotalSalaries(IEnumerable&lt;Employee&gt; employees)
    {
        return employees.Sum(e =&gt; e.Salary);
    }
}

public class AccountantReport
{
    public double TotalSalaries(IEnumerable&lt;Employee&gt; employees)
    {
        return employees.Sum(e =&gt; e.Salary);
    }
}</code></pre></div> <p>We might be tempted to immediately combine these methods and indeed these classes to maximize code sharing:</p> <div class="highlight"><pre><code class="language-csharp">public abstract class ReportBase
{
    public double TotalSalaries(IEnumerable&lt;Employee&gt; employees)
    {
        return employees.Sum(e =&gt; e.Salary);
    }
}

public class ManagerReport : ReportBase { }

public class AccountantReport : ReportBase { }</code></pre></div> <p>We can be quite proud of ourselves at this efficient reuse of code until the accountant comes back and says “I need total yearly pay for all employees, even hourly ones”. Now what do we do? We either have to take apart our entire Report inheritance hierarchy or else we need to override the method in one place. Now we have an inheritance hierarchy which is completely unnecessary and which does nothing but complicate our system.</p> <p>Going back to the very beginning, we have two options: 1. We can keep two classes, with method implementations which look identical, but secretly aren’t, because they serve different purposes for different people or roles (and these differences may not be clearly expressed in the code because they have to do with cultural aspects of our organization). 2. 
We can delegate out the logic which looks shared into some kind of helper class, and easily de-duplicate later:</p> <div class="highlight"><pre><code class="language-csharp" data-<span class="k">public</span> <span class="k">static</span> <span class="k">class</span> <span class="nc">PayHelper</span> <span class="p">{</span> <span class="k">public</span> <span class="k">static</span> <span class="kt">double</span> <span class="nf">TotalSalary</span><span class="p">(</span><span class="k">Now when we need to make a change to the AccountantReport, we can change that implementation without needing to worry at all about hierarchies or affecting the ManagerReport at the same time.<">TotalPay<">TotalCombinedPay<>I’m not saying this is <em>the best way</em> to do it, I’m just pointing it out as an option which initially allows code sharing but also allows us to make modifications in the spirit of the SRP without worrying about collateral damage.</p> <p>One other concern I do have about the “responsibilities are people” idea is that we might fall into the trap of having a single God Object per person: An AccountantStuff object that does everything and anything that the accounting department might want, and a ManagerStuff object that does everything and anything a Manager may want. It might be better to view a responsibility as a combination of person and task. For example the Manager might need information about team salary totals when he’s in the Big Budget Meeting, but he might need information about schedules and vacation time in his Project Planning Meeting. 
Those two contexts, “Manager In Big Budget Meeting” and “Manager In Project Planning Meeting”, require at least two classes (and possibly more if generating, formatting and printing tasks need to be delegated out elsewhere).</p> <p>The idea of “responsibilities are people” is definitely one dimension to consider when thinking about the SRP, but it certainly can’t be the only one.</p> <h2 id="inverting">Inverting</h2> <p>Let’s look at the SRP from the other side. Instead of asking “what do I need to do to implement the SRP correctly” let’s try asking “If implemented correctly, what results would we expect the SRP to produce?”</p> <p>However we define it, assuming our code follows the SRP correctly, what benefits should we get? Here’s at least a partial list:</p> <ol> <li>When a change needs to be made, it should be possible to unambiguously identify a single place in your code to do it.</li> <li>Classes should be easy to understand because they only do one thing and that one thing should be obvious.</li> <li>The purpose of the class should be easy to explain.</li> <li>If all classes do only one thing each, and those things are well understood, workflows composed of such classes should be easy to understand.</li> <li>Operations on objects, including changing data and executing methods, should not produce unintended side-effects or changes to unrelated data and behavior.</li> <li>Classes should be able to be reused in new workflows, because the class is not tied to any one particular workflow.</li> <li>New workflows should be able to be constructed by composing together classes from existing workflows. That is, if the class “does one thing”, and you need that one thing, you should be able to use that class for that purpose.</li> <li>It should be obvious how and when to test the class.</li> </ol> <p>Some of these ideas, especially 6 and 7, do start to blur the lines a little bit and bring in other ideas besides pure SRP.
But, my thinking goes like this: if you have a class which implements an operation and you have a consumer which needs to utilize that operation, you should be able to use that class with that consumer. If you cannot, it’s probably because your class does two things: It performs its intended operation <strong>and</strong> it adapts that operation to a specific workflow. If you can tease apart the general implementation from the specific interface, you can make that class more focused and thus more reusable.</p> <p>I like to think about the SRP as being kissing cousins with DRY: Your class should only have one responsibility, and your program should only implement that responsibility in that one class. Each given idea exists in one and only one place.</p> <h2 id="rediculous-example">Ridiculous Example</h2> <p>You need to get your car cleaned, so you take your Car object to your CarWash object and call the method <code>car.PayForCarWash()</code>. But that’s clearly stupid, the car is just a means of transportation. Just because you pay for the wash while still sitting in the driver’s seat doesn’t mean paying is a method on the car. The person pays for the wash, so we move that method to <code>person.PayForCarWash()</code>.</p> <p>But that’s kind of stupid too. If I then go out and buy groceries, and pick up my drycleaning, and grab some diapers, does that mean I need methods <code>person.PayForGroceries()</code> and <code>person.PayForDryCleaning()</code> and <code>person.PayForDiapers()</code>? Certainly not. We might have a single payment method and we might move that onto a wallet object.</p> <p>(Yes, I know, the wallet is inanimate and doesn’t actually pay. The person is the actor who reaches into the wallet and obtains a method of payment. This doesn’t change the conceptual idea that it is the job of the wallet to hold or “manage” your money, and the job of the person to carry a wallet.
I’ve been in line at the grocery store without my wallet before, and I can tell you that no amount of me having the agency to obtain money from a wallet means I can do so if my wallet is sitting on the table back home.)</p> <p>Now our method call looks like this:</p> <div class="highlight"><pre><code class="language-csharp">car.Driver.Wallet.Pay(carwash.Price);</code></pre></div> <p>The car has a driver, the driver has a wallet, the wallet can pay any price to any recipient, and the carwash keeps track of its own pricing. Every object in our graph has one responsibility, and when they come together, the result is clear and easy to understand. To illustrate, let’s see how this system might change in response to adding new requirements:</p> <ol> <li>There are different ways to pay for something, and not every business accepts every method of payment. The carwash needs to change to expose a list of accepted payment methods, the wallet needs to change to expose a list of currently available payment methods. Neither the person nor the car should need to change at all.</li> <li>It costs more to wash a tractor-trailer than it does to wash a car. The car object should expose a “vehicle type” property to tell what kind of vehicle it is, and the carwash should ask for the vehicle type when determining the price.</li> <li>If the driver doesn’t have any money she can ask one of the passengers, starting with whoever is riding shotgun (it’s as much a privilege as a responsibility). Also, if the driver is owed money because she covered lunch, one of the passengers may be obligated to pay for the wash. The car changes to have multiple passengers, not just the driver.
Some sort of “social convention” object would keep track of outstanding debts and also create a priority list of payors, starting with driver and shotgun, then moving to the poor saps in the back.</li> </ol> <p>In each of these three cases, whenever something needs to change, it should be possible to quickly determine where that change needs to be made. There may be other options in these cases, depending on the other system requirements which I have not explicitly written here.</p> <h2 id="use-your-intuition">Use Your Intuition</h2> <p>So there are different ways to describe and approach the SRP. My point with this post is to demonstrate how the SRP seems quite simple, but when you start trying to define it clearly and concisely, you find that most definitions are lacking. This doesn’t mean that the SRP is unknowable or unworkable, only that it’s worth considering both how you pursue it and what you hope to get from that pursuit. Taken together, all of these ideas, along with others I didn’t mention, plus intuition and experience, can help to produce great code.</p> <p>I believe that writing good software is an iterative process, and one with few absolutes in terms of correct style or design. The Perl folks are quite fond of saying “There’s more than one way to do it”, which really is an apt mantra when it comes to architectural decisions like this. You should be able to look at a class and determine what might cause the class to change. Then, if you feel like the list has more than one non-trivial item (where a trivial item might be “the public interface of a dependency changes, requiring my class to change the way it interacts with that dependency”) you might want to consider either viewing the class from a higher level of abstraction or else breaking it up into smaller pieces.</p> Resume Mistakes 2015-03-08T00:00:00+00:00 <p><a href="/2015/02/15/resume.html">Last time I talked about what a Resume should be</a>.
Today I’m going to talk about what it absolutely should not be. Since I’ve seen a lot of resumes lately, I’m going to share some of the things I’ve seen that did more harm than good.</p> <h2 id="pages">6 Pages</h2> <p>It seems like about 6 pages is the norm for the resumes I’ve been seeing. Of all the resumes that have come across my desk lately, the vast majority of them have been 6 pages. Some were 5. Almost none were 3 or less. I haven’t seen a single 1-page resume (you know, what it’s supposed to be).</p> <p>We had a gentleman come in with one of these resumes, long-form prose descriptions of every little thing he’s ever done. Sitting across the table from him were myself and all the other senior members of the team (it was a small team).</p> <p>Let me lay a little knowledge on you: We’re busy people. I’m not sitting around all day, twiddling my thumbs and reading resumes. I print off a copy of the resume about 5 minutes before I walk into the interview. I do a quick scan of it, circle the important ideas, write a couple quick notes about questions I might ask, and then it’s show time. Maybe you have an image of interviewers getting together in a little room, hours in advance, discussing the candidate in great depth and preparing everything about the interview experience in minute detail. Sorry, that’s not how it works (at least, not at any job I’ve ever been at).</p> <p>Almost every time I’ve been a candidate in an interview myself, the first few moments are spent with the interviewers reading the resume while I introduce myself; they’re just getting familiar with the highlights because they have better things to do with their time, like work and produce software.</p> <p>Everybody has something better to be doing. Time spent reading your long, complex resume is time not doing what I’m paid to do: actual work.</p> <p>Back to our gentleman, he wasn’t doing a great job of really selling himself so we started to ask him questions.
“Do you have experience in X?” And then he would say something like “Yes, I did that at a prior company.” Then the team and I would flip through the resume to find the relevant section and read it. Silence ensues. This is a big waste of time, and it demonstrates two things: That this person can’t organize important information in a readable way, and that this person can’t sell himself because he expects the resume to do all the talking. Two important takeaways:</p> <ul> <li><strong>Long resumes make it hard to find important pieces of information.</strong></li> <li><strong>Be respectful of the time commitments of the interviewers, and don’t give them a novel to read before your interview.</strong></li> </ul> <h2 id="padding-too-much">Padding Too Much</h2> <p>The resume is nothing but your ticket to an interview. A hiring manager or an HR person or a recruiter or somebody is going to look at your resume, do a quick keyword check and, if you match, invite you in. The interviewers will typically read your resume shortly before the interview starts, and only read closely enough to get a sense of you, and maybe direct some lines of questioning.</p> <p><strong>Do not put anything on your resume that you do not want to be asked about.</strong></p> <p><strong>Anything you put on your resume can and will be used to direct the questions you are asked.</strong></p> <p>We had a young woman come in to our office applying for a senior-level position on our team. Her resume was, of course, about 6 pages long but luckily the first page included a quick summary of her skills. So, in other words, the first page of her resume was what the entire resume should have been, and the last 5 pages were garbage. But, I digress.</p> <p>We were looking for a C# coder, so we weren’t putting too high a premium on other skills. But one thing stood out on her resume that we just needed to ask about:</p> <p>“Expert in SQL query optimization”</p> <p>Wow. That’s actually a cool skill to have.
Even if it’s not what we were expressly looking for, that kind of skill (which typically comes from lots of practice and dedicated learning) could definitely transform a good candidate into a great one. Combine that with some teaching ability, and this person has the potential to improve the skills of our entire team. So, I had to ask her to explain, “What do you do, to optimize an SQL query?”</p> <p>This wasn’t some kind of “Gotcha!” moment, I wasn’t trying to put her on the spot. I was trying to give a self-described expert an opportunity to shine. Instead, what we got was this:</p> <p>“Well, I would look at the query and maybe try to reduce or re-order the joins.”</p> <p>That’s it? Reorder your joins? That’s not SQL optimization and certainly isn’t “expert” optimization. So I asked her a few other questions:</p> <ul> <li>“Have you ever examined an execution plan?” No.</li> <li>“Can you tell me about indexes?” No.</li> <li>“Do you know what query Statistics are?” No.</li> </ul> <p>I want to reiterate here, because this is important: <strong>I would not have asked about SQL query optimization if she hadn’t listed herself as an expert on the topic.</strong> When I see things like that on a resume, I ask because I want the candidate to shine. I want to ask lots of questions to find the topics that the candidate is the most passionate and knowledgeable about. Everybody is different, some have different strengths and weaknesses, and I’m just trying to get at the heart of the matter.</p> <p>When you list “expertise” on your resume, even if it’s in a tangential subject and you can’t back it up, I have to seriously start questioning your abilities on every other thing you have written. If her expressed “expertise” translates to “passing familiarity with”, then how bad must she be at the subjects where she isn’t listed as an expert?</p> <p>She ended up not getting an offer, and our decision was made in no small part because of this single exchange.
If she hadn’t listed expertise in SQL query optimization on the resume, she very well might have been considered much more strongly.</p> <h2 id="a-brief-history-of-time">A Brief History of Time</h2> <p>We had a gentleman come in, with another glorious 6-page resume, who listed job experience going all the way back to the early 1990s. Like, not just dates and places, but detailed lists of technologies and techniques. After all, he did have 6 pages to fill.</p> <p>So, we asked him about one of the things he claimed to have done circa 1995. Silence. He was dumbfounded. He couldn’t remember what he did 20 years ago and, after profuse apology, said he couldn’t really talk about it.</p> <p><strong>Don’t list anything on your resume that you aren’t prepared to talk about</strong>.</p> <p>Seriously. If you expect me to read the damned thing, don’t waste my time with things you can’t even remember. Anything you put on your resume, expect me to ask about it. Anything that I ask about, you sure as shit better be prepared to talk about. Nothing decreases my confidence in you faster than me asking you about your life and your work and you not being able to answer me.</p> <h2 id="worthlessness">Worthlessness</h2> <p>We had another gentleman come in with a painfully long resume, and he listed every single job he’s ever had, including jobs that had nothing to do with his career. Things like this time he worked at a store, or the time he was a courier, or the time he managed a computer lab.</p> <p><strong>Nothing belongs on your resume except the things that are directly relevant to the job for which you are applying.</strong></p> <p>If you want to list out your entire employment history, do it on Facebook or LinkedIn, or some place where there is no purpose except narcissism and “networking”. 
When you go to apply for a job, take your entire work history, cut out anything that isn’t relevant, condense anything that isn’t recent, and put a spotlight on only the few things which you think are going to be the most interesting to your prospective future employer.</p> <p>A resume is not “your entire work history”. A resume is a way to earn an interview with a company that has a specific opening. Anything on your resume which doesn’t immediately make the hiring manager say “this is all very relevant for the position I need filled” is a waste of space, time, and energy.</p> <p>Don’t, as we later figured out this gentleman did, just copy+paste your entire LinkedIn profile into Microsoft Word and call it your resume. If you do this, definitely make sure you go back and check that the formatting works in the new program.</p> <p>If you can’t even put time and effort into your resume, I sure as hell don’t expect you to put time or effort into the job we need you to do. If your resume is a representation of you, and if it’s an undisciplined, disorganized mess, I’m going to assume that <em>you are an undisciplined, disorganized mess.</em> Somehow, I suspect this isn’t the image you’re trying to craft for yourself.</p> <h2 id="final-thoughts">Final Thoughts</h2> <p>As a computer programming professional, what is my job? I take requirements, I distill those down into workable designs, and I implement. I do what is required, I don’t do what is not wanted. I try to produce a product that meets the immediate needs of the stake-holders.</p> <p>A job posting <em>is a requirements document</em>. You and your resume <em>are the product</em>. You need to produce and present a product which meets the stated needs of the potential future employer. 
If you can’t do that on your resume, when your career and reputation are on the line, I have absolutely zero faith that you are going to be able to do it when the fate of the team and the company are on the line.</p> <p>Get your shit together. Seriously. If you have a resume and it’s longer than 1 page, <strong>you have misunderstood the requirements and you are doing it wrong</strong>. If your resume uses unnecessary prose and hides the important keywords, <strong>you are doing it wrong</strong>. You need to start doing it right, as a demonstration to your future employers that you’re going to do what they need the right way too.</p> Your Resume 2015-02-15T00:00:00+00:00 <p>You’re putting together your resume to highlight your software development experience. How does it look? What do you put on there? What do you leave out? How do you know you’ve done a good job? Today I’m going to share some of my advice for writing up a resume for programmers.</p> <p>As always, I’m no expert so take all this “advice” with a big grain of salt. Having spent time on both sides of the hiring process, I’ve been able to identify some things that work and some things that don’t. I suspect that none of this advice will be seen as controversial, as it mostly consists of advice I’ve heard elsewhere and am simply bringing together in one place.</p> <h2 id="length">Length</h2> <p>A resume is like a Google search. It’s all about <strong>matching keywords</strong>, and almost nobody looks past the first page. Recruiters look at your keywords and search for jobs with similar keywords. Hiring managers look at keywords to see if they fit in with the technology stack they’re trying to staff.</p> <p>A resume should be <strong>One Page Long</strong>. Yes, this means you.
I know you and your type: you think that you have more things to talk about, and every additional detail you can cram in there will make you a more attractive candidate, and you want to be the most attractive candidate possible. More is better, right?</p> <p>No. Bad programmer. <strong>Stop that</strong>.</p> <p>Your resume is all about keywords, and the longer the document is the harder it can be to find relevant terms. Most interviewers won’t look past the first page. Some rushed hiring manager won’t look at the bottom-half of your first page. Length is important, and going longer doesn’t make you look better. It makes you look unfocused, disorganized, unrealistic, and like the rules don’t apply to you.</p> <p>One small exception, and this probably doesn’t apply to most readers, is that certain types of senior-level people may opt for a longer format if they need to mention many different projects. For example, if you’ve been on many small short-term contracts it is probably necessary to list each one. If you have worked across many projects using different technologies and ideas, you might benefit from listing those out. Keep in mind that this may have the effect of demonstrating versatility but not deep expertise. If your job history just has lots of entries in it, you should consider omitting the ones that are shortest or least relevant (and be prepared to explain why you don’t seem to stay in one place for very long).</p> <h2 id="formatting">Formatting</h2> <p>Your resume should be formatted with bullet points and sentence fragments. It should have lists of projects with the technologies and techniques those projects employed. This is not the time or place for prose or lots of little details.</p> <p>Don’t say “created websites with a variety of technologies”.
List those technologies out so people can see exactly what you have experience with: “Created websites with Python, MongoDB, JavaScript and jQuery”.</p> <p>For every job you have, the formatting more or less wants to be the same:</p> <pre><code>Title, Company                    StartDate-EndDate
Brief description, if necessary
* Bullet points listing individual projects/technologies
* and individual responsibilities
</code></pre> <p>Use a fancy resume template from the internet if you want, but these few bits of information are what matter: Where, when, for how long, and what you did there. This is all that matters. Hiring managers need to know when you’ve worked with relevant technologies and for how long. This helps to indicate how much knowledge you have and how fresh and current it is.</p> <p>Important notice: If you are a front-end person like a designer, you might want to take some care to make sure your resume looks nice. If the employer expects you to make nice-looking websites or UIs, your resume should be a little mini-example of your good design sensibility. Make sure that the design is appropriate for the purpose (serious, focused, professional). If you aren’t involved in design, make sure the resume communicates the necessary information in an effective way.</p> <h2 id="basic-rules">Basic Rules</h2> <p>Here are some of the most important rules for developers of all levels:</p> <ol> <li>It should fill exactly one page, no more and no less.</li> <li>Do not list anything on your resume that you are not prepared to discuss, in detail, during the interview.</li> <li>Do not include things that aren’t relevant.</li> <li>Avoid qualifiers like “expert in” or “master of” if you can’t talk about that concept at a high level.</li> </ol> <h2 id="resume-over-time">Resume Over Time</h2> <p>Now that we’ve covered the basic rules, let’s look at what you probably need to put on your resume at each stage in your career.
As a dramatic over-simplification, there are probably three important stages in your career: junior level, mid-level and senior level.</p> <ul> <li>Junior developers have little to no relevant work experience.</li> <li>Mid-level developers probably have a year or more of experience, are capable but don’t have lots of individual responsibility, technical leadership or deep expertise. These developers, depending on company and trajectory, probably have between 1 and 6 years of experience.</li> <li>Senior folks have been doing this stuff for a long time, have expertise, and can be trusted to do big projects themselves with minimal oversight. Senior developers, depending on company and individual skill, probably have more than 6 years of experience.</li> </ul> <p>These are rough guidelines. If you’re not sure exactly where you fall, read all relevant sections below and see what section most accurately describes what your resume needs to hold.</p> <h3 id="entry-level">Entry Level</h3> <p>For an entry level resume, your education is probably paramount. List that and make a big production of it: Relevant courses, GPA (if it’s better than about 3.5/4.0), Honors, large projects, independent study and things like that.</p> <p>List the relevant courses you took. These are probably the high-level major courses only. You don’t want to list gen-ed classes <strong>unless</strong> it’s something particularly relevant to the position like a foreign language (if the company industry has particular need of that language) or arts (if the job may involve design work).</p> <p>Internships and open-source projects (You do have some of that on there, right?) are good but probably not as important as your schooling. Your first job is going to involve a lot of learning and training.
The employer wants to see that you have the necessary baseline knowledge, so that you stand a good chance of surviving the on-the-job learning that will be expected of you.</p> <p>If you do have significant open-source experience, feel free to list that prominently as well, but don’t make a mountain out of a molehill. A few commits to a project, especially a well-known one, are interesting but not extremely impressive. Now, if you have made significant contributions, that’s a different story entirely.</p> <p>From top to bottom, your resume should have:</p> <ul> <li>Some sort of objective statement, one or two sentences long, talking about how you are looking to start your career and what you bring to the table in lieu of actual work experience.</li> <li>Your education: Degrees earned (if any), relevant courses, large projects (especially group projects where you played a significant role), GPA, etc</li> <li>Relevant experience: internships and open-source projects.</li> </ul> <p>If you still have space to fill, list technologies and programming languages with which you have any experience or familiarity. Nothing fancy. Have a heading for “Relevant Skills” and just list them with bullet points. The purpose of this section is just to fill whitespace at the bottom of the page, nothing more.</p> <h3 id="mid-level">Mid-Level</h3> <p>Work experience is your most important asset now, so that becomes the star of the show. From top to bottom, your resume should include:</p> <ol> <li>Objective statement showing where you want your career to go from here.</li> <li>Work experience, highlighting both technology <em>and</em> process. Process includes things like Agile, SCRUM, TDD, etc. Make sure to highlight things for which you had ownership, leadership or responsibility.</li> <li>Internship or Open Source contributions (if relevant), any professional certifications, or anything else of professional relevance.</li> <li>School, focusing only on dates and degrees.
Leave out GPA. Don’t list individual courses, but do list areas of concentration or particularly big projects, if any are relevant.</li> </ol> <p>Again, if you still have space, list bullet points with individual technology competency. Only list the most important things, and only to fill space.</p> <h3 id="senior-level">Senior Level</h3> <p>Work experience is really your only relevant asset here, unless you have a substantial body of open source or extracurricular activities (writing books, publishing papers, technical training or certification, public speaking, etc). Your resume should contain from top to bottom:</p> <ol> <li>A professional summary. One or two sentences telling what you do, how you specialize, and where you are headed.</li> <li>Work experience, highlighting big projects, key responsibilities, and any expertise. Most recent, most important things go closer to the top.</li> <li>Professional certifications, open-source projects, publications, etc</li> <li>Education summary. Just degrees and dates. Nothing else is relevant anymore. Some companies won’t care about education at all at this point; others will require a degree but only care whether or not you graduated; no other details matter.</li> </ol> <p>If you still have space, go back and expand on your biggest work projects. At this point you shouldn’t be listing things that aren’t directly related to the jobs you’ve worked.</p> <h2 id="overview">Overview</h2> <p>The resume serves one purpose: To get you in the door for an interview. Your resume should be a very short, focused overview of who you are as a technology professional. It should contain lots of keywords and make them easy to find. Once you’ve been in for an interview, your live performance there will matter much more than anything written on paper.
If you follow some basic guidelines and highlight the important things, your resume should be enough to get you in the door, and that’s all that you need.</p> Recruiters 2015-02-07T00:00:00+00:00 <p>Hunting for a job can be a difficult process. Since I have some buddies who are going through the process now, and my current employer is hiring, I figured I would share a little bit of my dubious, unsolicited wisdom on the subject. Today I’m going to talk about recruiters.</p> <p>Notice that what I say here is very specific to programming careers and is probably specific to my geographical area and skill set. Take my advice with a grain of salt.</p> <h2 id="recruiter-basics">Recruiter Basics</h2> <p>I can hunt for a job myself. I can put my resume together, search the internet for open positions, contact hiring managers, schedule interviews and negotiate salary. Recruiters are interesting here only because they can save <em>time and effort</em>. A recruiter can farm through and filter out job postings for you and contact you later with only the <em>creme de la creme</em>. In this way, a good recruiter can be “worth it”.</p> <p>But then again, as a job hunter you aren’t paying the bill, are you? And if you aren’t writing them a check, then they don’t really work for you and there is no way to compare “worth”.</p> <p>I want to make very clear: I am not against recruiters and a good recruiter can do a heck of a lot towards smoothing the transition into a new job or out of an old job. The people I complain about here are not the good ones. First, let’s talk about some common fallacies.</p> <h3 id="fallacy-1-they-work-for-me">Fallacy 1: They Work For Me</h3> <p>Towards the end of senior year in college I had a friend say to me “I’m not worried about finding a job. I have three recruiters working for me!” Sorry, they don’t work for you. You aren’t their client, you’re their <em>product</em>.
When you understand that, so many things come into focus.</p> <h3 id="fallacy-2-i-make-more-they-make-more">Fallacy 2: I Make More, They Make More</h3> <p>I’ve heard people say “Recruiters get paid a percentage, so the more I get, the more they get too. It’s in their best interest to get me the best deal possible.” No. That’s not how it works. Walmart isn’t one of the biggest and most profitable companies in the world because it sells high-end goods. Walmart’s strategy is all about <em>volume</em>, and it’s a strategy that works.</p> <p>Many recruiters do work on a percentage. Getting you an unusually good deal at the expense of the employer (their paying client) is self-defeating. See, if an employer feels like the recruiter is taking advantage, they are highly unlikely to call that recruiter again (or even answer their emails) next time they have an opening. Recruiters work for the employers, and the first step is convincing the employers to let them in the door.</p> <p>Let’s do the math. Consider a recruiter who earns 20% on any placement. The recruiter finds a senior-level programmer who has a steady job but is casually looking for other options. He can afford to wait and be selective. So the recruiter sends him an email with half a dozen opportunities through which the candidate sorts and picks one for a phone screen. The phone screen maybe leads to an interview after which the candidate is unimpressed and chooses not to pursue it. The recruiter looks and finds another half-dozen positions for the next email, and the process repeats. When the candidate finally selects a position, he is making 150 thousand dollars, giving a 20% payday of 30k. This seems like a decent payday for our dedicated and tireless recruiter, earned over the course of, let’s say, 5 weeks.</p> <p>Consider instead the same recruiter making the same commission, but working with younger candidates fresh out of school. 
Being unemployed and in considerable debt makes these candidates significantly less picky. Plus, as they are assured repeatedly, their first job out of school is never one they’ll keep for long, so the deal doesn’t need to be perfect. After a few years they’ll be able to transfer somewhere else for a big pay jump. The recruiter sends each of them an email with 12 job postings (low-level positions being much more common than more prestigious, senior-level ones) through which the candidates quickly sort, schedule several phone screens which lead to some interviews, which lead to a single offer which is quickly accepted. Each candidate places for an average of 50 thousand dollars, leaving a 10k payday for our recruiter. <strong>But</strong>, because he was able to place five junior-level candidates in the same amount of time as our one senior-level candidate, the recruiter has earned 50k in the same 5 weeks.</p> <p>A good recruiter makes the most money by stuffing the most people into new jobs in the shortest amount of time (while keeping employers happy). Obviously increasing the quality of the placement for both candidate and employer can help things like professional reputation, but in the short term any placement is a good placement.</p> <h3 id="recruiter-goals">Recruiter Goals</h3> <p>See, recruiters have three basic goals:</p> <ol> <li>Get a good deal for the employer, so the employer is willing to work with them again in the future.</li> <li>Make the process relatively easy for the candidates, so it’s easy to get them on board with the process.</li> <li>Make deals as quickly as possible, to increase volume.</li> </ol> <h3 id="unscrupulous-recruiters">Unscrupulous Recruiters</h3> <p>I was working at a job that I had been placed at through a recruiter. He called me 18 months to the day after I first started there to ask me if “things are still going well”. And I don’t need to tell you that any suggestion of things not being so would have been met with an offer to help find something better.
<em>18 months to the day</em>. I don’t know exactly what the recruiter’s contract with my employer looked like, but I have to imagine that the number “18” appeared somewhere prominent. I wonder how far in advance he added me to his Outlook calendar?</p> <p>Some recruiters have written into their contract that they won’t poach from their clients (the companies, not you). Some recruiters don’t. If it isn’t in the contract, some recruiters will be perfectly happy to place you and then turn around to poach you again.</p> <h2 id="money-and-the-10-rule">Money and the 10% Rule</h2> <p>A practice that I find to be particularly unpleasant is what I’m calling “The Ten Percent Rule”. Basically, it works like this: a lazy recruiter (or a lazy recruiter in concert with a lazy employer) asks how much money you’re making now. Being helpful, you tell her. She takes your current salary, adds 10% to it, and uses that as the start for negotiations with the prospective employer, <em>regardless of what the position is advertised to pay, or whether the expressed salary is satisfactory for the candidate</em>. The idea is this: 10% is a good enough raise that most candidates would jump at it quickly, but it’s low enough that most employers will take the deal quickly too. Win-win, right?</p> <p>When I accepted my first job out of school I had been unemployed for a few months (working on my GSOC project that summer!) and was getting desperate for anything. The offer was lower than I had hoped, but they were a small startup with a lot of growth potential and I was promised yearly performance reviews with merit-based pay increases. Being young and a little naive, I took the offer with the assumption that I would be able to demonstrate my value and be rewarded later. As we all remember, 2008 and 2009 happened. The economy collapsed in on itself and the industry I was in was particularly hard hit. We were told that while our jobs were safe, there was a company-wide freeze on pay increases.
My salary, which was low when I took the job and <em>very low</em> with two years experience on my resume, was stuck without any hope of increase.</p> <p>So I called a recruiter and he told me about that pesky 10% rule. I did the math and realized that my current low salary plus 10% was still lower than I thought I should be making. I looked without a recruiter and found a job for about a 35% increase. This was, according to some basic market research, what other similar jobs were offering at the time. It’s not that I was looking to make too much money; I was only looking to get closer to a fair market value.</p> <p>Years later, when it came time to look again (for reasons not related to money), I called a recruiter. He asked, “How much are you making now?” and I politely refused to tell him. He countered with, “How am I going to know where to start negotiating from?”, so I told him that I wanted to be paid what the job was worth, no more, no less. He begrudgingly accepted my terms, I found a job for more than a 10% jump, and I was very happy to accept it. I didn’t ask for more money because I didn’t need to. The company was offering that much because that’s what they felt I was worth to them. A lowball starting point for negotiations would have hurt me and not helped anybody.</p> <h3 id="programmer-value-and-turnaround">Programmer Value And Turnaround</h3> <p>I had a buddy who was working his first job out of college for relatively low pay (like my first job). When he moved to a new position he followed my advice and refused to tell his recruiter about his current salary. When the first offer came in, for a 45% pay jump, he took it. Despite that kind of a jump, he isn’t overpaid by any stretch.</p> <p>The problem with the 10% rule, and about half of the recruiters I talk to still cling to it, is that it can put you into a trap. If your pay is low and you move for 10%, your pay will probably still be too low.
Programmers, as they get older and more experienced, start to command more money for a variety of reasons. We lose people due to attrition and because of other opportunities. Good programmers get lured into management or team leadership, or they start to specialize. A new CS graduate doesn’t really have much experience in anything, so they can really walk into any entry-level job because most employers recognize that new grads need to be trained. You do a few years in Ruby, and when it comes time to find a new job you have a choice: find a mid-level job that requires a few years with Ruby, or start down at the bottom of the totem pole again with no years experience in something else. You either move up and continue specializing, or you start on a jack-of-all-trades path, or maybe you find something different entirely. Maybe you start moving into networking and security, or you start specializing in DB admin, or you start getting more into requirements gathering and analysis. Or maybe…</p> <p>My point with all this is that people start to specialize, which means the pool of potential candidates for higher-level programming jobs is smaller, which means money goes up. It doesn’t go up forever; people will plateau sooner rather than later (it’s hard to find a job asking for more than 8 or 10 years experience, because after that your specialized skills are out-of-date anyway and the pool of applicants would be so small that you would take a long time to fill the position). I’ll talk more about the career paths open to developers in a later post.</p> <p>A lot of schools teach languages like C++ or Java. So if companies want their entry-level developers to learn a different language they need to teach them on the job.
Two or three years later, any company looking for mid-level developers with those skills needs to find a person who either taught themselves or else was trained in their previous position (or, they need to find somebody with a similar skillset and hope they can make the transition, which isn’t always easy). This is why there can be a relatively big jump from a developer’s first job to their second.</p> <p>This is also a big problem for companies, because many companies want to offer something like 2% every year for cost of living, <em>maybe</em> up to 5% total for merit, but the next company wants to offer you 10% or more after two or three years to jump ship. This is a major reason why programmers skip around a lot. I’ve heard many people say that the turnaround for a developer will be 2-3 years on average, and you can be damn sure all recruiters you’ve been in contact with are aware of that fact.</p> <p>As an aside, why more companies don’t know this, and don’t take steps to retain their top talent in the face of such relentless poaching, I will never know. I have to imagine it would cost less, on balance.</p> <h3 id="real-numbers">Real Numbers</h3> <p>To back up some of what I’m saying, I searched through some online job postings recently with some keywords from my own resume in my general geographical area.
I list them by the years of experience they are asking for and the pay that they are offering (obviously many job postings don’t include salary numbers or specific years experience, so there may be a selection bias here):</p> <ol> <li>New grad, 35k-50k</li> <li>New grad, 50k-55k</li> <li>2+ years experience, 50k-55k</li> <li>2+ years experience, 50k-70k</li> <li>2+ years experience, 60k-70k</li> <li>2+ years experience, up to 85k</li> <li>1-3 years experience, up to 80k</li> <li>3+ years experience, up to 80k</li> <li>5+ years experience, 60k-65k</li> <li>5+ years experience, 65k-75k</li> <li>5+ years experience, 90k-100k</li> <li>10+ years experience, 120k-160k</li> </ol> <p>At the start of their career a CS grad can expect about 50k in their first job, probably about 60k (a 20% jump!) after 2 years, probably about 75k after 5 years (a 25% jump!) and 120k if they stick with it for 10 years (a 60% jump!). Now I’m not trying to put dollar signs into anybody’s eyes. For these kinds of numbers you need to work hard and continuously learn new technologies. But what I do want to point out is that a recruiter blindly offering a 10% starting point for negotiations isn’t doing you any favors at any point.</p> <h3 id="dont-tell">Don’t Tell</h3> <p>Alison Green, of <a href="askamanager.org">AskAManager.org</a> fame, agrees with me: don’t tell people what you are currently making, especially if you don’t want your salary history to be the guide for your salary future.</p> <p>My general rule is that you should never tell a recruiter what you are currently making. The one exception to this rule is if you’ve already talked about what you want to make in your next move. If you are making 50k, but want to be making 65k, start with the latter number. Here’s an example conversation:</p> <pre><code>Recruiter: What are you making now and what do you want to make? Me: I've been doing some research, and it looks like the kinds of jobs I'm after are paying about 65k. 
I probably wouldn't be willing to leave my current position for much less than that. Does that sound right to you? Recruiter: Yeah, looking at your resume and some jobs, it looks like 65k is fair. What are you making now? </code></pre> <p>And at that point, maybe it’s okay to tell what you are currently making, but it shouldn’t matter. You’ve already said what your target range is, and that should be used to inform the negotiations. What you are currently making is completely irrelevant, so why would a recruiter want to know?</p> <h2 id="in-review">In Review</h2> <p>In review, there are a few things to keep in mind about recruiters before you get involved with them:</p> <ol> <li>You don’t pay them, so they don’t work for you. You aren’t the client, you are the product.</li> <li>Getting you any deal quickly is better for the recruiter than getting you the best deal possible.</li> <li>Don’t tell them what you are currently making. Some recruiters will refuse to work with you because of this. That’s fine. There are always more recruiters out there.</li> <li>Do some research on your own and make sure you know what you are worth. Don’t let a recruiter or anybody railroad you into a bad deal just because it’s quick.</li> <li>Some recruiters can be very unscrupulous. Beware.</li> </ol> <p>I’ve worked with some good recruiters in the past and I would work with some of them in the future too, if the opportunity arose. Just make sure, if you choose to go this route, that you understand the relationship and take steps to protect yourself.</p> Why I Stopped Working On Parrot 2015-01-15T00:00:00+00:00 <p>For the sake of just wrapping up this subject, here is the post about the technical side of Parrot that I promised.
I’m sorry that it’s so large but I have a lot to say and I hope that in saying it I’ll provide some kind of benefit to somebody.</p> <p>Also, everything that I say here concerns Parrot of 2-3 years ago when I was still working on it. I know other people have continued to develop it so some of the problems I mention might already be resolved. This post is <strong>absolutely not</strong> intended to demoralize the people currently working on Parrot or future contributors who might be interested in joining. All I can talk about is what I was thinking and feeling about the project, all of which is subjective and possibly incomplete or mistaken. In all this I want to make clear that I stopped working on Parrot because of my own personal goals and needs, which are almost certainly not the same for other people.</p> <h2 id="the-dream-of-parrot">The Dream of Parrot</h2> <p>There was a great dream which was Parrot. It was going to be many things, some of them great and visionary while some of them were a little more pedestrian (but nonetheless necessary). At the most basic level, the Perl world wanted a VM to run Perl (some version of it) with proper abstraction boundaries and without the kinds of twisted coupling nightmares that plague the current implementation of Perl5. From the very outset the Perl6 language project aimed to resolve exactly these kinds of problems: Start with a specification first, and design it in such a way that the execution engine could be divorced from the compiler, and multiple implementations could exist side-by-side.</p> <p>Python and Ruby, as two immediate examples, have benefited strongly from this kind of arrangement. Yes there are default implementations of these languages, but lessons learned during development of competing VMs and runtimes helped to strengthen the specifications and the community, increase competition, and expand the number of platforms where these languages can run. 
They are better for it.</p> <p>Parrot, along this line of thinking, was supposed to be the <em>primus inter pares</em>. Yes, Perl6 should be able to run anywhere and be implemented on a number of VMs and platforms. However Parrot would be the only one among these that would be tailor-made to fit Perl6 perfectly. Perl6 can run anywhere, but if you want <em>the best</em> Perl6 experience, with the most cutting-edge features, Parrot should have been the go-to platform.</p> <p>While this was among the earliest of design goals, it very quickly fell by the wayside as something that the Parrot developers didn’t think was necessary. Let’s not separate “me” from “they” here; I definitely believed the same thing for a while, but I wasn’t really around when these decisions were being made. You can’t blame the early Parrot devs for falling into the trap that they did. Once some code started being written and the two projects became separate, the Parrot folks gained some ambition. We didn’t just want to have a VM that runs Perl6, because many of the features that Perl6 needs are also baseline requirements for other dynamic languages as well. Wouldn’t it be great if Parrot supported many languages and enabled interoperability between them, in the same way that .NET was doing with VB and C#, or how Java was starting to with Scala and other various language ports? Sure, you can say that .NET only really had eyes for C# despite begrudging support for other languages, or that Scala was always a bit of a second-class citizen and oddity on the Java platform, but these things existed and there was real, demonstrable interoperability at play.</p> <p>Despite static kinds of languages being able to work together for years (depending on the compilers used and the linking options provided, of course), dynamic languages were all self-contained unto themselves and didn’t really have any of these benefits.
With Parrot, no longer would every language need to provide its own standard library (especially in cases, such as PHP, where the default standard library is severely lacking, in design if not in functionality). Parrot could provide a huge common runtime, and every other language could share it directly, or write some thin wrappers at most.</p> <p>There was also the idea of the rising tide that lifts all boats. In a world where every dynamic language has its own VM and own runtime, improvements in the fundamental building blocks of these need to be separately reproduced for each. Develop a better garbage collection strategy? Implement it a dozen times or more, in each platform that wants it. Develop a better JIT algorithm? Implement it a dozen times or more for each platform that wants it. Better unicode support? Better threading and multitasking? Better networking? Better object model? Better native call or even native types? Better optimizations? Better parsers and parser-building tools? For every one of these things, every time something new and awesome is developed, we have to implement it dozens or even hundreds of times. And if we don’t implement each new thing on each old platform, the divide between them increases.</p> <p>Take a list of all the haves and the have-nots. Java has great unicode support. Ruby has a great object model. V8 has great JIT. Python has great green threads and tasklets. PHP has great built-in bindings to databases and webservers. .NET has some great optimizations. Perl has CPAN. JavaScript needs a better object model. Python needs better threading and multitasking. PHP needs unicode support, Perl needs JIT and optimizations, and the list goes on.
One of the goals behind Parrot was that we could bring together all the strengths into one reusable bundle and eliminate the most common weaknesses, and only need to do it once.</p> <p>To recap, here are the three big goals of Parrot, as they have been communicated over time (though they haven’t always been of equal priority):</p> <ol> <li>To be an initial “best fit” VM for Perl6</li> <li>To be a runtime for multiple dynamic languages where interoperability is possible</li> <li>To be a single point of improvement where costly additions could be made once, and many communities could benefit from them together.</li> </ol> <p>I left the Parrot project for a simple reason: the goals that I had with regard to Parrot were met and the things that I thought could be accomplished were accomplished. Just not with Parrot. Other modern VMs, whether by accident or design, have achieved the kinds of goals that were supposed to set Parrot apart, and all the while Parrot was hardly progressing at all. For years it seemed like we needed to take two steps backwards first, before attempting any step forward. The system just wasn’t very good, and no matter how much we worked to improve it, it felt like we were being weighed down by an unimaginable burden of cruft and backwards compatibility. The whole time we were slogging through the mess, other VMs were surging forward. In the end, I got the VM that I had always wanted, and it was called .NET (Java and V8 are also great, but I don’t use either of them nearly as much). Let’s look at those three goals again to see why.</p> <p>We can make arguments all day long that .NET or JVM don’t have an object model or dynamic invoke mechanism which is 100% exactly what is needed by Perl6, Python, Ruby or PHP. “There’s always going to be friction”, I’ve said it and I’ve heard others say it around me. Yes, it is true that these two big VMs will never cater to P6 on bended knee.
However, every system has trade-offs and the small amount of friction is outweighed by the other benefits: large libraries and library ecosystems, large existing user bases, near universal desktop penetration (for .NET, it’s near universal on Windows systems, but with Mono, Xamarin and new OSS overtures from Microsoft this situation is improving, and rapidly) and significant footprint in the ever-growing mobile world. And further, because .NET and JVM have better memory models, garbage collection, JIT and various other optimizations and performance enhancers, the performance of P6 on those platforms will likely be just as good if not better than performance on Parrot for the foreseeable future. Parrot not only has to provide less friction (which it doesn’t even do), but it also needs to have comparable performance and memory usage, which it simply does not.</p> <p>Interoperability was supposed to be an area where Parrot excelled beyond the norm, but as of 2012 it did not work as expected and <em>I don’t even know why it didn’t work as expected</em>. People who claimed to know about it said it wasn’t working, and there was a ticket open somewhere for “Make interoperability happen”, but it didn’t work right and nobody was trying to fix it. When I asked what needed to happen to get it working, I could never get an answer besides “it just doesn’t”.</p> <p>Compare to a platform like .NET where you can write and interoperate all the following languages: C#, VB.NET, C++, F#, IronPython, IronRuby, JScript (and IronJS), and various dialects of Lisp, Clojure, Prolog, PHP, Ada, and even Perl6. Yes, you read that correctly. As of the time I left you could, perhaps with some effort, write a Perl6 module on .NET using Niecza and interoperate it at some level with a library written in C#.
Maybe Niecza has lost functionality in the past few years or maybe that project has since been abandoned, but last time I looked Niecza on .NET was miles ahead of Perl6 on Parrot in terms of language interoperability.</p> <p>Don’t even get me started on the various compiler projects which translate various dynamic languages into JavaScript for use in the browser. For all its flaws, JavaScript is indeed turning itself into an “assembly language for the internet”. You can, today, compile all sorts of languages into JavaScript, load them and run them together in a browser. The JavaScript environment even has its own trendy new languages which don’t exist anywhere else (CoffeeScript, etc). When you consider the amazingly productive performance arms race between Microsoft IE, Google Chrome and Mozilla FireFox (among others!), it’s easy to see why the platform has become so attractive. Throw Node.js into the mix and suddenly JavaScript starts to look like just as compelling a platform, and more versatile than some of the desktop-only options in .NET and JVM. There’s always going to be a little bit of friction translating any language to JS with its goody object model, but a smart developer is going to take a look at all the benefits, and do a simple calculation to see if it’s still a worthwhile platform. Many people will decide that it is.</p> <h2 id="performance-and-features">Performance and Features</h2> <p>I hear people asking questions like “Is Parrot fast enough?” Which hasn’t been the question to ask, really. Parrot doesn’t provide the features it is supposed to, so it doesn’t matter, in my mind, if it does the wrong things quickly enough. Sure, Parrot has been dirt slow (I hear it has since gotten faster) in part because we were doing too many things that we didn’t need to do and we weren’t doing enough of what we needed.
So a language like Perl6 needs to either suffer through the bloat of our method dispatcher, or else write its own to do what it actually needs. Guess what they did?</p> <p>In terms of having an awesome feature set which all languages can leverage, and a single point for making improvements where all languages can benefit, Parrot is a mixed bag. There are some places where I believe that Parrot really does provide awesome features. The hybrid Parrot threading model is, while incomplete at my last viewing, among the best designs for a built-in threading system that I’ve seen. But then again, when you look at the new Futures and Promises features built in to C# 5, or the <code>java.util.concurrent.Future</code> library in Java, or when you look at the event-based everything in Node.js, the Parrot offering doesn’t stand out as much. It’s one great design among a pool of other, similar, great designs. Parrot’s native call system is conceptually among the best, though it is probably edged out by some of the other options. Parrot’s unicode support and string handling in general are pretty good. Could be better, but still pretty good (lightyears ahead of some of the competition. PHP comes to mind).</p> <p>Where Parrot was lacking was in everything else. The object model, especially, stood out as a place where the fail was particularly strong. Parrot doesn’t have JIT (and what it used to have was a dumb bytecode translator which only worked on x86 (which wasn’t even the most popular platform among our developers) and helped propagate the misconception that we had a <em>real JIT</em>, which we never did). Our calling conventions subsystem was poor, but not because the implementation was bad. For what it was supposed to be, the implementation was actually decent. The problem is that the <em>specification</em> was bloated and painful, and the abstraction boundaries were drawn in the wrong place.
Every call had to create and then decode a CallContext object, but Parrot did all of this internally. This meant that Parrot took responsibility for every type of passable argument, including named and optional parameters, and made it unnecessarily difficult for languages which didn’t need exactly these features implemented exactly this way to do anything different.</p> <p>For the record, calling conventions were a huge part of the reason why my MATLAB clone, Matrixy, died. Because we never had easy access to our own CallContext object, we were never able to properly implement some of the basic features (like variadic parameter and result lists) which were required by even the most basic subsets of the standard library. NQP and Rakudo had to go to extremes to write their own argument binder for making calls, effectively cutting the bulk of the Parrot code out of the loop.</p> <p>My JavaScript port, Jaesop, died because of object model problems. More than 50% of the code written for that project was trying to shoehorn the JavaScript object model into the Parrot one, and barely got even the basics correct. Maybe this is because of some fundamental misunderstanding on my part; I’m not a JS expert and maybe I was missing some kind of crucial Eureka moment in the design of it. Regardless, I was having a hell of a time fighting with the object model to try to get the result I wanted, and a better object model would have let even my poorest designs work. The object model is also a huge reason why Python and Ruby projects floundered and died too. People wanted all these languages to run on Parrot, and the object model was the single biggest reason why nobody could make it work.</p> <p>My libblas bindings, PLA, were plagued by the same problems. The object model basically required PLA to be written in C, and performance suffered because of the twisted calling convention problems. My database bindings, ParrotStore, had the same limitations.
As I was developing these, the P6 folks were developing their own bindings which used the (much nicer) 6model and the (much nicer) P6 native bindings instead. After a while I had to ask myself why I was fighting in the weeds so much, when the P6 people were rising above the problems of Parrot and doing things better. If my writing these things wasn’t helping anybody, why bother with it?</p> <p>Notice that the P6 folks were having their biggest successes when they bypassed Parrot, which isn’t exactly a roadmap for synergy and mutual success. One day, and I saw it coming like a freight train, P6 was going to realize that they could have the most success by bypassing Parrot entirely. I was not at all surprised when I started seeing blog posts in my daily feed about MoarVM.</p> <p>Rosella was actually my one project which didn’t run into too many Parrot problems. But then again, I wrote much of it to work around Parrot issues that I was aware of because of my knowledge of the internals. Somebody besides myself trying to write a similar project would have been in big trouble.</p> <p>The goal of all these projects I worked on was to provide a substrate of common functionality that other people could build on top of. If many languages can be translated into JavaScript, and if Parrot has a JavaScript compiler, we start to gain language adoption and interoperability for free. If Parrot has an attractive and full standard library, people will be able to build on top of that to make bigger things, faster. If we have good infrastructure like unit tests, project templates and build tools, people will be able to leverage them to get new projects from conception to production faster. This just isn’t the way things worked out. The tide was indeed rising, albeit slowly and uncertainly, but all of the ships had already set sail.</p> <h2 id="project-leadership">Project Leadership</h2> <p>Allison was a pretty great architect before she reached her own burnout point.
When she made her absence official I asked for the job of architect in her stead. The job instead went to cotto, which was probably the right choice at the time. While I had plenty of free time and energy to devote to the role, I was young, immature, ignorant of some of the big ideas, inexperienced in leadership and software architecture, and abrasive to talk to sometimes. Having time and energy, while an architect certainly needs these things, wasn’t enough reason for me to get the job.</p> <p>We know now in hindsight that cotto didn’t really have the free time to keep up with the position either. I don’t know exactly what was eating up his time but I can guess. Following the economic meltdowns in 2008 and 2009 many of our best developers were spending more time at work, fighting to keep jobs that were melting away, or being forced to pick up slack for other jobs that no longer had people. When you’re feeling a little pessimistic about Parrot, and your work life is taking more time and generating more stress, your open source project participation suffers. I don’t know exactly why cotto left, though I assume this and burnout and pessimism about the project all played their own parts in it.</p> <p>I was trying to do design work and rewrite old specs and make big changes, but without an architect there to take the thirty thousand foot view and sign off on things, I feel like I got caught in a bit of a rut. We had an architect for a reason and I respected the position enough to not go outside of that.
But when you go for so long with Allison not participating and then she hands the job to cotto and he isn’t able to put in enough hours, I feel like a lot of the things I wanted to accomplish were stalled.</p> <p>What I can say is that if I were architect I would have kept things moving a little longer, though with my own burnout fast approaching and my inexperience and other problems in play, who knows if I would have moved us to a place we wanted Parrot to be.</p> <h2 id="perl6">Perl6</h2> <p>I’ve talked about P6 quite a lot because P6 was really the central player in all this. Without it, Parrot would have been nothing and would have had no purpose for existing at all. Parrot made many mistakes with respect to Perl6:</p> <ol> <li>Not treating it like the Most Valuable Project</li> <li>Kicking it out of the Parrot repo and forcing it to become a separate, stand-alone project.</li> <li>Not catering to the needs of Perl6 more closely</li> <li>Acting like the needs of any other language were important at all, much less as important as the needs of Perl6.</li> </ol> <p>And again, I understand why people did it. They wanted Parrot to be language agnostic and they wanted this utopian dreamland of language interoperability. The problem is that you need two languages running on your VM to worry about interoperability, and we only had the one. 
And then we kicked it out of the nest to make room for the other projects that weren’t coming.</p> <p>Before I left I was trying to refocus the project to be more of “The Perl6 VM” and less of “The VM that hosts many languages and, oh yeah, Perl6 but not well”.</p> <p>I wanted to merge 6model into Parrot core and I wanted to make some major changes to the method dispatcher to more closely mirror the model P6 was using (which is, as I have known for a long time, much closer to the “right” way to do it).</p> <p>Here’s the part of the confession that should be revelatory, because I’ve never expressed these thoughts publicly before: I was really starting to dislike Perl6. I was starting to feel that (a) it would never be completed and (b) that if it was completed it might not be any good. Development on Perl6 has taken a very long time, much longer than development on other languages or compilers. In its defense you might say “But Rakudo has spent years fighting with problems in Parrot, it would be far ahead of where it is now were it not for all those lost years”. I’ll agree with that to a point. Rakudo certainly did lose time with Parrot, but even allowing for 5 years of purely lost time, it has still had a huge development cycle <em>and nobody is calling it complete yet</em>.</p> <p>Plus, it’s not like Parrot has been the only host in town. Rakudo has a JVM backend, last I heard, and there’s Niecza on .NET, neither of which has the problems that Parrot has. Despite these things rendering the problems of Parrot moot, Perl 6 development hasn’t exactly accelerated forward.</p> <p>People say “oh, but those VMs aren’t designed for dynamic languages! There’s extra friction!” Which is true to a point, but languages that run on those VMs or have been ported to them don’t seem to mind. 
Both the JVM and .NET currently host fully operational versions of JavaScript, Ruby, Python, and PHP, and you don’t hear those communities complaining about how impossible it is to make compilers because of the inherent friction.</p> <p>In theory Parrot should have been able to do a little better, but .NET and the JVM aren’t exactly unusable for the purpose. And when P6 runs on those platforms, you can’t complain that Parrot is the anchor holding your whole operation back. When you’re running on a platform as stable and usable as the JVM, for example, and you’re still spending year after year on development just to get to a “yes, it’s done and ready” v1.0, maybe the problem isn’t with the underlying platform.</p> <p>So that leads me to a major existential problem: If Parrot should be targeted squarely at Perl6 (and, for any chance of success, it <em>should be</em>) and if I really don’t like Perl6 and don’t believe that it will do what it promises (and, I don’t), then it’s hard to log in every day and spend hours and hours working on Parrot.</p> <p>We could have retargeted Parrot <em>again</em> to not focus on Perl6 and start working on those other languages that people wanted (Ruby, Python and JavaScript would have been the best contenders) but then we would have gone from one active downstream project to none, and that would have been instant death for the project. Out of the frying pan, into the fire.</p> <p>I’ve been saving this link for a long time, because when I read it, I immediately recognized the sentiment as my own, only better stated. If you want a better discussion of my thoughts on Perl and especially Perl6, this is worth the read:</p> <p><a href=""></a></p> <p>I think there’s hope for Perl6, and I sincerely wish that language well. They aren’t going to see any kind of adoption until they are willing to put a “complete and ready for production” sticker on the front of the box. 
If they are unable to reach that point they need to reconsider their spec and their assumptions. If they are unwilling to reach that point, they need to take a long hard look inwards at the project culture.</p> <h2 id="why-i-left">Why I left</h2> <p>I loved Parrot. I honestly did. I devoted years of my life to it, cleaning and coding and planning and designing and arguing and discussing. I spent hours of precious, limited free time hacking Parrot and trying to make it better. I bought into the dream, and was doing everything that I could do to actualize it. Maybe we can take a certain amount of credit, that we had these dreams before some of the other platforms which actually were able to reach them first. Maybe we played some small influential role in the development of other competing platforms, with people seeing what we were trying to do, deciding it was a great idea, and beating us to the finish line. Maybe these ideas were just common sense and other folks would have arrived at them without ever hearing about Parrot in the first place. I don’t know how exactly all the pieces of the historical puzzle fit together, or who gets credit for what. What I do know is that we <em>did have the good ideas</em>, we just weren’t able to implement them correctly or quickly enough. We may not have won the race, but we were at least on the right racetrack. There’s something to be said for that.</p> <p>When I stepped away from Parrot, I thought that I just needed a bit of a breather. I was starting to feel the symptoms of burn-out, and I needed to step away and collect my thoughts. A few days turned into weeks. Weeks into months, and months into years. At some point I realized consciously that I had no intention of returning, and so I never did.</p> <p>I was burnt out over some of the big branches and features, but I was also getting down about the state of the foundation. 
I was down on Perl6 in general, and I was seeing them slowly but surely moving to other platforms and leaving Parrot behind. I saw how hard it was to implement any languages on Parrot, and I knew that, in this state, Parrot would have no languages and be dead.</p> <p>The longer I was away the less I wanted to return. The things that I wanted to do for Parrot already existed, and instead of blazing a new trail, I would have been playing a frustrating game of catch-up, following in the footsteps of organizations like Microsoft and Oracle, Google and Mozilla, who each have much more than a few spare man-hours each week to devote to their projects.</p> <p>In the end, when I added up all the reasons to leave and all the reasons to stay, I decided my time with the project was over for good.</p> <p>I’m not going to return to Parrot development. I don’t harbour any regrets or ill-feelings, I’m just not motivated to do the kind of work that needs to be done any more. There’s work that I did in Parrot that I am, to this day, extremely proud of. I didn’t have any problems or quarrels with any of the other developers, and I still count several of them among my list of friends.</p> <p>I haven’t joined any other open-source projects since I left Parrot, but I have started looking for one that suits me. I’m not quite sure exactly what I’m looking for, but I’ll know it when I see it, and I’ll devote as much of my time as I can spare.</p> The Parrot Foundation 2015-01-14T00:00:00+00:00 <p>This post is quite late. Late by years. The problem is that there hasn’t been anything to talk about until now, and having to keep repeating “nothing is happening and I don’t know why” over and over again was too much of a pain.</p> <p>I don’t know the exact status right this instant, but as far as I know the Parrot Foundation has been dissolved (or is in the process of being so, imminently). 
All money and IP have been transferred to the Perl Foundation for safe keeping.</p> <h2 id="so-what-happened">So What Happened?</h2> <p>Honestly, I couldn’t tell you. We were elected to the board in…was it 2009? I can’t even remember how long ago it was. I remember being super-excited and having all sorts of plans for things to do. We wanted to raise money and fund grants like The Perl Foundation does. We wanted to go out advertising, interacting with communities for dynamic languages, doing conferences and outreach to universities and… the list went on and on. All of the newly elected members of the board were eager to start doing things and pushing Parrot up to the next level.</p> <p>When we were elected the future was bright. Parrot was on an upward trajectory and the world was our oyster.</p> <p>We ran into a severe problem almost immediately upon taking office. The foundation hadn’t received its 501(c)(3) status from the IRS yet. The reason why this is such a problem is that there is a bit of a deadline. Beyond a certain amount of time organizations become ineligible for 501(c)(3) status, or applying for the status becomes much more complicated, or something. I don’t remember all the details right now, so if somebody reading this knows please feel free to remind me about it. In either case, there was a deadline, that deadline came very shortly after our elections, and we were completely unaware and unprepared for it when we were elected.</p> <p>So that was the first big shock. We didn’t realize at the time of the election that Parrot hadn’t already applied for and received 501(c)(3) status. The first thing I did when I learned this was to print out the application and start filling in the blanks. The problem is that there were too many blanks and, as I alluded to above, I didn’t have access to all the paperwork and details that would have been needed to get to the end of it. 
I don’t remember exactly when the deadline was in relation to the election, but I know we had a few weeks to get the paperwork in. It seems like it should have been possible to do.</p> <p>The first thing I want to make very clear, before going on any further, is that the previous incarnation of the board didn’t really do anything wrong and didn’t really drop the ball per se. They did a large amount of work setting up the legal footings of the foundation. If I have any complaint at all it’s that they didn’t communicate to either the Parrot members or the incoming candidates the details of the tax status, or didn’t do so adequately. But then again, like I said above, we had a few weeks to get the necessary paperwork in, which should have been possible to do, if uncomfortably close, had we made a concerted effort as a team to do it.</p> <p>I should also point out a few facts here, as an aside. 501(c)(3) status is so important because it would have labeled the foundation a tax-exempt charitable organization. This means that not only would the foundation not need to pay taxes on its earnings, but that potential donors could write off their contributions to us on their own taxes. Without 501(c)(3) status, donations could potentially be taxed both ways: the donors would pay income tax on money that wasn’t going to a registered charity, and the foundation would be taxed on the income. This isn’t a great situation to be in, and effectively renders the foundation unable to raise or spend any money. Also, if anybody can remember the C-SPAN news headlines from so far back, it turns out that the IRS was actually putting a lot of scrutiny towards open-source software organizations that were applying to become tax-exempt, so our application might have been denied in any case.</p> <p>To get to the meat of the issue, I mentioned that we didn’t have access to the necessary paperwork. This is partly true. Somebody did have access to these things but failed to provide them. 
Now, I’m not in the business of naming names and I don’t feel like there would be any benefit in some kind of public shaming. We had a person on the board who was in a position of central importance to the proper functioning of that body, who was unable to execute the duties of office for reasons I do not quite understand. Maybe the reasons don’t matter anyway. The problem was that we, like any piece of software, had a really bad bottleneck that we were unable to route around.</p> <p>When I was elected I was given the job of Treasurer. It wasn’t my first choice of position, but I was determined to do everything in my power as treasurer to help push the foundation forward. The treasurer needs some simple things in order to perform the basic duties of the role: things like access to the bank account, previous tax filings, financial statements and other things like that. I didn’t have them and, because of the aforementioned bottleneck, I was unable to get them.</p> <p>All the documents I needed, I was told, were ready and just needed to be mailed. They were in a box, sitting by the front door; all that was needed was time to get it to the post office. Just a couple minutes maybe, some Saturday morning. Just be patient and it would arrive as soon as possible. So I waited.</p> <p>And waited. I asked again and waited some more. And again. And more. I offered to pay postage out of my own pocket. Nothing. I asked other people to make a house call, to go visit directly and pick the materials up for me. Nothing. Living on the other side of the country meant my options were limited, but I feel like I exercised every option that was at my disposal. It was all for nothing. It’s 2015 now, half a decade after this mess began, and I <em>still</em> don’t have any of this information. 
But then again, I stopped asking for it years ago.</p> <p>We can talk about excuses all day long, and I haven’t been walking in the other person’s shoes so I really can’t say whether it was indeed impossible to do or not. At this point it doesn’t matter, because it’s all over now and there is no way to save the ship from sinking. Maybe the blame falls on me. Maybe I didn’t ask correctly. Maybe there was another option that I missed. Maybe I didn’t communicate the urgency of my need and the gravity of the situation. Maybe I was asking the wrong person all that time and could have easily gotten what I needed by asking somebody else instead. I don’t know and there is no way for me to ever know.</p> <p>Without the information necessary to act as treasurer, I couldn’t do my job. When things like invoices came in that needed to be paid (mostly a holding fee from our legal representative) I couldn’t handle them myself. So I dutifully forwarded those emails to the parrot-directors mailing list and hoped, impotently, that somebody who <em>could</em> handle it did so. I didn’t mention it above, but I was also the list admin for the parrot-directors list, so I had that going for me, which is nice.</p> <p>Without bank account information, when GSOC or Google Code-In comes around and Google wants to send PaFo a check for participation, I can’t even fill out the paperwork. I can’t give them a bank account number to drop the money into. When people want to go to Google for the post-GSOC meetups and they promise to reimburse our travel expenses, I can’t write a check to do it. Again, I was supposedly the treasurer, but other people had these abilities and I never did.</p> <h2 id="elections">Elections</h2> <p>So a year passes and nobody so much as mentions board elections. I guess I could have organized them myself, or encouraged others to pick it up again, but I didn’t. In that first year we didn’t accomplish a single thing. Not one. We didn’t have a meeting. 
We didn’t vote on anything. We didn’t make any decisions. We didn’t reach a single goal. The IRS deadline came and went before we could do anything about it, and everything else fell apart after that. The only thing I can say with certainty is that I tried to do things and I failed, and I’m sorry for that. I cannot speak about the motivations of the other board members, though I suspect several others were in the same boat as I was.</p> <p>Why didn’t we have elections? The people who had organized and planned them in years past didn’t offer, and nobody on the board (myself included) made moves to do it, and so it didn’t happen. By that time the board members seemed to have all fallen into a state of paralysis, and without somebody, anybody, nudging the process along it just didn’t happen. I regret it to a point, but then again it’s hard to say that elections would have done anything to help. A new group of board members, rendered just as impotent as we were but more confused about <em>why</em> things were the way they were, might not have been much of a help.</p> <p>At some point I think we all realized that it was a lost cause. Nothing was happening. <em>Nothing could happen</em>. The foundation was dead in the water. Everybody was losing interest and there was no hope that anything would magically get better. Jim resigned at some point, quietly. I don’t know if he did it from frustration, or what his motivation was, but out he went. I wanted to resign too, but I figured that somebody needed to be sitting at the wheel when the final dissolution vote was held. So that’s why I stayed, to cast that one last vote when the opportunity finally came around. As far as I’m aware the decision was made without me. Nobody had heard from me in a while and I suspect I wasn’t needed for a quorum, so it’s done. 
Or, at least I heard it was.</p> <p>See, if we had had elections, I was so demoralized that I definitely wouldn’t have stood for re-election, and I suspect the other remaining “active” board members wouldn’t have either. What would have been the point? I hated what was happening or, more specifically, what wasn’t happening, and if I had a graceful way out I might have taken it and dumped all our problems onto the next generation. And then the next group would have inherited all the same problems and been just as paralysed as we were, and then somebody else would have been sitting dutifully in the wheelhouse waiting for the ship to finally sink below the waves.</p> <p>Maybe this is just pessimistic thinking on my part. I felt like I couldn’t do anything to help, so I just assumed nobody else could have either. Maybe elections would have brought in fresh people who would have been able to solve our problems, and I’m to blame (in whole or in part) for not making sure elections happened. If so, I’m sorry about that too.</p> <h2 id="dissolution">Dissolution</h2> <p>I hit a point where I was burned out with the coding part of Parrot, and haven’t contributed to the software side since. I’ve been working on a blog post about that for years now; writing and rewriting, drafting and revising, expanding and elaborating. Maybe someday I will actually publish it. Long after I stopped contributing to the Parrot repository, or participating in Parrot discussions or Parrot design, I was still moderating that mailing list and waiting, patiently, for something to happen in the Foundation.</p> <p>Sometime in 2011 or 2012 maybe, Allison helped organize some phone calls with our lawyer and with some other groups that might have been able to absorb the foundation and take over management of it. These talks were productive but ultimately led nowhere. 
I can’t remember exactly why, but it had something to do with our bottleneck and something to do with lack of drive from our remaining board members. Remember that this was a time when the economy wasn’t doing particularly well and a lot of people were having to be more devoted to work and things like that, so not everybody had lots of free time to spare. I had to take some vacation hours in the middle of a few work days to get on conference calls and things like that. This economic situation, and people having a lot less free time to devote to Parrot, is also a big reason why several of our developers stopped contributing to the repo. It’s not the only reason, of course. There were problems with the software and people were starting to get disillusioned, but the economic situation definitely helped nudge people who were already close enough to the edge. But, that’s a different subject entirely.</p> <p>After those other talks and conference calls failed to produce any outcome, everything basically came to a complete halt. Nothing happened, and that was the way for years. When 2015 rolled around I sent Allison an exasperated email asking if, maybe, this was the year something would happen. She told me that it already might have. Somebody was going to send the last signed bit of paper to the legal firm and the foundation would be dissolved. This is why I’m writing this blog post now. The foundation is finally dead and people deserve an autopsy.</p> <h2 id="the-end">The End</h2> <p>I want to personally thank Allison. She wasn’t a member of the board in this final iteration, but she did swoop in like a superhero to try and save things when we were floundering. Then, when we all gave up hope and decided to just end it with grace, she helped facilitate communications between us and the lawyers to get that ball rolling. 
Without her, and this is hard to imagine, even less would have gotten done in the past few years.</p> <p>For my part, I keep wondering if maybe I could have done more. If I could have routed around our bottleneck and managed to breathe some measure of life into the foundation. Maybe some of my goals were still accessible. Maybe I was just too young, or too inexperienced, to know how to resolve issues like this. Maybe things could have been different now that I’m older and wiser. There are a lot of maybes because, in my mind, there are so many things which I do not know and do not understand. But the one thing I can say for certain is this: I was elected for a purpose, I didn’t fulfill that purpose, and for that I am sorry.</p> <p>My tenure on the board of the Parrot Foundation was one of the most frustrating, disappointing and regret-filled periods of my entire life. At the very least, I’m able to derive some closure and put the whole thing behind me. Would that we never speak of any of this again.</p> <p>Maybe, in the interests of complete closure, I’ll finally publish that other blog post about the technical side of the project, why I left and why I didn’t come back. That, I think, will help answer the rest of the outstanding questions that people have been asking me over the years. With that one exception, I consider the matter closed and I’m glad to finally wash my hands of it.</p> Entity Framework Code Only Migrations 2013-01-26T00:00:00+00:00 <p>I’ve been using Entity Framework 4.4 at work a lot recently, and as part of that I’ve been running into some questions about how to do this or that with some of the new features, particularly code first. Sometimes I’m able to find the answers I need from the Googles, but sometimes I’ve got to sit down with Visual Studio and find the answers through good old-fashioned trial and error. 
Then I figure, what’s the point of having a tech-related blog in the first place if you can’t share the things you’ve learned there? I’ll be sharing bits of what I’m learning as I go.</p> <h2 id="code-only-migrations">Code-Only Migrations</h2> <p>The new Entity Framework releases have a feature called code-first, where you can write plain C# or VB classes (“Plain Old CLR Objects”, or POCOs), and have the Entity engine automatically discern from those classes the shape of your DB tables and generate a change script to create them. Most tutorials on the topic explain the process through the use of the Package Manager Console in Visual Studio. I have slightly different requirements, so I’m going to try to do the same exact process using the C# APIs directly.</p> <p>Here’s a short but helpful blog post where I started my search:</p> <p>[]</p> <h3 id="create-your-dbcontext-and-poco-classes">Create Your DbContext and POCO Classes</h3> <p>I won’t go into detail about that here. There are plenty of cool resources for this purpose elsewhere. For the purposes of the rest of this post, I’ll assume you’ve got a <code>DbContext</code> subclass called “MyDbContext”. Even though you may not like to have it, your DbContext subclass must provide a parameterless constructor to work with the Package Manager Console tools.</p> <h3 id="create-a-configuration">Create a Configuration</h3> <p>A Migration Configuration is a class that derives from <code>System.Data.Entity.Migrations.DbMigrationsConfiguration</code>. 
You can create one of these automatically through the Package Manager Console with the <code>Enable-Migrations</code> command, or you can just create it in code yourself:</p> <div class="highlight"><pre><code class="language-csharp">namespace MyProgram.Migrations
{
    using System;
    using System.Data.Entity;
    using System.Data.Entity.Migrations;
    using System.Linq;
    using System.Reflection;
    using MyProgram;

    internal sealed class MyConfiguration : DbMigrationsConfiguration&lt;MyProgram.MyDbContext&gt;
    {
        public MyConfiguration()
        {
            AutomaticMigrationsEnabled = false;

            // These things are not strictly necessary, but are helpful when the assembly where
            // the migrations stuff lives is different from the assembly where the DbContext
            // lives. For instance, you may want to run migrations from a separate
            // development-time console program, and not have that code included in production
            // assemblies.
            MigrationsAssembly = Assembly.GetExecutingAssembly();
            MigrationsNamespace = "MyProgram.Migrations";
        }

        protected override void Seed(MyProgram.MyDbContext context)
        {
            // TODO: Initialize seed data here
        }
    }
}</code></pre></div> <h3 id="create-a-migration">Create a Migration</h3> <p>Next step is to create a migration. A migration is any class which derives from <code>System.Data.Entity.Migrations.DbMigration</code>. 
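<p>For reference, a hand-written migration might look something like the following sketch. The table and column names (<code>dbo.Widgets</code>, <code>Name</code>) are invented for illustration; migrations generated by the tooling look much the same, but also carry an <code>IMigrationMetadata</code> implementation (the timestamped id) in a companion designer file:</p> <div class="highlight"><pre><code class="language-csharp">namespace MyProgram.Migrations
{
    using System.Data.Entity.Migrations;

    public partial class AddWidgetsTable : DbMigration
    {
        public override void Up()
        {
            // Applied when migrating the database forward.
            CreateTable(
                "dbo.Widgets",
                c =&gt; new
                {
                    Id = c.Int(nullable: false, identity: true),
                    Name = c.String(maxLength: 100),
                })
                .PrimaryKey(t =&gt; t.Id);
        }

        public override void Down()
        {
            // Applied when rolling the migration back.
            DropTable("dbo.Widgets");
        }
    }
}</code></pre></div>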
You can create one of these manually, but it’s much easier to create them through the Package Manager Console with the <code>Add-Migration</code> command.</p> <pre><code>Add-Migration MyMigration </code></pre> <p>Or, if you need some more options (if your solution has multiple projects, etc):</p> <pre><code>Add-Migration -Name MyMigration -ProjectName MyProject -ConfigurationTypeName MyProject.Migrations.MyConfiguration </code></pre> <p>You may also need to specify <code>-StartupProjectName</code>, if your migrations live in a library assembly.</p> <p>Also, you can specify a separate connection string from what is provided by the default parameterless constructor of your DbContext by specifying <code>-ConnectionStringName</code> (for a named connection string in your app.config/web.config file) or <code>-ConnectionString</code> and <code>-ConnectionProviderName</code> to use a value which is not in your app.config/web.config file.</p> <p>What do all these options mean? Let’s consider a solution with two projects:</p> <pre><code>MyProgram.sln - MyProgram (a .exe which references MyProgram.Core.dll) - MyProgram.Core (a .dll Class Library) </code></pre> <p>The project <code>MyProgram.Core.dll</code> contains our <code>DbContext</code> instance and the <code>MyProgram</code> assembly has the app.config with connection string information.</p> <p>If we want our migrations to live in <code>MyProgram.Core</code> we can use this command as our base (plus any other options we need to add):</p> <pre><code>Add-Migration MyMigration -ProjectName MyProgram.Core -StartupProjectName MyProgram ... </code></pre> <p>If, on the other hand, we want the migrations to live in <code>MyProgram</code>, the .exe instead of the .dll, we can use this version:</p> <pre><code>Add-Migration MyMigration -ProjectName MyProgram -StartupProjectName MyProgram ... 
</code></pre> <p>If you do not specify <code>-ProjectName</code> or <code>-StartupProjectName</code>, the <code>Add-Migration</code> command will attempt to use whichever project you have flagged as the “default startup project” in the solution explorer (whichever project runs when you press F5).</p> <p>What if I want to separate my migrations out into a different assembly entirely, one which isn’t included in my production deployment? Here’s another example solution:</p> <pre><code>MyProgram.sln - MyProgram (the production deployed .exe) - MyProgram.Core (where our DbContext and model classes live) - MyProgram.DbMigration (where our migration code will live, references MyProgram.Core.dll) </code></pre> <p>In this case, we can use a command like this:</p> <pre><code>Add-Migration MyMigration -ProjectName MyProgram.DbMigration -StartupProjectName MyProgram ... </code></pre> <p>You’re going to have to play with some of the options for different configurations. If the <code>Add-Migration</code> command says something’s wrong, try tweaking your values and adding more info to the commandline.</p> <h3 id="run-the-migrations-some-recipes">Run the Migrations (Some Recipes)</h3> <p>Now that you’ve got migrations and a configuration, you can run the migrations manually. Here are some snippets from a console program which does just that. The simplest recipe creates a migrator from the configuration and runs it:</p> <div class="highlight"><pre><code class="language-csharp">private void DoDbUpdate()
{
    MyConfiguration myConfig = new MyConfiguration();
    DbMigrator migrator = new DbMigrator(myConfig);
    migrator.Update();
}</code></pre></div> <p>Let’s take a minute to step back and ask how this all works. You build your assembly and run it. The <code>DbMigrator</code> class uses reflection to read out all classes from your assembly, and find the ones which are subclasses of <code>DbMigration</code>. Each DB migration has a name, which is a combination of a timestamp and the name you gave it in the <code>Add-Migration</code> command. In the database, there’s a table (or will be, after you run your first migration) called <code>dbo.__MigrationHistory</code> (it may be under the “System Tables” folder). 
That table holds information about migrations you have already run. When you call <code>DbMigrator.Update()</code>, it searches for all migrations, removes the ones which already have entries in the table, and orders them according to timestamp. This is the list of pending migrations. You can get that list yourself and print it before updating:</p> <div class="highlight"><pre><code class="language-csharp">private void ListPendingMigrationsAndUpdate()
{
    MyConfiguration myConfig = new MyConfiguration();
    DbMigrator migrator = new DbMigrator(myConfig);
    foreach (string migration in migrator.GetPendingMigrations())
        Console.WriteLine(migration);
    migrator.Update();
}</code></pre></div> <p>You can also get the raw SQL script which is going to be used:</p> <div class="highlight"><pre><code class="language-csharp">private void GetDbUpdateScript()
{
    MyConfiguration myConfig = new MyConfiguration();
    DbMigrator migrator = new DbMigrator(myConfig);
    MigratorScriptingDecorator scripter = new MigratorScriptingDecorator(migrator);

    // Script everything from the current database state up to the latest migration.
    string script = scripter.ScriptUpdate(null, null);
    Console.WriteLine(script);
}</code></pre></div> <p>Running the scripting decorator clears out the list of pending migrations from the migrator. 
If you want to generate the script first (for logging) and then run the migration, you need to create two migrators:</p> <div class="highlight"><pre><code class="language-csharp">private void GetDbUpdateScriptAndUpdate()
{
    MyConfiguration myConfig = new MyConfiguration();
    DbMigrator migrator = new DbMigrator(myConfig);
    var scripter = new MigratorScriptingDecorator(migrator);
    string script = scripter.ScriptUpdate(null, null);

    // The scripting decorator consumed the first migrator's pending list,
    // so build a fresh migrator to actually apply the migrations.
    migrator = new DbMigrator(myConfig);
    migrator.Update();
}
</code></pre></div> <p>Another thing we could try is to create a logging object, and use a logging decorator to log progress.
This mechanism will also output the raw SQL text, but will do so piecewise, intermixed with other information (so you’ll need to filter out what is and what is not part of the SQL script):</p> <div class="highlight"><pre><code class="language-csharp">public class MyLogger : System.Data.Entity.Migrations.Infrastructure.MigrationsLogger
{
    public override void Info(string message)
    {
        // Short status messages come here
    }

    public override void Verbose(string message)
    {
        // The SQL text and other info comes here
    }

    public override void Warning(string message)
    {
        // Warnings and other bad messages come here
    }
}
</code></pre></div> <p>Once we have a logger, we can use it in our migration:</p> <div class="highlight"><pre><code class="language-csharp">private void DoDbUpdateWithLogging()
{
    MyConfiguration myConfig = new MyConfiguration();
    DbMigrator migrator = new DbMigrator(myConfig);
    var logger = new MigratorLoggingDecorator(migrator, new MyLogger());
    logger.Update();
}
</code></pre></div> <p>We can update to a specific migration, or we can rollback to a specific migration by name. Remember, the “name” used by the migrator is a combination of the timestamp and the name you gave it at the console.</p> <div class="highlight"><pre><code class="language-csharp">private void UpdateOrRollbackTo(string name)
{
    MyConfiguration myConfig = new MyConfiguration();
    DbMigrator migrator = new DbMigrator(myConfig);
    migrator.Update(name);
}
</code></pre></div> <p>And what if you want to completely trash the DB, undo all migrations, delete everything, and start over?</p> <div class="highlight"><pre><code class="language-csharp">private void CompletelyTrashDb()
{
    MyConfiguration myConfig = new MyConfiguration();
    DbMigrator migrator = new DbMigrator(myConfig);
    migrator.Update("0");
}
</code></pre></div> <h2 id="whats-my-use-case">What’s My Use Case?</h2> <p>So what exactly is my use-case here? Why don’t I just stick with the Package Manager Console like many other tutorials do? I have a few criteria:</p> <ol> <li>I need to seed the new DB with a lot of complex data, pulled from another source, which needs to be updated regularly.</li> <li>I may have more than one DB, for multiple instances of my application. Connection strings for all of these may be kept in another DB or a file or somewhere else.
All of these need to be kept in sync, and a script that runs a migration on all targets is better than a command which runs on only one and needs to be manually updated.</li> <li>I’d like the ability to log the SQL scripts which are used for the migration, for various purposes.</li> <li>I’d like to be able to do some scripted unit testing where we create and migrate a test DB from scratch, seed it with test data, and use that for testing. I would like these temporary test DBs to be identical to the production ones.</li> </ol> <p>Overall I think the new Entity Framework Code-First features are really cool, and remind me very closely of the equivalent db migration scripts in Rails, but we have a little bit more control over it here because we can incorporate the DbMigration process into our application logic.</p> <h2>Working On a New Project</h2> <p><em>2012-12-09</em></p> <p>As they say on my son’s favorite Thomas The Tank Engine, yesterday an idea flew into my funnel. I was <a href="">doing some work on my bathroom</a> when I got an idea for a new website that I would like to make. I want to make this site for a few reasons: First, I’m going to be using the new ASP.NET MVC framework at <a href="/2012/11/20/new_job.html">my new job</a> eventually, and I wanted to practice with it. Second, I have been getting motivated to do more programming in general, and a new project that I’m hot on seems like just the thing to get me moving again. Third, my idea for this site is relatively straightforward but should offer some good practice and interesting technical challenges. Fourth and finally, this website I’m thinking about is actually something that I would like to use myself (even if nobody else joins me).</p> <p>I still don’t have a <a href="/2012/09/14/sept_status_update.html">new laptop</a> yet, though I’ve been shopping for one in earnest. I figure I’ll have it shortly after the holidays.
In either case, for the time being I’m stuck with my current crappy laptop, which exclusively runs (the decidedly not-crappy) Linux. If I’m going to make a new ASP.NET MVC website on this box, it’s going to have to use <a href="">Mono</a>.</p> <p>And that’s fine. <a href="/2010/11/03/blogger2jekyll.html">I’ve used Mono before</a> and I like it plenty despite its shortcomings.</p> <p>I scratched out a few ideas and designs in my notebook. I decided I wanted to use some kind of dependency injection/inversion of control/service locator feature. I’ve used <a href="">Unity</a> in the past and loved it, but I wouldn’t be against using Ninject or something else too. I also want to use some kind of ORM to make persistence a little easier.</p> <p>Now, I can already hear some people mumbling to themselves about all the many flaws of ORMs. I won’t even bother to list them or link to the (many) pages where they are discussed on the interwebs. Use your imagination. In any case, I’m not deterred, and ORMs actually make good sense for the project I’m thinking about, if I can find the right one.</p> <p>I thought about using MongoDB, but after thinking hard about work flows and data relationships in my site, I think a regular, SQL-based relational DB would just be a better fit in this instance. I’d probably like to stick with MySQL or MS SQL Server, initially. (This is not to say anything about the relative merits of one type of DB over another, just that one type seems to be a more natural fit for this particular problem domain and I’d like not to be shoehorning in the wrong software for the wrong reasons. Don’t get me involved in your holy war.)</p> <p>The problem, I discovered, is finding a good ORM that’s worth using, doesn’t introduce more hassle than it saves, and actually works (with examples) on Mono.
So far, my search is proving to be a little bit fruitless.</p> <ol> <li><strong><a href="">NHibernate</a></strong> seems like a common and popular choice, but the large amounts of required XML configuration make me sick to my stomach. I would far prefer something that I can do in pure C# code without large amounts of external config.</li> <li><strong><a href="">Castle ActiveRecord</a></strong> builds an ActiveRecord-like interface on top of NHibernate. In theory you get all the power of NHibernate without the XML headaches. However, this package is listed on the castle website as being “Archived” and “no longer being worked on”. Also, I can’t find any real examples of using it on Mono. I’m not going to start a new project (which presumably could be active for years) by starting on an old and unmaintained foundation.</li> <li><strong><a href="">db4o</a></strong> It looks to me like this little project uses its own custom DB file format and doesn’t connect to existing databases. I think I’d really like to stick with an existing DB, and not use something custom.</li> <li><strong><a href="">Linq-To-SQL</a></strong>, probably using the SQLMetal code generator, would seem like a decent option except it doesn’t seem to be well-supported on Mono, and there’s the issue of having to generate a whole bunch of pure-data objects, which will need to be laboriously mapped to and from my actual object type definitions. I’ve seen the kinds of morass that this kind of situation can lead to, and I’m not interested in going this route if it is even possible to traverse.</li> <li><strong><a href="">Simple.Data</a></strong> is a newer option which uses all sorts of fancy modern C# features to provide an extremely flexible, extremely natural-looking interface for accessing a database. It’s supposed to work on Mono, but I’ve not figured out a good way to get it (and a MySQL connector, and prerequisites) installed in a reasonable way for Mono.
The docs suggest NuGet, but I can’t get NuGet working on my box (and the <a href="">NuGet devs don’t seem to care about Mono too much</a>).</li> </ol> <p>Overall, the experience of trying to get this project working on Mono has been frustrating. I understand what a big engineering task Mono is in general, and how much work goes into getting a diverse ecosystem of software to work together nicely on a VM that’s supposed to be cross-platform with low barriers to entry. I get all that. However, for the purposes of this project I really wanted to start writing some code sooner rather than later and not have to fight with so much infrastructural stuff. I suppose I have a few options: I can wait till I get a new laptop and do things on a Windows partition or VM instead. Or, I can keep fighting with this setup to try and get things to work. Finally, I guess I could port over my idea to Ruby on Rails, another platform that I’m interested in learning more about.</p> <h2>More IO Work?</h2> <p><em>2012-11-21</em></p> <p>I might not be too bright. Either that or I might not have a great memory, or maybe I’m just a glutton for punishment. Remember the big IO system rewrite I completed only a few weeks ago? Remember how much of a huge hassle that turned into and how burnt-out I got because of it? Apparently I don’t, because I’m back at it again.</p> <p>Parrot hacker brrt came to me with a problem: After the io_cleanup merge he noticed that his mod_parrot project doesn’t build and pass tests anymore. This was sort of expected; he was relying on lots of specialized IO functionality and I broke a lot of specialized IO functionality. Mea culpa. I had a few potential fixes in mind, so I tossed around a few ideas with brrt, put together a few small branches and think I’ve got the solution.</p> <p>The problem, in a nutshell, is this: In mod_parrot brrt was using a custom Winxed object as an IO handle.
By hijacking the standard input and output handles he could convert requests on those handles into NCI calls to Apache and all would just work as expected. However, with the IO system rewrite, IO API calls no longer redirect to method calls. Instead, they are dispatched to new IO VTABLE function calls which handle the logic for individual types.</p> <p><strong>First question</strong>: How do we recreate brrt’s custom functionality, by allowing custom bytecode-level methods to implement core IO functionality for custom user types?</p> <p><strong>My Answer</strong>: We add a new IO VTABLE, for “User” objects, which can redirect low-level requests to PMC method calls.</p> <p><strong>Second Question</strong>: Okay, so how do we associate this new User IO VTABLE with custom objects? Currently the <code>get_pointer_keyed_int</code> VTABLE is used to get access to the handle’s <code>IO_VTABLE*</code> structure, but bytecode-level objects cannot use <code>get_pointer_keyed_int</code>.</p> <p><strong>My Answer</strong>: For most IO-related PMC types, the kind of <code>IO_VTABLE*</code> to use is statically associated with that type. Socket PMCs always use the Socket IO VTABLE. StringHandle PMCs always use the StringHandle IO VTABLE, etc. So, we can use a simple map to associate PMC types with specific IO VTABLEs. Any PMC type not in this map can default to the User IO VTABLE, making everything “just work”.</p> <p><strong>Third Question</strong>: Hold your horses, what do you mean “most” IO-related PMC types have a static IO VTABLE? Which ones don’t, and how do we fix it?</p> <p><strong>My Answer</strong>: The big problem is the FileHandle PMC. Due to some legacy issues the FileHandle PMC has two modes of operation: normal File IO and Pipe IO.
I guess these two ideas were conflated together long ago because internally the details are kind of similar: Both files and pipes use file descriptors at the OS level, and many of the library calls to use them are the same, so it makes sense not to duplicate a lot of code. However, there are some nonsensical issues that arise because pipes and files are not the same: Files don’t have a notion of a “process ID” or an “exit status”. Pipes don’t have a notion of a “file position” and cannot do methods like <code>seek</code> or <code>tell</code>. Parrot uses the <code>"p"</code> mode specifier to tell a FileHandle to be in Pipe mode, which causes the IO system to select either the File or the Pipe IO VTABLE for each call. Instead of this terrible system, I suggest we separate out this logic into two PMC types: FileHandle (which, as its name suggests, operates on files) and Pipe. By breaking up this one type into two, we can statically map individual IO VTABLEs to individual PMC types, and the system just works.</p> <p><strong>Fourth Question</strong>: Once we have these maps in place, how do we do IO with user-defined objects?</p> <p><strong>My Answer</strong>: The User IO VTABLE will redirect low-level IO requests into method calls on these PMCs. I’ll break <code>IO_BUFFER*</code> pointers out into a new PMC type of their own (IOBuffer) and users will be able to access and manipulate these things from any level. We’ll attach buffers to arbitrary PMCs using named properties, which means we can attach buffers to <em>any PMC</em> that needs them.</p> <p>So that’s my chain of thought on how to solve this problem. I’ve put together three branches to start working on this issue, but I don’t want to get too involved in this code until I get some buy-in from other developers. The FileHandle/Pipe change is going to break some existing code, so I want to make sure we’re cool with this idea before we make breaking changes and need to patch things like NQP and Rakudo.
Here are the three branches I’ve started for this:</p> <ul> <li><code>whiteknight/pipe_pmc</code>: This branch creates the new Pipe PMC type, separate from FileHandle. This is the breaking change that we need to make up front.</li> <li><code>whiteknight/io_vtable_lookup</code>: This branch adds the new IOBuffer PMC type, implements the new IO VTABLE map, and implements the new properties-based logic for attaching buffers to PMCs.</li> <li><code>whiteknight/io_userhandle</code>: This branch implements the new User IO VTABLE, which redirects IO requests to methods on PMC objects.</li> </ul> <p>Like I said, these are all very rough drafts so far. All three branches build, but they don’t necessarily pass all tests or look very pretty. If people like what I’m doing and agree it’s a good direction to go in, I’ll continue work in earnest and see where it takes us.</p> <h2>New Job</h2> <p><em>2012-11-20</em></p> <p>One thing that’s been eating up my time (and energy, and attention) lately has been a hunt for a new job. I had intended to look around passively for a while because I wasn’t in any big hurry. However, once the recruiters got word that I was looking, things started to move much more quickly. Suddenly I was getting dozens of phone calls and dozens of emails every day. I was doing phone screens and going on interviews. All of this while trying not to impact my current job too much.</p> <p>First things first: I’m not leaving <a href="">WebLinc</a> because I’m unhappy with it. Also, I don’t think they’re unhappy with me. WebLinc is a <em>great</em> place to work, and I’m thankful for the time I’ve spent there. If you’re in the Philadelphia area and you know ColdFusion, Ruby on Rails, and/or have solid web fundamentals (JS/CSS/HTML) or graphic design experience (or sales, or project management, etc.) <a href="">you should consider applying</a>.
It’s a very hip young organization with great talent, a rapidly growing and diverse clientele, and some real opportunity to do cool things. Also, there’s a cool bar/restaurant on premises, and the company has a good (and growing) relationship with open source and the developer community at large. If you’re young and talented and care about the craft of web development, definitely consider WebLinc in your job search. You won’t be disappointed.</p> <p>So why am I leaving? WebLinc historically has had two main platforms: ColdFusion and ASP.NET. Between the two, the ColdFusion team has had some of the biggest project successes and the more demonstrated ability to scale up the size of its team. When you’re a company that’s growing as fast as WebLinc, the ability to scale up your team quickly, to meet deadlines and to keep to budgets are all very important. The ColdFusion team was doing these things better than .NET (for a variety of reasons, not the least of which were endemic to the platform itself). This led to more sales for the ColdFusion team, and a larger, steadier stream of work. At some point the decision was made to devote resources going forward into ColdFusion (and a small, but growing, Ruby on Rails team) and not devote new resources to .NET. This has nothing to do with the relative theoretical merits of ASP.NET vs ColdFusion and, in my opinion, has nothing to do with the quality of developers they had working on those platforms. The reasons why one team was doing better than the other team aren’t really worth exploring at this point, but from a business perspective it was clear where effort and resources needed to be devoted going forward.</p> <p>I started looking around for a variety of reasons. I could have stayed for a while in my current position, riding the waves of boom and bust that are inherent in any job that bills hourly for maintenance.
I could have started transitioning over towards the Ruby on Rails team but chose not to go in that direction, yet. Instead of sitting around and hoping things went well, I wanted to take a little bit more control of my situation. I started looking passively at first, but once the recruiters got involved things started moving quickly, and the rest is history. I’ve written a few notes up about my job hunt and my dealings with various recruiters. These notes may turn into additional blog posts (now that I have time and energy to try blogging more).</p> <p>In early December I’ll be starting at <a href="">Halfpenny Technologies</a>, a small but growing company involved with electronic medical records and related areas. The team is small, the company is growing rapidly, and they have some very real and very interesting technical challenges bubbling to the forefront. At the interview they were throwing around words like “ownership” and “leadership”, and were talking about some very interesting new technologies. Combine that with a few other factors, and the decision was actually an easy one for me to make.</p> <p>I have a good idea about what kinds of work I’m going to be doing there but I don’t want to talk about it quite yet. In reality, you don’t know anything until you start working and get knee-deep in code. I also don’t know what their policies and attitudes are on blogging and public commentary, but I’ll say what I can when I’m confident enough to say it.</p> <p>So starts a new chapter in my career, one that I’m hoping lasts quite a while and takes me in some cool new directions. Again, I’ll post more when I have more to say.</p> <h2>September Status</h2> <p><em>2012-09-14</em></p> <p>First, some personal status:</p> <h3 id="personal-status">Personal Status</h3> <p>I haven’t blogged in a little while, and there are a few reasons for that.
I’ll list them quickly:</p> <ol> <li>Work has been…tedious lately and when I come home I find that I want to spend much less time looking at a computer, especially any computer that brings more stress into my life. Also,</li> <li>…</li> <li>The <code>io_cleanup1</code> work.</li> </ol> <p>I’m going to do what I can to post something of a general Parrot update here, and hopefully I can get back in the habit of posting a little bit more regularly again.</p> <h3 id="iocleanup1-status"><code>io_cleanup1</code> Status</h3> <p><code>io_cleanup1</code> did indeed merge with almost no problems reported at all. I’m very happy about that work, and am looking forward to pushing the IO subsystem to the next level. Before I started <code>io_cleanup1</code>, …</p> <p>The <code>io_cleanup</code> branch did take a lot of time and energy, much more than I initially expected. But, it’s over now and I’m happy with the results, so now I can start looking on to the next project on my list.</p> <h3 id="threads-status">Threads Status</h3> <p>…</p> <p>Here’s some example code, adapted from the example tadzik had, which fails on the threads branch:</p> <pre><code>function main[main](var args) {
    var x = 1;
    var t = new 'Task'(function() {
        x++;
        say(x);
    });
    ${ schedule t };
    ${ wait t };
    say("Done!");
}
</code></pre> <p>Running this code on the threads branch creates anything from an assertion failure to a segfault. Why?</p> <p>… <code>x</code>, we’re passing a <code>Proxy</code> PMC, which points to <code>x</code>. This part works as expected.</p> <p>When we invoke a closure, we update the context to point to the “outer” context, so that lexical variables (“x”, in this case) can be looked up correctly.
However, instead of having an outer which is a <code>CallContext</code> PMC, we have a <code>Proxy</code> to a <code>CallContext</code>.</p> <p>An overarching problem with <code>CallContext</code> …</p> <p>…</p> <p>I identified this issue earlier in the week and have been thinking it over for a few days. I’m not sure I’ve found a workable solution yet. At least, I haven’t found a solution that wouldn’t impose some limitations on semantics.</p> <p>For instance, in the code example above, the implicit expectation is that the x variable lives on the main thread, but is updated on the second thread. And those updates should be reflected back on main after the <code>wait</code> opcode.</p> <p>…</p> <h3 id="other-status">Other Status</h3> <p>…</p> <p>Things have otherwise been a little bit slow lately, but between <code>io_cleanup1</code>, <code>threads</code> and rurban’s pbc work, we’re still making some pretty decent progress on some pretty important areas. If we can get threads fixed and merged soon, I’ll be on to the next project in the list.</p> <h2>io_cleanup1 Lands!</h2> <p><em>2012-08-27</em></p> <p>FINALLY! The big day has come. I’ve just merged <code>whiteknight/io_cleanup1</code> to master. Let us rejoice!</p> <p>When <a href="/2012/05/27/io_cleanup_first_round.html">I started the project</a>, … <code>src/io/*</code> and started rewriting from the ground up.</p> <p>…</p> <p>Where to go from here? My TODO list for the near future is very short:</p> <ol> <li>Threads</li> <li>6model</li> <li>More IO work</li> </ol> <p>The Threads branch, the magnum opus of Parrot hacker <strong>nine</strong>, …</p> <p>…</p> <p>Finally, the <code>whiteknight/io_cleanup1</code> …</p> <h2>Parrot 4.7.0 “Hispaniolan” Released!</h2> <p><em>2012-08-22</em></p> <p>On behalf of the Parrot team, I’m proud to announce Parrot 4.7.0, also known as “Hispaniolan”.
<a href="">Parrot</a> is a virtual machine aimed at running all dynamic languages.</p> <p>Parrot 4.7.0 is available on <a href="">Parrot’s FTP site</a>, or by following the download instructions at <a href=""></a>. For those who would like to develop on Parrot, or help develop Parrot itself, we recommend using Git to retrieve the source code to get the latest and best Parrot code.</p> <p>Parrot 4.7.0 News:</p> <pre><code>- Core + Added .all_tags() and .all_tagged_pmcs() methods to PackfileView PMC + Several build and coding standards fixes </code></pre> <p>The SHA256 message digests for the downloadable tarballs are:</p> <pre><code </code></pre> <p>Many thanks to all our contributors for making this possible, and our sponsors for supporting this project. Our next scheduled release is 18 September 2012.</p> <p>The release is indeed out a day late. It’s not that I forgot about it, it’s just that I can’t read a calendar and HOLY CRAP, IT’S WEDNESDAY ALREADY? When did that happen? So, and I can’t stress this enough, <strong>Mea Culpa</strong>.</p> <img src="" height="1" width="1" alt=""/> io_cleanup1 Done? 2012-07-22T00:00:00+00:00 <p>This morning I made a few last commits on my <code>whiteknight/io_cleanup1</code> branch, and I’m cautiously optimistic that the branch is now ready to merge. The last remaining issue, which has taken the last few days to resolve, has been fixing readine semantics to match some old behavior.</p> <p>A few days ago I wrote a post about <a href="/2012/06/13/io_readline.html">how complicated readline is</a>. At the time, I thought I had the whole issue under control. But then Moritz pointed out a problem with a particular feature unique to Socket that was missing in the new branch.</p> <p>In master, you could pass in a custom delimiter sequence as a string to the <code>.readline()</code> method. 
Rakudo was using this feature like this:</p> <pre><code>str = s.readline("\r\n") </code></pre> <p>Of course, as I’ve pointed out in the post about readline and elsewhere, there was no consistency between the three major builtin types: FileHandle, Socket and StringHandle. The closest thing we could do with FileHandle is this:</p> <pre><code>f.record_separator("\n"); str = f.readline(); </code></pre> <p>Notice two big differences between FileHandle and Socket here: First, FileHandle has a separate <code>record_separator</code> method that must be called separately, and the record separator is stored as state on the FileHandle between <code>.readline()</code> calls. Second, FileHandle’s record separator sequence may only be a single character. Internally, it’s stored as an <code>INTVAL</code> for a single codepoint instead of as a <code>STRING*</code>, even though the <code>.record_separator()</code> method takes a <code>STRING*</code> argument (and extracts the first codepoint from it).</p> <p>Initially in the <code>io_cleanup1</code> branch I used the FileHandle semantics to unify the code because I wasn’t aware that Socket didn’t have the same restrictions that FileHandle did, even if the interface was a little bit different. I also didn’t think that the Socket version would be so much more flexible despite the much smaller size of the code to implement it. In short, I really just didn’t look at it closely enough and assumed the two were more similar than they actually were. Why would I ever assume that this subsystem ever had “consistency” as a driving design motivation?</p> <p>So I rewrote readline. From scratch.</p> <p>The new system follows the more flexible Socket semantics for all types. Now you can use almost any arbitrary string as the record separator for <code>.readline()</code> on FileHandle, StringHandle and Socket. 
In the <code>whiteknight/io_cleanup1</code> branch, as of this morning, you can now do this:</p> <pre><code>var f = new 'FileHandle';
f.open('foo.txt', 'r');
f.record_separator("TEST");
string s = f.readline();
</code></pre> <p>…And you can also do this, which is functionally equivalent:</p> <pre><code>var f = new 'FileHandle';
f.open('foo.txt', 'r');
string s = f.readline("TEST");
</code></pre> <p>The same two code snippets should work the same for all built-in handle types. For all types, if you don’t specify a record separator by either method, it defaults to “\n”.</p> <p>Above I mentioned that almost any arbitrary string should work. I use the word “almost” because there are some restrictions. First and foremost, the delimiter string cannot be larger than half the size of the buffer. Since buffers are sized in bytes, this is a byte-length restriction, not a character-length restriction. In practice we know that delimiters are typically things like “\n”, “\r\n”, “,”, etc. So if the buffer is a few kilobytes this isn’t a meaningful limitation. Also, the delimiter must use the same encoding as the handle, or it must be able to convert to that encoding. So if your handle uses <code>ascii</code>, but you pass in a delimiter which is <code>utf16</code>, you may see some exceptions raised.</p> <p>I think that the work on this branch, save for a few small tweaks, is done. I’ve done some testing myself and have asked for help to get it tested by a wider audience. Hopefully we can get this branch merged this month, if no other problems are found.</p>
In this example, we extend our LCDRange class to include a text label. We also provide something to shoot at.
The LCDRange now has a text label.
class QLabel;
We forward-declare QLabel because we only need a pointer to it in the class definition.
class LCDRange : public QVBox
{
    Q_OBJECT
public:
    LCDRange( QWidget *parent=0, const char *name=0 );
    LCDRange( const char *s, QWidget *parent=0,
              const char *name=0 );
We have added a new constructor that sets the label text in addition to the parent and name.
const char *text() const;
This function returns the label text.
void setText( const char * );
This slot sets the label text.
private:
    void init();
Because we now have two constructors, we have chosen to put the common initialization in the private init() function.
QLabel *label;
We also have a new private variable: a QLabel. QLabel is one of Qt's standard widgets and can show a text or a pixmap with or without a frame.
#include <qlabel.h>
Here we include the QLabel class definition.
LCDRange::LCDRange( QWidget *parent, const char *name ) : QVBox( parent, name ) { init(); }
This constructor calls the init() function, which contains the common initialization code.
LCDRange::LCDRange( const char *s, QWidget *parent, const char *name ) : QVBox( parent, name ) { init(); setText( s ); }
This constructor first calls init() and then sets the label text.
void LCDRange::init() { QLCDNumber *lcd = new QLCDNumber( 2, this, "lcd" ); slider = new QSlider( Horizontal, this, "slider" ); slider->setRange( 0, 99 ); slider->setValue( 0 ); label = new QLabel( " ", this, "label" ); label->setAlignment( AlignCenter ); connect( slider, SIGNAL(valueChanged(int)), lcd, SLOT(display(int)) ); connect( slider, SIGNAL(valueChanged(int)), SIGNAL(valueChanged(int)) ); setFocusProxy( slider ); }
The setup of lcd and slider is the same as in the previous chapter. Next we create a QLabel and tell it to align the contents centered (both vertically and horizontally). The connect() statements have also been taken from the previous chapter.
const char *LCDRange::text() const { return label->text(); }
This function returns the label text.
void LCDRange::setText( const char *s ) { label->setText( s ); }
This function sets the label text.
The CannonField now has two new signals: hit() and missed(). In addition it contains a target.
void newTarget();
This slot creates a target at a new position.
signals: void hit(); void missed();
The hit() signal is emitted when a shot hits the target. The missed() signal is emitted when the shot moves beyond the right or bottom edge of the widget (i.e., it is certain that it has not and will not hit the target).
void paintTarget( QPainter * );
This private function paints the target.
QRect targetRect() const;
This private function returns the enclosing rectangle of the target.
QPoint target;
This private variable contains the center point of the target.
#include <qdatetime.h>
We include the QDate, QTime, and QDateTime class definitions.
#include <stdlib.h>
We include the stdlib library because we need the rand() function.
void CannonField::newTarget() { static bool first_time = TRUE; if ( first_time ) { first_time = FALSE; QTime midnight( 0, 0, 0 ); srand( midnight.secsTo(QTime::currentTime()) ); } QRegion r( targetRect() ); target = QPoint( 200 + rand() % 190, 10 + rand() % 255 ); repaint( r.unite( targetRect() ) ); }
This private function creates a target center point at a new "random" position.
We use the rand() function to fetch random integers. The rand() function normally returns the same series of numbers each time you run a program. This would make the target appear at the same position every time. To avoid this, we must set a random seed the first time this function is called. The random seed must also be random in order to avoid equal random number series. The solution is to use the number of seconds that have passed since midnight as a pseudo-random value.
First we create a static bool local variable. A static variable like this one is guaranteed to keep its value between calls to the function.
The if test will succeed only the first time this function is called because we set first_time to FALSE inside the if block.
Then we create the QTime object midnight, which represents the time 00:00:00. Next we fetch the number of seconds from midnight until now and use it as a random seed. See the documentation for QDate, QTime, and QDateTime for more information.
Finally we calculate the target's center point. We keep it within the rectangle (x=200, y=35, width=190, height=255), (i.e., the possible x and y values are x = 200..389 and y = 35..289).
Note that rand() returns a random integer >= 0.
void CannonField::moveShot() { QRegion r( shotRect() ); timerCount++; QRect shotR = shotRect(); if ( shotR.intersects(targetRect()) ) { autoShootTimer->stop(); emit hit();

This first part is as before, except that we now fetch the shot rectangle and check whether it intersects the target rectangle. If it does, the shot has hit the target: we stop the shoot timer and emit the hit() signal.
} else if ( shotR.x() > width() || shotR.y() > height() ) { autoShootTimer->stop(); emit missed();
This if statement is the same as in the previous chapter, except that it now emits the missed() signal to tell the outside world about the failure.
} else {
And the rest of the function is as before.
CannonField::paintEvent() is as before, except that this has been added:
if ( updateR.intersects( targetRect() ) ) paintTarget( &p );
These two lines make sure that the target is also painted when necessary.
void CannonField::paintTarget( QPainter *p ) { p->setBrush( red ); p->setPen( black ); p->drawRect( targetRect() ); }
This private function paints the target; a rectangle filled with red and with a black outline.
QRect CannonField::targetRect() const { QRect r( 0, 0, 20, 10 ); r.moveCenter( QPoint(target.x(),height() - 1 - target.y()) ); return r; }
This private function returns the enclosing rectangle of the target. Remember from newTarget() that the target point uses y coordinate 0 at the bottom of the widget. We calculate the point in widget coordinates before we call QRect::moveCenter().
LCDRange *angle = new LCDRange( "ANGLE", this, "angle" );
We set the angle text label to "ANGLE".
LCDRange *force = new LCDRange( "FORCE", this, "force" );
We set the force text label to "FORCE".
The LCDRange widgets look a bit strange - the built-in layout management in QVBox gives the labels too much space and the rest not enough. We'll fix that in the next chapter.
(See Compiling for how to create a makefile and build the application.)
Make a cheat button that, when pressed, makes the CannonField display the shot trajectory for five seconds.
If you did the "round shot" exercise from the previous chapter, try changing the shotRect() to a shotRegion() that returns a QRegion so you can have really accurate collision detection.
Make a moving target.
Make sure that the target is always created entirely on-screen.
Make sure that the widget cannot be resized so that the target isn't visible. Hint: QWidget::setMinimumSize() is your friend.
Not easy; make it possible to have several shots in the air at the same time. Hint: make a Shot object.
You're now ready for Chapter 13.
Source: http://vision.lbl.gov/People/qyang/qt_doc/tutorial1-12.html
Visualization is an important part of any data analysis: it helps us present data in pictorial or graphical form. Data visualization helps us
- Grasp information quickly
- Understand emerging trends
- Understand relationships and patterns
- Communicate stories to the audience
I work in the transportation domain, so I am fortunate to work with lots of data, and in the data analysis part of the task I often have to perform exploratory analysis. When it comes to visualization, my all-time favourite is the ggplot2 library (the plotting library of R, a statistical programming language), which is one of the most popular plotting tools. Recently I also started implementing the same plots in Python, because of the recent advancements in its libraries: I have observed a significant improvement in Python's data analysis tools, specifically for data manipulation, plotting and machine learning. So I decided to see whether Python's visualization tools offer flexibility similar to ggplot2, and tried several libraries: Matplotlib, Seaborn, Bokeh and Plotly. In my experience, Seaborn (static plots) and Plotly (interactive plots) cover the majority of exploratory analysis tasks with very few lines of code and little complexity.
After going through different plotting tools, especially in Python, I have observed that there are still challenges to face when implementing plots with the Matplotlib and Seaborn libraries, especially when you want them to be publication-ready. I went through these ups and downs while learning, so let me share my experience here.
The Seaborn library is built on top of the Matplotlib library and integrates with the data structures from pandas. The Seaborn blog series comprises the following five parts:
Part-1. Generating different types of plots using seaborn
Part-2. Facet, Pair and Joint plots using seaborn
Part-3. Seaborn’s style guide and colour palettes
Part-4. Seaborn plot modifications (legend, tick, and axis labels etc.)
Part-5. Plot saving and miscellaneous
In this article, we will explore and learn to generate facet, pair and joint plots using the Matplotlib and Seaborn libraries.

The article covers the following:
- FacetGrid( ) → Wrapper functions [relplot, catplot and lmplot]
- PairGrid( ) → Wrapper function [pairplot]
- JointGrid() → Wrapper function [jointplot]
- Code and dataset link
The first step is to load relevant plotting libraries.
import pandas as pd # data loading and manipulation import matplotlib.pyplot as plt # plotting import seaborn as sns # statistical plotting from palmerpenguins import load_penguins # Penguin dataset
Setting style and context
Seaborn offers five preset themes: darkgrid, whitegrid, dark, white, and ticks. The default theme is darkgrid. Here we will set the white theme to make the plots aesthetically pleasing.
Plot elements can be scaled using set_context( ). The four preset contexts, in order of relative size, are paper, notebook, talk and poster; notebook is the default. Here we are going to set it to paper and scale the font elements by a factor of 2.
sns.set_style('white') sns.set_context("paper", font_scale = 2)
About datasets
In this blog, we are primarily going to use the Tips dataset. The data was reported in a collection of case studies for business statistics, and the dataset is also available through the Seaborn Python package.
Source:
Bryant, P. G. and Smith, M. A. (1995), Practical Data Analysis: Case Studies in Business Statistics, Richard D. Irwin Publishing, Homewood, IL.
# Load tips data from seaborn libraries tips = sns.load_dataset("tips") print(tips.head())
In addition to the Tips dataset, we are going to use a second dataset named "Penguins" for a few plots. The Penguins dataset contains 343 observations and 8 variables (excluding the index), comprising the following:
species: a factor denoting penguin species (Adélie, Chinstrap and Gentoo)
island: a factor denoting island in Palmer Archipelago, Antarctica (Biscoe, Dream or Torgersen)
bill_length_mm: a number denoting bill length (millimeters)
bill_depth_mm: a number denoting bill depth (millimeters)
flipper_length_mm: an integer denoting flipper length (millimeters)
body_mass_g: an integer denoting body mass (grams)
sex: a factor denoting penguin sex (female, male)
year: an integer denoting the year of observation
One can load the Penguins dataset by calling the load_penguins( ) function. The dataset contains a few missing values, which we can drop by calling the .dropna( ) method.
# Load penguins dataset and remove na values penguins = load_penguins() penguins = penguins.dropna() print(penguins.head())
Let’s start with different facet plots one by one.
1. FacetGrid
FacetGrid helps in visualizing the distribution of one variable as well as the relationship between multiple variables separately within subsets of your dataset using multiple panels. A FacetGrid can be drawn with up to three dimensions by specifying a row, column, and hue.
The FacetGrid( ) function is useful when we want to plot a subset of data based on a categorical column, say for the tips dataset you want to see how the tip varies with the total bill amount but separately for each day. You can plot a subset of the data based on a categorical column by supplying it to column (col) or (row) argument.
The plotting mechanism is simple.
Step1: supply the data and categorical column to col or row arguments and create a facet grid plot object (here, g1).
Step2: apply a seaborn’s plot function using .map( ) method and supply x-axis and y-axis variables (columns).
Here, in the FacetGrid( ) I have faceted the plot based on the “day” variable column-wise. Next, supplied the seaborn’s scatterplot function through .map( ) method.
Step3: Plotting the final object using plt.show( ) function.
g1 = sns.FacetGrid(data = tips, col = "day", row_order = ["Sat", "Sun", "Thur", "Fri"]) g1.map(sns.scatterplot, "total_bill", "tip") plt.show()
FacetGrid( ) offers a lot of detailed functionality. For fast visualization, we can create similar plots using two simpler functions, relplot( ) and catplot( ).
1.1 Relational plot
The relplot( ) is used to plot relations especially when we want to observe the relationship between two continuous variables. For example, a relational plot could be a scatter plot.
Here, we used the relplot( ) function where we supplied two continuous variables on the x-axis and y-axis, followed by a dataset. Next, we supplied “scatter” in the “kind” argument as we want to generate a scatterplot. Next, we supplied the “day” variable in the column (col) argument, so that it plots different relational plots (scatterplots) based on the day-wise subset of data.
sns.relplot(x = "total_bill", y = "tip", data = tips, kind = "scatter", col = "day") plt.show()
1.2 Categorical Plot
Catplot( ) is another alternative but very useful when you are dealing with a categorical column. You can generate a count plot, bar plot, box plot and violin plot using the catplot function. The best part is that you can subset data by supplying a categorical column to row and column (col) parameters as arguments.
For, example here I have plotted the distribution (densities) of tips across gender over different days.
sns.catplot(x = "sex", y = "tip", data = tips, kind = "violin", col = "day") plt.show()
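Since catplot( ) supports several plot kinds, swapping the violin for a box plot is just a change of the kind argument. A sketch, again on a tiny stand-in DataFrame rather than the full Tips data:

```python
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # headless backend; drop this line in a notebook
import seaborn as sns
import matplotlib.pyplot as plt

# Tiny stand-in for the tips data
df = pd.DataFrame({
    "tip": [1.5, 3.0, 2.0, 5.0, 2.0, 4.0, 2.5, 3.5],
    "sex": ["Male", "Female"] * 4,
    "day": ["Thur", "Thur", "Fri", "Fri", "Sat", "Sat", "Sun", "Sun"],
})

# Same day-wise faceting as the violin example, but kind="box"
g = sns.catplot(x="sex", y="tip", data=df, kind="box", col="day")
plt.show()
```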
1.3 lmplot()
lmplot( ) is useful when we want to generate regression plots. The function has lots of features that make your regression visualization very easy and fun.
Here, we have generated a scatter plot with the best fit line between total_bill and tip. Next, we subsetted the plot across row and column based on sex and time variables. Next, we supplied “day” into hue to generate separate regression best fit lines for each category. Additionally, you can change the row or column order too.
col_order = ['Lunch','Dinner'] sns.lmplot(x = 'total_bill', y = 'tip', data = tips, col = "time", row = 'sex', row_order = ["Male", "Female"], hue = 'day', col_order = col_order) plt.show() plt.clf()
2. PairGrid
Seaborn's PairGrid( ) function can be used for plotting pairwise relationships of variables in a dataset. This type of plot is very useful when we want to see the relationship between multiple variables, as well as their distributions, in one plot.

Generating a PairGrid( ) plot requires the following steps:
- First, you need to generate a PairGrid( ) plotting object. Here, we have used the penguin dataset and supplied four features for pair-wise plotting.
- Next, supply a plotting function for the diagonal section using map_diag( ) function. Here we have plotted histograms for the diagonal section
- Finally, supply another plot function for the off-diagonal grids using map_offdiag( ). Here we have supplied plt.scatter to generate pairwise scatterplots for off-diagonal grids.
g = sns.PairGrid(penguins, vars=['bill_length_mm', 'bill_depth_mm', 'flipper_length_mm', 'body_mass_g']) g2 = g.map_diag(plt.hist) g3 = g2.map_offdiag(plt.scatter) plt.show()
2.1 Pair Plots
The pairplot( ) function is a convenience wrapper around many of the PairGrid functions: a quick plotting function that generates PairGrid-like plots for fast exploratory analysis.
This plotting function offers almost similar parameters. Here, the type of off-diagonal and diagonal plots are decided by supplying a plot function into the “kind” and diag_kind arguments respectively. You can also set colour palettes and use **kws arguments to supply additional details.
sns.pairplot(vars = ['bill_length_mm', 'bill_depth_mm', 'flipper_length_mm', 'body_mass_g'], data = penguins, kind = 'scatter', diag_kind = "hist", hue = 'species', palette = "Set1", diag_kws = {'alpha':.5}) plt.show() plt.clf()
Here, is another example of pairplot( ), where we have supplied a categorical column (Species) to hue and asked seaborn to fit regression line (kind: reg). Additionally, added Kernel Density Estimate (KDE) plots across the grid’s diagonal line.
sns.pairplot(vars = ['bill_length_mm', 'bill_depth_mm', 'flipper_length_mm', 'body_mass_g'], data = penguins, kind = 'reg', diag_kind = "kde", hue = 'species', palette = "Set1", diag_kws = {'alpha': 0.4}) plt.show() plt.clf()
3. JointGrid()
Seaborn’s JointGrid combines univariate plots such as histograms, rug plots and kde plots with bivariate plots such as scatter and regression plots.
Let’s assume that we want to plot a bivariate plot (total_bill vs tip) and also want to plot a univariate distribution (histogram) for each variable. The plot generation comprised of the following steps:
Step1: The first step is to create a JointGrid( ) object by supplying the x-axis, y-axis variables and dataset.
Step2: Next, supply the plotting functions through a .plot( ) function. The first argument is for the bivariate plot and the second argument is for the univariate plot.
sns.set_style("whitegrid") g = sns.JointGrid(x="total_bill", y="tip", data = tips) g.plot(sns.regplot, sns.histplot) plt.show()
3.1 jointplot( )
The jointplot( ) function is a convenience wrapper around many of the JointGrid functions. It is a quick plotting function used for fast exploratory analysis. Here, we have reproduced the same plot (as discussed above) by just supplying the "reg" (regression) argument to the kind parameter.
sns.jointplot(x = "total_bill", y = "tip", kind = 'reg', data = tips) plt.show() plt.clf()
Here is an example of a residual plot generated by supplying the “resid” argument to the kind parameter.
sns.jointplot(x = "total_bill", y = "tip", kind = 'resid', data = tips) plt.show() plt.clf()
We can plot more sophisticated plots using jointplot( ) parameters. It is even possible to overlay some of the JointGrid plots on top of the standard jointplot.

In the following example, we have supplied the bins argument for the histogram using the marginal_kws parameter. Additionally, we added a kdeplot on top using the plot_joint( ) method.
g = (sns.jointplot(x = "total_bill", y = "tip", kind = 'scatter', data = tips, marginal_kws = {"bins": 20}).plot_joint(sns.kdeplot)) plt.show() plt.clf()
Matplotlib and Seaborn are really awesome plotting libraries. I would like to thank all the contributors for contributing to Matplotlib and Seaborn libraries.
I hope you learned something new!
If you learned something new and liked this article, share it with your friends and colleagues. If you have any suggestions, drop a comment.
Featured image by Gerd Altmann from Pixabay
Source: https://onezero.blog/generate-publication-ready-facet-pair-and-joint-plots-using-seaborn-library/
The New York Times. (Score:3, Insightful)
That would involve coding in C++ for a week. Eew.
Straight up C, no problem. Awesome language. Love it.
C++ requires me to mentally juggle too many balls in the air, it is mental effort that I could be expending on writing actual code.
Re:A more appropriate quote seems to be... (Score:4, Funny)
Try coding in ObjectiveC and Cocoa for a week, you'll learn what a really good library looks like.
No namespaces. More brackets than Lisp. Lame.
;)
Re:A more appropriate quote seems to be... (Score:5, Insightful)
Or just a better moderation system in general. Unfortunately this is the reality of Slashdot today, where pointing out why DRM is bad will get you modded overrated: [slashdot.org]
Whilst providing additional information that hasn't yet been posted but that demonstrates a valid counter point to the post of the parent you're responding to gets you modded redundant: [slashdot.org]
Just like real democracies, when you let the idiot masses vote, you're bound to get some idiotic results.
I'm not a fan of Apple, and I dislike Cocoa and Objective-C, but you getting moderated troll for making the point you did is just utterly stupid- it was a fair comment. It's just sad that there are people incapable of grasping the concept of moderating a post based on it's merits, rather than based on rabid fanboyism and ignorance.
It seems the best way to get modded up is to post some populist bullshit, that might well be completely and utterly fucking incorrect, but that appeals to the ignorant and uninformed. The problem with democratic moderation is that you basically just end up reinforcing the ideology that becomes dominant and driving away people with other often equally accurate points, so that it basically becomes a self-reassuring wankfest of ignorance.
Still, I carry on reading because every once in a while there are some posts that really are insightful and worth reading, it's just a shame they become ever rarer and rarer.
Re:A more appropriate quote seems to be... (Score:5, Insightful)
Perhaps that's what it looks like to you, on its surface.
Microsoft tried to seed as much as they could into universities with really low prices on everything, including developer tools. NGOs got cheap stuff as well in many cases.
Microsoft did something more onerous, however: their software had poor quality, and they fought with abounding obfuscation, the FOSS movement. Add in to the equation lots of bad press about their bad behavior (and legal posturing) in the US, Canada, and the EU, to mention just a few jurisdictions. Salt the mess with mind-boggling security problems *of their own making*. Add in way too many versions of everything, requiring developers to have to constantly recode for variants.
Sprinkle in losing momentum in telephony, smartphones, gaming, search, and everything else they got their fingers on. Wanna be a part of a winning team? It used to be a meal ticket to sign on to Windows. No more.
Re:An appropriate quote seems to be... (Score:5, Insightful)
It's not about losing money as much as losing relevance. Lose relevance and money will follow eventually.
Re:An appropriate quote seems to be... (Score:4, Insightful).
Odd, it seems like you're describing the world today, as opposed to the world 10 years from now.
Re:An appropriate quote seems to be... (Score:5, Insightful)
one can make money while sliding down the slippery slope into the valley of irrelevance
Re:An appropriate quote seems to be... (Score:5, Insightful). [rackspacecloud.com]
Windows machine at Rackspace Cloud: 256m *not available, needs more memory*, 4.0 cents/hr for 512m. [rackspacecloud.com].
Re:MSDN? Hello? (Score:5, Insightful)
"Frankly, if you dont have $2K for an Enterprise MSDN licensing, you really have no business doing a start up, do you?"
Frankly, if you put your money out of the objective of achieving revenue -like spending even if only one single dollar on unneeded licenses, you really have no business doing a start up, do you?
Re:MSDN? Hello? (Score:5, Insightful)
What is all this bitching about the price of tools, with MSDN out there for almost nothing? Frankly, if you dont have $2K for an Enterprise MSDN licensing, you really have no business doing a start up, do you?
The point of starting a company is to make money. Money for you, and money for the investors. Lighting a pile of money on fire just to get access to development tools is throwing away money that could be in your pocket or your investors.
If you can do something for free, why would you choose to pay $2,000 for it?
Back in the late 90's, I developed for a Microsoft shop. By 2001, I was playing with linux, and by 2002 I made the switch. I haven't run into anything I couldn't do just as easily in Linux.

Re:MSDN? Hello? (Score:5, Insightful)
Ok pop quiz, people. Is the above person a young hip developer, or a douchebag?
Re:MSDN? Hello? (Score:4, Insightful)
Ya, the message could have been made in a non-inflammatory tone. But agree with the overall message. Regardless of what "start-up" you plan on launching, it will still require a small amount of fuel to spark ignition. That's called Capital Investment. It may be used to purchase rent, electricity, employees, and yes...licensing if that's a requirement to achieving your goal.
"no longer the biggest software company?" (Score:4, Insightful)
Is MS losing money ?
"Microsoft reports first YoY revenue slide in company history"
...so I guess that would be a "yes". [boygeniusreport.com]
no longer the biggest software company in the world ?
As of close on Tuesday 6 Jul 2010:
Microsoft market cap: 208.75B
Apple market cap: 226.24B [yahoo.com]
...so I'm guessing that one's a "yes", too...
retrenching ?
Well, you got me on this one. I guess if they were actually retrenching, they wouldn't be reporting losses in revenue or be only the second largest software company in the world. So that one's a "no".
Possibly they should get off their butts, and instead of throwing the chair they were sitting on, they should actually retrench.
-- Terry
Allow me to (hopefully) to be the first to say.... (Score:5, Insightful)
Boo-fucking-hoo.
Re:Allow me to (hopefully) to be the first to say. (Score:4, Insightful)
Precisely. Microsoft lost on two counts, both self-imposed, and they are getting what they deserve.
They emphasized crap to lock users in instead of real cutting edge development, which is not fun for developers or users, and which generates crap code, twisted beyond comprehension, byzantine, ugly. IBM had this same problem as a result of their anti-trust shenanigans, and apparently Microsoft chose to repeat history.
Microsoft also emphasized control freakery beyond all reason, in addition to the twiddly feature lockin, what with siccing the BSA on "pirates", horrible copy protection, license verification requiring internet access to run, on and on, making use of their software more and more hassle. The message was clear -- go somewhere else.
People would put up with either of these to some extent, but the combination made them simply not worth the hassle. Crap products which make life difficult are dead products.
All they had to do was stay bleeding edge, drop the lockin featuritis, and compete on quality. They'd have the market sewn up.
Re:Microsoft out of favour with hipster developers (Score:5, Insightful)
This isn't MS whinging, this is some idiot at the NYT whinging.
MS's MO is to indoctrinate people at the business level not the developer level as it's the business people who sign pay cheques. It may appear that MS is having a hard time wooing developers when MS spends all its time and effort wooing MBA's.
This is also why all the innovative work is done in F/OSS. You cant schedule new idea's into a project.
Too narrow (Score:4, Insightful)
The microsoft software stack is designed so that service providers can siphon money off at the point of delivery. Antivirus is a good example. Yeah we sold you an OS but you need this extra thing to make it secure, didn't you know that?
So its a great way to make money if you stay with their targeted solutions. But if you want to do something totally new the benefits of using microsoft aren't really there so developers look elsewhere.
Re:Too narrow (Score:4, Insightful)
I don't get you point ?
how is that worse than Apple's model that actually siphons off 30% of all content and apps you install on your iDevice, and censors what apps and content are allowed, and takes a cut of wireless contracts ?
the issue for MS is that they DON'T make money on content, software and services sold for their machines... but that's also the cause for their success?
Re: (Score:3, Insightful)
OK, Almost free. At the end of two years, you have to pay them $200.
Some people (especially startups with no money) would not consider $200 "almost free". In fact, there's no such thing as almost free, it's like being pregnant. It either is or it isn't, and free will always be cooler than not free.
MS got greedy and forgot the reason for their success was developers. They could have given away their developer tools all along. They were making enough money on Windows & Office, but they weren't satisfied with that and kept reaming developers for their tools, which had
Re: (Score:3, Insightful)
C'mon dude. Bizspark is mostly a networking concept. Not a cool-application platform.
This article isn't about VC-level startups, it's about students building the NextSmallThing in their dorm room. For the price of a bank of old servers, someone can build a web app and get a cool company started. MS is never going to deliver the performance/cost ratios of an old fashioned LAMP stack. It's not a business model that competes that way. Plus, that stack is just a gateway anymore - the real fun is in…
Never confuse (Score:5, Insightful)
Microsoft's program to seed start-ups with its software for free requires the fledgling companies to meet certain guidelines and jump through hoops to receive software — while its free competitors simply allow anyone to download products off a website with the click of a button.
This assumes that cost is the only factor that start-ups are weighing when determining software. Some of them may legitimately pick open source because it's better or that MS doesn't offer a certain software. For many, they may go to cheaper solutions like OpenOffice instead of MS Office purely on cost. But they may use Apache instead of IIS for performance reasons.
If cost is the only reason, wouldn't it be likely that once these start-ups are established, they may not like having to pay full price and may turn to competitors for cheaper alternatives?
Re:Never confuse (Score:5, Insightful)

Right and wrong (Score:5, Insightful)
As a recent CS grad, I agree 100% that the cost to get up and running for MS is a pretty huge deal.
But another big draw in the FOSS world (for me, at least) is the freedom to write code that isn't locked down to particular technology or other setup. I see Microsoft (and Apple, and a few others) as wanting to get us locked into their way of doing things, completely ignoring the possibility of 'change' that doesn't come from them.
I would much rather give life to some core idea and then see how people with other interests and thoughts can expand and evolve what I started.
I think parent (and GP) has it right... (Score:3, Insightful)
I suspect the 'locking down to technology' is a pretty serious issue, along with the cost of the sophisticated development environment. And, speaking of development environment, the new graduates are going to be very comfortable with the social networking side of the FOSS world. When there is a problem with a tool, or if they need help with an esoteric problem, the help is read
FOSS isn't just price (Score:5, Insightful)
What Microsoft still doesn't seem to understand is that the lure of FOSS goes beyond what's "hip", and also goes beyond the price.
And I love these quotes: "We did not get access to kids as they were going through college" Translation: "We did not infiltrate schools enough to make sure they had no exposure to anything but our stuff".
And: "Microsoft's program to seed start-ups with its software for free requires the fledgling companies to meet certain guidelines and jump through hoops to receive [free/discounted] software" Translation: "We should have worked harder to make it even easier to get people/companies hooked on our proprietary solutions".
Oh well.
Speed (Score:4, Insightful)
Microsoft quite simply is too slow. They build nice tools, but they do so slowly. Far too slowly for the pace of the Internet. If they were an innovative company that might not be a problem, but Microsoft is now chasing at about a 2-4 year disadvantage.
It has nothing to do with "cool". I don't use COBOL not because it isn't "cool". I don't use COBOL because it doesn't have useful hooks into the libraries I need to use on a day to day basis. Same with Microsoft tech.
Its not because its free. (Score:5, Insightful)
Re:Its not because its free. (Score:5, Insightful)
This is the big item for me.
I can still write C code in emacs and compile with the same makefile under gcc if I wanted to. I can still call the same POSIX libraries. I don't have to throw away everything I know and start all over every few years. I have learned new languages, like Python and Java and new APIs because they were pertinent to what I was trying to accomplish.
Microsoft seems to make a big marketing splash on a development toolset or language or API every few years only to throw it away with the "next big thing". For someone who's been programming long enough this gets to be a tiring waste of time.
Re:Its not because its free. (Score:5, Insightful)
Re:Its not because its free. (Score:4, Insightful)
Microsoft seems to make a big marketing splash on a development toolset or language or API every few years only to throw it away with the "next big thing". For someone who's been programming long enough this gets to be a tiring waste of time.
Indeed. They even threw away an entire language, Visual Basic, much to the annoyance of all the companies that had invested millions in it (VB.NET is really not the same language, and don't get me started on the auto-conversion tools). With proprietary languages the vendor makes (often sweeping) changes that suit THEIR business plan rather than addressing any pressing features their customers might really need. You can end up having to rewrite things pointlessly without adding any real value to your product. At the same time your competitors who chose to use something open, like Java or (and now Qt), are spending that time adding new useful features to their product. They are also able to offer their product across a much larger range of platforms.
Re:Its not because its free. (Score:5, Insightful)
Maybe... but the last 3 startups I've worked for it was 100% the free thing. When you're building web services that are going to scale to thousands of users and millions of transactions, you need hardware... and when each CPU you plop out there costs you $800+ in software licenses, it gets very expensive very fast, and linux is a no brainer.
Why not MS? Let me count the ways... (Score:5, Insightful)
MS has so many problems with FOSS, some of them major.
1. FOSS is free as in beer. And it is eternally free. Software developers, with the possible exception of ($LANGUAGE developers), aren't stupid - there is some IQ floor involved in software development. Even if you give crippleware away, developers know that if they use your stuff it is going to eventually cost them. And if they can get something of near equivalent functionality that is FOSS, they don't have to deal with ever paying the piper. That's more margin for you and yours.
This helps if you are a startup, if you just want to experiment, or if you want to sneak something in at work and not have to ask to spend money. Strange but true - it's orders of magnitude easier to get money from a boss in the form of time to work on something than it is to get authorization to spend equivalent actual dollars on it.
2. FOSS is open source by definition. If you come across some future unanticipated problem, there is potential to hack the code until it does what you need, if you have the skills.
3. Most FOSS has no vendor lock-in (other than stuff like MySQL). Meaning, your development platform can't jerk the rug out from under you by deciding that you are now going to use DAO or ADO, or .NET, or however they've decided to screw you over by obsoleting the work you've done. No vendor lock-in also means they can't dangle you upside down and see how much money falls out.
4. FOSS is often good, and keeps getting better because people keep contributing to it. Once you have used a bit of FOSS, you are often astounded by the quality and that encourages you to use more of it. And that experience leads a person to totally dispense with the "free = crap" heuristic. It's like drinking water from some unspoiled rainforest stream - it is both free and better than the commercial alternative. After a while your own heuristic becomes - "1. Search the FOSS world first. 2. If the best of what you find works well, stop looking."
5. FOSS has a passionate community. If you want help and can google, there is usually a good community around whatever FOSS it is you are interested in. In a genuine community, there is rarely a conflict between the creator of the software and the interests of the community. With a commercial solution, there is always that conflict - users want to pay less money, vendors need money to live.
6. FOSS is hassle free - you want to try it or use it, you just download it. You still have to learn how to use it, but that is no different from a proprietary solution.
7. FOSS OS (and non-MS OS) are renowned for being more stable, secure, powerful and easier to install than Windows once you know how. These attributes suit developers. Running FOSS on top of a FOSS OS is usually easier to install and use, better integrated, and more powerful. There is a virtuous circle going on there.
8. FOSS is trustworthy - you can see the code yourself, and fork it if you want. You may never do this but you know you can, and so do other people.
Why else does MS have a problem? Because university students WILL be exposed to some FOSS software if they do anything related to software. They will use commercial stuff too, but very likely they will learn many of the lessons above. At that point they've already swallowed the red pill. Even if they don't get exposure there their guru friends probably use FOSS.
I'll explain oppressive development environment (Score:5, Insightful)
If you want to write a C++ app in Visual Studio, the location of the additional directories for #includes is at the top of the C++ options. In the linker, the same option is somewhere towards the bottom. Why? It sounds small, but I'm already under the gun to get the code written and working, not futzing around with build settings. The problem was that you couldn't turn off these warnings in the general options, only per-project, which meant that I had to make stupid changes to stdafx.h just to turn off the warnings so that other developers wouldn't freak as well.
How about the auto-hide windows that seem to randomly decide to suddenly be pinned or to suddenly appear during unrelated actions?
Look, I'm a fan of Intellisense and all (when running on a powerful enough machine), but while VS2010 is "faster" than previous versions (almost as fast as VC++6), it purports to be a "rich" IDE that gets surprisingly sparse in places, and downright weird in others.
Visual Studio reminds me of guys who put racing stripes and thin tires and big mufflers on their Honda Civics and somehow convince themselves they've got a "race car".
Fine with me... (Score:4, Insightful)
I work in a mixed shop where most of the other devs are Ruby/Rails guys... they all see me as a "sellout" for using .NET.
Re:Fine with me... (Score:4, Insightful)
Really? "but at least Ballmer doesn't tell me I can't compile my code without forking him $100/yr."
No, he tells you you can't compile your code without forking him [sic] $550 in the first year and requiring an additional $500 for upgrades every 2 or 3 years. That's way cheaper!
"and he doesn't take 30% percent of whatever I might make selling my code."
But he also doesn't provide a free server to host your code, free testing before it is provided to users, or handling of credit card fees.
Apple isn't perfect, but don't tell us Microsoft is much if any better. (Score:3, Insightful)
Um, dude. You don't have to fork over anything to compile or run in an emulator. You do have to pay $100/year to run your software on the device and to ship it through the app store. And you can bet Microsoft will be charging for that, too. They have to make money somehow.
Well frankly (Score:4, Insightful)
Any "developer" who is a fanboy and will code only in their favoured language isn't worthy of the title of developer. They are a hack, or a code monkey, not a developer. A real developer will learn to understand how a computer works, at a fundamental level, and look at programming languages as different ways to solve a problem. They'll understand that there is not a best language because there is not one kind of problem. Some are better for certain things.
Also a good developer will probably learn how to develop for multiple platforms. After all, while Linux is used a whole lot in the web world, MS rules on the desktop, so it would be to one's advantage to be able to code on both platforms. Furthermore, it would be to their advantage to do so in the tools that generate the best programs. For Windows, that is Visual Studio; for Linux it is (obviously) not.
So no, you aren't a sellout. I would say that if you focus only on .NET development you are being a bit too narrow, but learning it is a good thing. There is a lot of work for .NET devs. Companies want shiny GUIs for Windows things and .NET is a good way to deliver. The other "developers" will find that whining to the company and claiming they shouldn't do that won't work. Most companies are accustomed to telling you what you are going to do, not the other way around.
I have a friend who's a contract developer and he uses languages of all sorts. If you want something done in Windows, he defaults to .NET (using C# usually) since that works well on that platform. In Linux, it is Perl quite often since nearly every Linux distro ships with it. However if you wanted something speed critical, it'd probably be C++. He sees languages as tools to solve problems, and tries to choose the right one for the job. That doesn't mean he uses any and every language, of course, he's got ones he prefers, just that he has a bag with more than one tool in it and he tries to select the correct one.
Personally I have little to no respect for code hacks that want to trumpet The One True Language as the one they use. Who think it solves EVERY problem, who won't learn anything else. What it tells me is that they don't really understand programming. They've learned the syntax and grammar of a language without understanding the underpinnings. That is not a good situation and leads to bad code, shitty apps, and the kind of person who will say "That can't be done" about anything they don't understand how to do.
Cathedral For The Bizarre (Score:4, Insightful)
That language! Not "college students were not broadly exposed to our products", or "our outreach efforts fell short", but rather "...get access to kids...". MS has always been a cathedral, but sheesh, now they're even sounding like priests.
It is not me , it is you (Score:4, Insightful)
Dear Microsoft,
Today you sit and rue the fact that you have lost the developer base and, to
feel better about it, you label them as 'young and hip'. Here is some news:
Very few developers actually enjoy writing for windows. People have been
writing code on microsoft platforms since there are a huge number of people
who use microsoft products and ignoring the windows platform amounts to
ignoring a huge customer base which the developer could not afford to do.
We, as developers, never really enjoyed developing for windows -- it is just
that we did not have a choice.
Today however, the scene has been changing.
1. A large number of GUI-based applications have moved into the browser.
2. Windows servers are not really used in large technology companies.
They still are a dominant force in small to medium companies' IT
infrastructures; that is all Exchange and SharePoint. Any sane startup will
not consider Windows to host their servers.
3. Developers now are used to and are aware of desktop platforms which
work well and also are very good programming platforms. Macs have a robust
BSD backbone and Linux is, well, Linux. So everybody now has platforms
on which they can hack code and also play their movies.
4. Java provides for a development environment which can make pretty windows
without having to use developer studio.
So you have a scenario where Microsoft is not the only viable
desktop/laptop OS. Also, it is a terrible programming environment. So any
self-respecting developer will not run windows on his personal machine and
as a result will want to push it out of his workplace too. The process
started a long time back. You guys are feeling it now.
So we come to the next question: Why do we hate writing code for windows ?
I will not cite the BSOD. The "windows crashes" and "windows is not stable"
are old arguments.
Windows is much much more stable than it used to be. In all honesty it has
been ages since I last saw a BSOD. We hate writing code for the Windows
platform because it sucks as a development platform.
1. The design is not based on any implementation of UNIX. That makes any CS
student uncomfortable. I am not saying that the developer is
uncomfortable because Windows has a bad programming interface (which, btw, it
is). I am saying that it makes him uncomfortable because he cannot
recognize the patterns he used to learn his computer science. He cannot refer to
the kernel source when he runs into a thorny problem; he cannot go online to
get a real educated answer to his problems. It is unfamiliar, and since he is
not used to the paradigm, the developer finds it inelegant.
2. The second point is that it IS a bad programming interface. Until very
recently it did not have a scripting interface worth its salt, it has an extremely
convoluted device driver infrastructure, and it has that terrible thing called
the registry.
3. The development environment is not free as in beer or as in speech. It
is a closed, heavily controlled environment in which the developer has no say,
and is an interface which changes very frequently. You can get away with
changing rapidly and being open (which Linux does) but you cannot get away
with being closed and also changing every 2 years. It drives the developer
mad.
4. Emacs and Vim do not integrate well with Visual Studio :)
I may not be hip.. (Score:5, Insightful)
New Meme: Rage Quitting .NET (Score:4, Insightful)
They never really wanted them (Score:5, Interesting)
MS Tool Suites Have Always Sucked (Score:5, Insightful)
______
Those of you who know me in even the most casual way may be shocked to hear me say: I want to do some programming in Windows.
One would think that one would simply go out and download a compiler and an SDK (a bit fat wad of compiler headers, link libraries, and documentation) -- or perhaps buy a CD-ROM containing same -- and you'd be completely set to develop any kind of Windows application.
You'd be wrong.
What's available is a hopelessly confusing mashup of tools to develop native applications, VisualBASIC applications, .NET virtual machine applications, Web applications (for IIS only, natch), database-driven applications and, if you're very nice and pay lots of money, Microsoft Office plugins. And, just to make it hard, all these tools are hidden underneath a cutesy Integrated Development Environment which passively-aggressively makes it as cumbersome as possible to figure out what's actually going on under the hood -- you know, the sorts of things a professional programmer would want to know.
Okay, fine, just give me the tools and docs to develop native C/C++ apps. "Oh, no no no," says Microsoft, twirling its moustache, "You have to pick one of our product packages." Packages? "Oh, yes, there's Visual Studio Express, Visual Studio Standard, Visual Studio Professional, Visual Studio Team System, and Visual Studio Grand Marquess with Truffles and Cherries."
After looking at the six-dimensional bullet chart of features, I think that Visual Studio Express may get the job done, since it comes with a C/C++ compiler and will compile native apps. "Quite so," says Microsoft whilst placing a postage stamp on a foreclosure notice, "provided you're only writing console apps -- you know, programs that run in a command window. If you want to develop full Windows GUI apps, then you'll need additional libraries which aren't necessarily included with Visual Studio Express."
Ah, so VS Express will only let me develop "toy" applications and, if I want to do anything more advanced, I should download and install the complete Windows SDK which, amazingly, is free. "Well, you could do that," says Microsoft after tying Nell to the sawmill. "But the SDK doesn't really integrate very well with the IDE. And there's still some link libraries which only ship with Visual Studio Standard or better."
Fine. I'll look at buying Visual Studio Standard. And then maybe I can get to improving this device driver. "Device driver!?" says Microsoft, blotting the blood spatters off its hat. "Heavens, no, that's not included with anything. You need to download and install the Driver Development Kit for that. And you may or may not need the DDK for each version of Windows you intend to support. Not to worry, however; they're all free downloads..."
*fume* And people wonder why I've avoided this clusterfuck for the last 25 years. Ever since the Visual Studio 6 days, I've been smacked in the face with this braindamage every time I've tried doing the slightest exploration of Windows development.
So: Can anyone with modest Windows development experience tell me what Visual Studio flavor to get and which addons to download if I want to:
No low-hanging fruit on the desktop (Score:4, Insightful)
There are lots of cool things to do as desktop applications. But the easy and useful ones have been done.
Want to write a better word processor? Users will expect it to be at least as good as OpenOffice even if you give it away. If you want to charge for it, it needs to be better than Word.
How about a 3D animation program? Big job. Yours has to be at least as good as Blender, and if you want to sell it, up there with Maya.
CAD? You're competing with SolidWorks, Inventor, and ProEngineer. Yes, there are small startups in CAD; check out OpenMind [openmind-tech.com], makers of HyperMill [youtube.com]. That's how good a new desktop program has to do to make it today.
Nobody is going to buy your IRC chat client as a desktop app.
Also ... (Score:4, Funny)
Re:Misses the point (Score:5, Insightful)
Frankly I think smart phones, tablet computing and the like are going to substantially shake up the landscape. It certainly is making me consider mine, at least as far as web development and the like. The tools that better allow me to write portable apps that are not chained to an operating system, screen type and the like are going to become much more attractive. This will extend, inevitably, towards native apps. Microsoft may have controlled the desktop, but in the newer platforms coming out, it is woefully behind the times.
Re:Misses the point (Score:4, Insightful)
"Flame away, those who are so inclined, but I have never heard anyone say they would prefer to program in Objective-C over Java, C++, Python, or the
.Net languages."
I'm one who prefers Objective-C to Java, C++, Python, or
.Net languages.
Good lord, learning Objective-C is easy. Learning any language is easy. It's the frameworks and libraries and idioms that are the hard part. A programmer who resists learning a language as easy as Objective-C is like a child who refuses to try any food other than their staple chicken nuggets and spaghettios.
Re:All the cool kids just want one thing (Score:5, Insightful)
Re:All the cool kids just want one thing (Score:4, Insightful)
> The problem is that to unseat the iPod, it had to be a fantastic player.
No. To unseat the iPod it had to be perceived as a fantastically cool player. How well it actually worked was largely irrelevant.
Re: (Score:3, Insightful)
We did not get access to kids as they were going through college
Anybody else find that just a LITTLE creepy? "Getting access" sounds like something a Catholic priest and/or a cult leader would say. Perhaps employing clueless marketroids like Bob might have something to do with the problem as well.
Not really. It's the reason why my high school had Apple ][s and my college had a FACOM. Manufacturers spend their marketing budget on subsidized sales to schools, so that students want to work on their platform.
I still wound up working on DEC though.
Re:Bob Muglia == creepy (Score:5, Insightful)
The learning curve is nearly non-existent now with GUIs.
Re: (Score:3, Insightful)
Creepy? No
I was going to post a comment with that quote as the context.
I'm wondering what exactly they mean though. My children went through high school and went through or are going through college using Microsoft products -- but it's Word mainly and some Excel.
I wonder how they could have failed to 'access [the] "kids,"' except perhaps by deliberately ignoring them.
I develop for Unix/Linux and most of the recent college grads I encounter certainly don't know Unix/Linux! So what do they use in college then?
Re:Yeah...wrong (Score:4, Insightful)
If you are a business or institution, whose focus and skillset isn't primarily technical, that needs to roll out a whole bunch of desktops for word processing and assorted off-the-shelf applications, along with email and central logins and stuff, Microsoft can make you a relatively compelling offer. There will be some annoying issues of various sorts; but the off-the-shelf software will run on Windows clients (and the boxes will be cheap because HP and Dell are always cutting each other's throats), Windows admins are fairly common and comparatively inexpensive, and things like Exchange and AD make it (comparatively) trivial to get a bunch of people running more or less homogeneous desktop settings, logging in on different machines, and scheduling boring meetings with each other.
If, on the other hand, you are some tiny techy startup, none of that is nearly as relevant or interesting, or worth the money.
hw_dataflash.h File Reference
Dataflash HW control routines (interface). More...
#include <cfg/compiler.h>
Go to the source code of this file.
Detailed Description
Dataflash HW control routines (interface).
Definition in file hw_dataflash.h.
Function Documentation
Data flash init function.
This function initializes everything needed to drive a dataflash memory. Generally it needs to initialize the pins that drive the CS line and the reset line.
Definition at line 56 of file hw_dataflash.c.
Chip Select drive.
This function enables or disables a CS line. You must implement this function in compliance with the dataflash memory datasheet, so that the driver enables the memory when the
enable flag is true, and disables it when it is false.
Definition at line 81 of file hw_dataflash.c.
Reset data flash memory.
This function sends a reset signal to the dataflash memory. You must implement it in compliance with the dataflash memory datasheet, so that the driver asserts the reset pin when the
enable flag is true, and deasserts it when it is false.
Definition at line 108 of file hw_dataflash.c.
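The three hooks documented above form a small porting layer. As a sketch only: the function names, the pin masks `CS_PIN`/`RESET_PIN`, and the stand-in `port_out` register are illustrative assumptions, not the actual BeRTOS API; a real board file would write to the MCU's GPIO registers instead.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical pin assignments -- replace with your board's wiring. */
#define CS_PIN     (1u << 2)
#define RESET_PIN  (1u << 3)

/* Stand-in for a memory-mapped GPIO output register. */
static uint32_t port_out;

/* Init: drive both lines inactive (both are active-low in this sketch). */
void dataflash_hw_init(void)
{
    port_out |= CS_PIN | RESET_PIN;
}

/* Chip select: active-low, so enabling the memory clears the pin. */
void dataflash_cs_enable(bool enable)
{
    if (enable)
        port_out &= ~CS_PIN;
    else
        port_out |= CS_PIN;
}

/* Reset: active-low, same pattern as chip select. */
void dataflash_reset_enable(bool enable)
{
    if (enable)
        port_out &= ~RESET_PIN;
    else
        port_out |= RESET_PIN;
}
```

Whether the lines are really active-low, and which pins they map to, must come from the memory's datasheet and the board schematic, as the documentation above says.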
Community discussion forum
How to create this
I need help
I want to create this:
Now I have tried to use Flash but cannot get on with it. Any ideas on an easier application to do this?
Cheers
It won't allow me to insert this as an image for some reason.
Thushan Fernando (Melbourne, Australia), 4 years ago:
I guess Flash would be the easiest method to make this (and the lightest, as it's all vectored and the size is small)
you don't really have to use Flash itself to make it, try some alternatives like SWiSH, which makes it a lot easier to work with Flash
there are lots of effects (some you see on that gif) which will be quick to implement
Hi ukmedia
this is easy to do in SVG. use SVG [1].
the installation base of SVG is growing rapidly, now that Opera supports native SVG [2].
Mozilla/Firefox native support will also be switched on by default this year [3].
Konqueror has its own implementation which is expected to move over to Safari [4].
almost all modern mobile phones support it [5].
so there is no reason to use Flash for simple things like this.
you can code it by hand or use an editor like Beatware Mobile Designer.
[1]
[2]
[3]
[4]
[5]
[6]
p.s.: i think flash is not a good idea here since flash does not know text.
with svg, however, you are able to create a multilingual version of that banner.
the text is indexable and searchable, plus you can use the exact same version for mobiles,
since it scales.
use SVG and have fun
bernd
i was just bored, so i recreated this banner in SVG.
the zipped version (svgz) is only 691 bytes.
have fun
bernd
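bernd's recreated banner isn't attached here, but a standalone SVG banner of the kind he describes might look like the following sketch. The dimensions, colours, and text are invented for illustration; only the `xmlns` namespace URI is the standard one.

```xml
<?xml version="1.0" standalone="no"?>
<svg xmlns="http://www.w3.org/2000/svg" width="468" height="60">
  <!-- background -->
  <rect width="468" height="60" fill="#003366"/>
  <!-- real, selectable text: swap the string for other languages -->
  <text x="20" y="38" font-family="sans-serif" font-size="24" fill="#ffffff">
    Example banner text
  </text>
</svg>
```

Because the text is a real `<text>` element rather than pixels, it stays searchable and translatable, which is the point bernd makes above.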
Thushan Fernando (Melbourne, Australia), 4 years ago:
ah yes quite true... SVG is quite cool... screw Flash (haha), follow bernd
while we are at it: today i was happy to find google indexing SVG files.
so SVGs are not only indexable, but they actually do get indexed now.
If we could possibly see IE7 implementing SVG, that would be so cool.
i know that MS has its own implementation, which is used in MS Visio.
does anybody see a way to convince MS to include that instead of, or additionally to, VML?
- 4 years ago: I have used SVG in one of my .NET web apps. My shop is all IE 6 and it automatically prompts to install the Adobe SVG plugin, which installs almost instantly, and after that all is fine. SVG is a standard and I would say it's safe to use for a site likely to be visited by different browsers.
- I have not used SVG before on any of my sites. If users have to download a plugin to view my site I would rather leave this, as I myself hate plugins from the web.
- Hi
hollystyles:
is that project, where you use SVG, online ? if yes would you post a link,please?
ukmedia:
as stated in a previous post, Mozilla, Opera, Konqueror (Safari?) have their own
native implementation, which means you don't have to download a plugin. (that's why i wish
MS would also implement it in IE7; i would be really upset if they don't, since
they do have some kind of svg implementation in MS Visio.)
besides that, there are some Java SVG viewers which you could use as an applet in your page, for example
so no plugin required. (look at the examples on that site with a browser without svg support)
cheers
bernd
- 4 years ago: OK, what about an animated GIF then? I created this one in Fireworks MX.
bernd,
Sorry, it's on a private intranet. It's a database-driven bar graph with animation, using ASP.NET and VB.NET to generate SVG XML on the fly. Unfortunately I don't have a public .NET host to showcase it right now. I could maybe do a 'save as' on the generated page and ftp that up to my webspace... hmmm... I'll get back.
- that's also ok, besides that it's 35 times larger than the svg, and is not scalable.
- 4 years agoBernd,
here's the link:
Firefox prompts me to install plugin, but so far singularly fails to do it ! DOH.
Opera needed me to copy a dll and .zip from the Adobe svg installation into it's plugin folder, yuck.
IE6 just put up a box saying something like 'Do you want to install Adobe blah..." said yes, few seconds later.. bosh a graph!
- >>Firefox prompts me to install plugin, but so far singularly fails to do it ! DOH
yes, that's a known problem with ASV3.
to use ASV in Firefox, download ASV6 beta
and then follow the instructions on this page:
>>Opera needed me to copy a dll and .zip from the Adobe svg installation into it's plugin folder, yuck
you won't need a plug-in if you download Opera 8.0
- 4 years ago
Er.. I just followed instructions here:
The SVG is an XML document, embedded in an XHTML document. So it is text, embedded in object tags that do specify "image/svg+xml"
I did find a discrepancy in the DOCTYPE element at W3C on one page:
They show this:
Code:<?xml version="1.0" standalone="no"?>
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN"
"">
And on the next page:
it's changed to this:
Code:<?xml version="1.0" standalone="no"?>
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 20000303 Stylable//EN"
"">
>The SVG is an XML document, embedded in an XHTML document. So it is text, embedded in object tags that do
>specify "image/svg+xml"
correct, but you also have to make sure that the server is sending the right mimetype!
you can safely drop the DOCTYPE declaration, as there are no validating SVG parsers out there.
what is more important is the namespace (xmlns = XML namespace). a minimal document would look like this:
<?xml version="1.0" standalone="no"?>
<svg xmlns="">
</svg>
- 4 years ago
>>correct but you also have to make sure that the server is sending the right mimetype !
The server belongs to my ISP, how can I check what it's serving?
Ok I have ditched the DOCTYPE, and added the namespace.
I put the version 3.0 dll and zip in firefox's plugins dir, this stopped the plugin wizard but I just got blank page.
I installed Adobe SVG 6.0 but this broke IE.
So I copied the version 6.0 dll and zip to firefox and ditched the version 3 files, then uninstalled Adobe SVG 6.0 and re-installed version 3.0.
I can browse my page fine in IE and Opera, but Firefox still just shows a blank white page.
hi hollystyles
>>The server belongs to my ISP, how can I check what it's serving?
you can check this with this simple script:
Set xml = CreateObject("msxml2.xmlhttp")
xml.open "GET", "", False
xml.send ""
msgbox(xml.getresponseheader("content-type"))
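If you don't have the Windows scripting host handy, the same header check can be done from a shell. This is a sketch; in practice you would pipe real headers from `curl -sI http://yourserver/banner.svg` (the URL is a placeholder) into the helper below.

```shell
# Extract the Content-Type value from an HTTP response header dump on stdin.
content_type() {
  grep -i '^content-type:' | tr -d '\r' | cut -d' ' -f2
}

# Example with a canned response; a real check would feed it from curl -sI.
printf 'HTTP/1.1 200 OK\r\nContent-Type: image/svg+xml\r\n\r\n' | content_type
```

For SVG to render, the value printed should be image/svg+xml rather than text/plain.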
if its an apache server, you can add a .htaccess file to your root folder containing the lines
AddType image/svg+xml svg svgz
AddEncoding x-gzip svgz
concerning ASV6: well, it works fine with IE6 and firefox here...
you might be interested in the latest mozilla with native SVG support
it can be downloaded here:
just grab the file mozilla-win32-svg-GDI.zip
if you run that build for the first time, you have to enable svg first. to do so:
1. run mozilla
2. type about:config in the address field of the browser
3. in the search field that appears, type svg
4. double-click on the variable that appears to set it to true
more info can be found here:
hth
bernd
- 4 years ago
Bernd,
Ok I have grabbed mozilla-win32-svg-GDI.zip
I think I need some sort of install script or bat that puts the built files where they need to be, but I'm struggling to see where I get it and how to run it. Can you point me in the right direction?
no, you don't need an installer; just unzip, then go to the folder that contains the unzipped files.
there is a file called mozilla.exe in the /bin folder. just double-click that file.
- 4 years ago
Bernd,
Ok sorry I lied, I downloaded FIREFOX-win32-svg-GDI.zip
Anyway there's a firefox.exe so I double-clicked that, and it opened a Gecko browser. I did the about:config thing and set svg enabled to true. Opened Firefox from my regular shortcut and did about:config, and the setting showed true there as well.
I created a .htaccess file in the root of my webspace (where my SVG document also resides) with the two lines you specified:
AddType image/svg+xml svg svgz
AddEncoding x-gzip svgz
So now when browsing I get an embedded object with scroll bars, but still the text of my SVG document and not the image.
So I guess I'm still having trouble with content type?
- hi hollystyles
well, yes, it seems to be a problem with your ISP. you should contact them and ask them to add the correct mimetype for svg.
your server is still sending text/plain. it could be that it's not an apache server, or that they switched off that feature.
are svg files loaded locally from the hard drive displayed correctly?
p.s.: if you use the firefox build, be careful: if there is already a firefox without native svg running, and you click
the firefox.exe in that bin folder, another instance of ff without svg support will be loaded.
cheers
bernd
|
http://www.developerfusion.com/forum/thread/25799/
|
crawl-002
|
refinedweb
| 1,738
| 82.54
|
Last Updated on December 14, 2020.
We use email client apps on our phones to access our emails seamlessly. Gmail is one of the most popular email clients out there. In this blog post, let's see how to prompt a user to open their email client's compose section with our own delivery email address, subject, and body in React Native.
If you have a feedback section in your React Native app, then this feature can be really useful. The user does not need to remember the email address, as we pass the 'to' email address, subject, and body to their email client.
The React Native API Linking helps you to interact with both incoming and outgoing app links. We can use Linking API to open other apps including email client apps.
import { Linking } from 'react-native';

Linking.openURL('mailto:support@example.com?subject=SendMail&body=Description');
Here, the address after mailto: is the recipient (delivery) email address. subject is where you should give the subject line, and body carries the description or message text.
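Subject and body text often contain spaces or line breaks, which must be percent-encoded or the mailto: URL will break. A small sketch (buildMailto is a name made up here, not a React Native API) using the standard encodeURIComponent:

```javascript
// Build a mailto: URL, percent-encoding the subject and body.
function buildMailto(to, subject, body) {
  return (
    'mailto:' + to +
    '?subject=' + encodeURIComponent(subject) +
    '&body=' + encodeURIComponent(body)
  );
}

console.log(buildMailto('support@example.com', 'Hi there', 'Line 1'));
// mailto:support@example.com?subject=Hi%20there&body=Line%201
```

The resulting string can be passed straight to Linking.openURL.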
Following is the complete React Native example, which opens an email app such as Gmail.
import React from 'react';
import {View, Button, Linking, StyleSheet} from 'react-native';

const App = () => {
  return (
    <View style={styles.container}>
      <Button
        title="Share"
        onPress={() =>
          Linking.openURL(
            'mailto:support@example.com?subject=SendMail&body=Description',
          )
        }
      />
    </View>
  );
};

const styles = StyleSheet.create({
  container: {
    flex: 1,
    justifyContent: 'center',
    alignItems: 'center',
  },
});

export default App;
That’s it. I hope this blog post helped you!
https://reactnativeforyou.com/how-to-prompt-to-compose-email-with-given-subject-and-body-in-react-native/
Awake is called when the script instance is being loaded.
Awake is called either when an active GameObject that contains the script is initialized when a Scene loads, or when a previously inactive GameObject is set to active, or after a GameObject created with Object.Instantiate is initialized.
Use Awake to initialize variables or states before the application starts.
Unity calls Awake only once during the lifetime of the script instance. A script's lifetime lasts until the Scene that contains it is unloaded. If the Scene is loaded again, Unity loads the script instance again, so Awake will be called again. If the Scene is loaded multiple times additively, Unity loads several script instances, so Awake will be called several times (once on each instance).
For active GameObjects placed in a Scene, Unity calls Awake after all active GameObjects in the Scene are initialized, so you can safely use methods such as GameObject.FindWithTag to query other GameObjects.
The order that Unity calls each GameObject's Awake is not deterministic. Because of this, you should not rely on one GameObject's Awake being called before or after another (for example, you should not assume that a reference set up by one GameObject's Awake will be usable in another GameObject's Awake). Instead, you should use Awake to set up references between scripts, and use Start, which is called after all Awake calls are finished, to pass any information back and forth.
Awake is always called before any Start functions. This allows you to order initialization of scripts. Awake is called even if the script is a disabled component of an active GameObject. The example scripts Example1 and Example2 work together, and illustrate two timings when Awake() is called.
To reproduce the example, create a scene with two GameObjects Cube1 and Cube2. Assign Example1 as a script component to Cube1, and set Cube1 as inactive, by unchecking the Inspector top-left check box (Cube1 will become invisible). Assign Example2 as a script component to Cube2, and set Cube1 as its
GO variable.
Enter Play mode: pressing the space key will execute code in Example2.Update that activates Cube1, and causes Example1.Awake() to be called.
using UnityEngine;
// Make sure that Cube1 is assigned this script and is inactive at the start of the game.
public class Example1 : MonoBehaviour
{
    void Awake()
    {
        Debug.Log("Example1.Awake() was called");
    }

    void Start()
    {
        Debug.Log("Example1.Start() was called");
    }

    void Update()
    {
        if (Input.GetKeyDown("b"))
        {
            print("b key was pressed");
        }
    }
}
Example2 activates Cube1, which causes Example1.Awake() to be called. The Space key is used to perform this:
using UnityEngine;
public class Example2 : MonoBehaviour
{
    // Assign Cube1 to this variable GO before running the example
    public GameObject GO;

    void Awake()
    {
        Debug.Log("Example2.Awake() was called");
    }

    void Start()
    {
        Debug.Log("Example2.Start() was called");
    }

    // track if Cube1 was already activated
    private bool activateGO = true;

    void Update()
    {
        if (activateGO == true)
        {
            if (Input.GetKeyDown("space"))
            {
                Debug.Log("space key was pressed");
                GO.SetActive(true);
                activateGO = false;
            }
        }
    }
}
https://docs.unity3d.com/2020.3/Documentation/ScriptReference/MonoBehaviour.Awake.html
Are there any plans to start supporting union types, similarly to how it's done in Dotty (e.g., Int | Long | Float | Double)? If yes, is there any time estimate?
If not, what would be the best way to implement the following functionality in Scala 2.12?
Suppose we are interacting with a native C++ library supporting both Float32 (i.e., Float) and Float64 (i.e., Double) data types. We want to define a DataType trait in our library, subclassed by DataType.Float32 and DataType.Float64 that implement functionality such as casting for example. In that trait we want to have an abstract type T, which the DataType.Float32 will override with value Float and DataType.Float64 will override with value Double. In that trait we also define a couple methods:
def cast[V <: Float | Double](value: V): T
def setElementInBuffer(buffer: ByteBuffer, index: Int, value: T): Unit
def getElementFromBuffer(buffer: ByteBuffer, index: Int): T
T here can only be either Float or Double, but the compiler does not know that. Let's say we read from one buffer of some data type, cast the elements, and write in a buffer of the other data type. Ideally we want cast to be specialized for the primitives. For type V, above, the constraint can be enforced by something like the solution of Miles Sabin.
However, currently there is no way to make the compiler aware that T can only be Float or Double. We could use a witness-style inheritance-based pattern, but that would force boxing of the primitive values and thus be inefficient. Defining type T <: Float | Double (like in Dotty) would be ideal. Is there a way to mimic that behavior currently in Scala, without boxing the primitives?
Note that parameterizing the DataType trait with T and having a context bound there does not work. The reason is this: suppose a class Tensor has a data type but holds a reference to a C++ Tensor with an integer representing its data type. Then the way we obtain the data type is by calling a JNI function that returns that integer, which we then convert to a DataType[_]. This makes the compiler unable to verify that one tensor's data type returns elements that can be fed into another tensor's buffer (after casting to the appropriate type).
I hope this application description is sufficient for understanding the problem, but I can provide more details if necessary.
Thank you!
I am actually of a mind to submit as SIP a standardization of Scala.js' pseudo union type. This would allow it to work on other platforms, as well as receive better treatment by the compiler on some aspects (e.g., in pattern matching), without having to support them in the typechecker/type system per se. I haven't gotten around to doing so yet, though.
I wasn't aware of that implementation. It's pretty cool. However, for my use case there is a problem. I realize I need an "exclusive-or" and not an "or" of types, after all. This means that "Int | Double <: Int | Float | Double" should evaluate to "false". I'm not sure if the scala.js union type can be converted to achieve that functionality. Do you have any idea if this is possible? My impression is there would need to be evidence that "not A <: B", aside from just "A <: B", and I'm not sure what the base case should be for that in your code.
Then, I would need to be able to provide implicit evidence for each type in the XOR separately (e.g., "IntCastHelper", and "DoubleCastHelper"). My impression is that I need an implicit function providing evidence for a type "T" that does pattern matching on the type and provide a different object for each type in the XOR. This needs to be an exhaustive pattern matching on each of the types defined in the XOR.
Does my description make sense?
Thanks!
Why not use scala.Either?
Because it's a completely different thing? In particular, it doesn't have the following properties of union types:
- A | A =:= A, whereas Either[A,A] still wraps the value in Left[A] or Right[A]
- A | B =:= A whenever B <: A
- A <: A | B, i.e. a value of type A or of type B already is a value of type A | B, with no wrapping in an Either
So the key thing is the lack of wrapping in union types as compared to Either, both on the type level and at runtime.
Hello,
I think rather than union types, we need special super types for numerical types.
It is a big pity that the numeric types Double, Float, Int, Long, Short, Char, Byte have so much functionality in common (e.g. toDouble) and yet there is no super-type to cover that common functionality.
So please, let's have scala.Number.
And maybe also scala.FloatingPoint and scala.Integer. Perhaps even scala.Number32 and scala.Number64.
I'm not excited about union types A|B. Much added complexity for little gain. You'd either have to treat them like the common super-type or check the type. Why not just use the common super-type instead?
Best, Oliver
I don't know about you, but I definitely prefer Int | String instead of Any or CaseClassOne | CaseClassTwo instead of Product with Serializable.
Yes, you'll have to pattern match union types. But that's like bread and butter in functional programming. We're already doing that with ADTs (sealed hierarchies). Union type can be used as a simple, ad-hoc, no-overhead alternative to a sealed hierarchy or typeclass.
Type classes are the right approach for dealing with this. Check out Spire, if you haven't already...
With Any or Product with Serializable, I know where to look to find out what methods are available. If I see CaseClassOne|CaseClassTwo, I don't know.
If I see Int|String, I would know it is really Any, and I would think "Why the hell would someone do that?".
To access a common function a structural type would do as well. See
Any or Product with Serializable tells me absolutely nothing about methods being available. If I see CaseClassOne | CaseClassTwo then I know that I have to pattern match against these two and look into the API of these two. And I'm also safe against someone giving me a CaseClassThree, as would be possible when the type was Any.
Ok, that is true. But what is the use case for CaseClassOne|CaseClassTwo? And what is the use case for Int|String?
I would like to clarify something with respect to my original question because I feel that this conversation might not be exactly on point. My focus is on finding a way to achieve the desired type-safety while being highly efficient at the same time.
My current solution does involve a sealed trait hierarchy of value class wrappers, but that is inefficient (because of constant boxing/unboxing when accessing elements of underlying native tensors) and introduces a lot of boilerplate (because a value class wrapping an integer does not "behave" like an integer -- I have to implement a trait including all of the arithmetic and comparison operations, among other things). @jducoeur mentioned Spire and I have indeed checked it out. It seems to be using a similar approach to what I am doing right now, but that has the two problems I described. Please correct me if I am wrong.
What would ideally be desired is to be able to let the compiler know that type T is a numeric primitive, for example. Then if a class implements a function cast that takes in a value of type Float, and another function cast that takes in a value of type Double, etc., the compiler should be able to check whether cast can take a value of type T (i.e., all specialized cast implementation exist). In this case, no boxing/unboxing would be necessary and the code could be highly efficient. This would be more relevant to the approach that @curoli proposes. If those number super-types are part of the Scala library, then the compiler might be able to treat them in a specialized manner.
Spire is pure magic. It's super-cool what it can do. It would be nice if there was a simpler way that non-wizards can understand.
Yes, and @eaplatanios doesn't want those properties. He wants an exclusive or. Either is that, union types are not.
I think @jducoeur is right. I'll elaborate on why below, but the TL;DR is "use Spire and follow its guide". Below I answer this thread and explain why that seems the correct answer (ahem, modulo the fact that some requirements seem overly restrictive).
@eaplatanios clearly wants no wrapping, so Either is not OK. I'm not sure you can satisfy all the given requirements. But what he actually wants is Float | Double, so there might be specialized solutions for that. In particular, you want to use Spire and specialization as described on the Spire guide.
You asked about having def cast[V <: Float | Double](value: V): T.
If you don't want to box value, your generated bytecode will need to use two separate methods (overloaded or not):
def cast(value: Float): T
def cast(value: Double): T
You can also try to generate that via specialization:
def cast[T @specialized(Float, Double)](value: V): T
The advantage of specialization is that it produces the two overloads automatically, and then you can write callers without duplicating them. That is, you can just write
def castUser[T @specialized(Float, Double)](value: V) = ... cast(value) ...
instead of having two copies, one for each overload of cast:
def castUser(value: Float) = ... cast(value) ...
def castUser(value: Double) = ... cast(value) ...
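For comparison, the same two-overload idea can be sketched in plain Java; the Caster class and its conversions are illustrative, not code from the thread:

```java
// Two primitive overloads: the compiler resolves each call statically,
// so neither float nor double values are ever boxed.
public class Caster {
    static long cast(float value)  { return (long) value; }
    static long cast(double value) { return (long) value; }

    public static void main(String[] args) {
        System.out.println(cast(1.5f));  // resolves to cast(float)
        System.out.println(cast(2.75));  // resolves to cast(double)
    }
}
```

Each call site is bound to one overload at compile time, which is exactly what specialization generates for you automatically.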
Beware: I'm no expert on specialization, but it can have some issues (I'm no expert on those). However, the Spire guide recommends specialization and you only care about two types, Float and Double, so you'll have fewer problems.
On the JVM, the only alternative (in principle) would be to only support a single type (say Double), and widen Float to Double if needed. That avoids boxing. Something like this was the basis of miniboxing, though I suspect that avoided floating-point extension by using Float.floatToRawIntBits and friends. However, I understand that miniboxing is not stable enough for production use.
Using type parameters and specialization avoids union types, but let me answer anyway...
That's confusing to me—especially, the name "exclusive or" is. That thing is an exclusive-or, a value can't be both Int and Double. And if a value is an Int | Double, it is a special case of Int | Float | Double. A caller that handles the latter can also handle the former—its Float branch will not be triggered by such a value, but it can still be triggered by users producing Float. If you know you have no such users, you shouldn't need to write Int | Float | Double anywhere (I know I'm oversimplifying, but not really). The pattern matching will still be exhaustive—at worst, it has more cases than strictly needed.
Calls to structural types are compiled to use reflection, hence are much too slow. Some time ago somebody had a solution based on macros, but I don't know if it still works or how robust it is:
Yes, I should have mentioned the performance impact.
https://contributors.scala-lang.org/t/dotty-style-union-types-in-scala/733
Head First Java 2nd Ed. Problems
El Mitchel
Greenhorn
Joined: Jun 03, 2007
Posts: 1
posted
Jun 03, 2007 09:37:00
0
Hi, I'm new here. I do have some basic Java background, but very minimal. I first learned it at school about a year ago.
I just started reading Head First Java 2nd Edition and I already have problems with the first two chapters.
First was the PhraseOMatic from chapter 1. I followed the directions at the side and typed the source code word by word. The problem came when I compiled it: it displays "package System does not exist" when compiled in JCreator. It's the same in MS-DOS also. I left it for a while, to test it at school when I go there, to check if they get the same result. I moved on to chapter 2.
I also typed the code as it was written like this: guss is " +targetNumber); p1.guess(); p2.guess(); p3.guess(); guessp1 = p1.number; System.out.println("Player is over.");(); } }
When I compiled it, it displays this:
class GuessGame is public, should be declared in a file named GuessGame.java
class Player is public, should be declared in a file named Player.java
cannot find symbol class Guessgame
I'm 99% sure I don't have any typos there.
BTW, I'm using the latest version Java SE. Thanks for your solutions in advance.
Anupam Sinha
Ranch Hand
Joined: Apr 13, 2003
Posts: 1090
posted
Jun 03, 2007 09:50:00
0
I haven't read HF java recently. But I guess they must have mentioned that all these classes need to be in different files.
Either make all the classes except the class with the main method package-private (remove the public from public class) or declare them in different files in the same directory.
swapnil deo
Greenhorn
Joined: May 30, 2007
Posts: 6
posted
Jun 03, 2007 10:44:00
0
As far as I know, we can have only one public class per file.
In the code you mentioned, you have declared more than one "public" class in one file.
Try declaring all the classes in different files.
It might work out for you!
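The rule the replies describe can be sketched as follows; the file contents here are illustrative, not the book's actual GuessGame code:

```java
// GuessGame.java -- the file name must match the single public class.
public class GuessGame {
    public static void main(String[] args) {
        Player p = new Player();
        System.out.println("Player guessed " + p.guess());
    }
}

// Player is package-private (no 'public'), so it is allowed to share the file.
class Player {
    int guess() { return 7; }  // placeholder logic for illustration
}
```

Making Player public as well would reproduce the "should be declared in a file named Player.java" error.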
Jinny Morris
Ranch Hand
Joined: Apr 29, 2007
Posts: 101
posted
Jun 03, 2007 12:53:00
0
Your first error message
class GuessGame is public, should be declared in a file named GuessGame.java
is an error message with which I am all too familiar. What file name did you save your code under? It should have been saved as GuessGame.java. (Not, for example, as something like Chap2Game.java - which is what I tend to do ...)
Also, the advice about putting each public class in a file with
exactly
the same name as the class is exactly correct.
I am completely new to Java and am working my way through the same book. So far, pretty much all of the example code has worked for me.
Good luck!
I agree. Here's the link:
http://www.coderanch.com/t/407242/java/java/Head-Java-Ed-Problems
Prism & Silverlight: Part 10 - A Larger Example - "Email Client"
- Posted: Oct 27, 2009 at 3:31 AM
- 43,056 Views
- 38 Comments
It seems that all iPod versions of the videos in this series are copies of the first part ...
This is the good stuff. I've previously used Prism in conjunction with Unity to create composite WPF clients in this manner, but it's useful to see a run through of the whole process (disclaimer .. I watched the first 40 minutes or so .. will continue with it tonight). It's tackled at a refreshingly rapid pace whilst remaining clear and entirely comprehensible too. Bravo.
(Apologies .. I added this comment to the Part 1 comments section .. it was supposed to be here)
Thanks Kevin - glad you're finding it useful. To be honest, my original intent was that this would last around 30 minutes but it took me a good 2 hours to get to the point where I'd actually got something done which covered the various things I wanted to try and get in.
I hope the rest of your watching goes as well as the first half
A really excellent set of videos Mike. I signed-up just to say that!
Thanks Pete - glad to have you signed up and thanks for the feedback.
Hi Mike
Great videos!
I tried downloading Part 1-10 in iPod format. It seems that they are all copies of Part 1 :/
Hi, I keep getting "Media Failure" errors when watching in Silverlight. If I right click and "Save Target As" on the WMV it downloads about 10% then I get a "Connection was closed remotely" error. I'm in the UK if that has any relevance - perhaps this needs a mirror copy?
Hiya Mike,
Really a wonderful set of videos. I'm on the other side of the world in Hawaii.
I am also having Part 10 give me a media error about 85% of the way through.
Thanks!
Apologies if you're seeing media errors - I'll ping the Channel9 guys and see if there's something we can do here. I'll also download the video again myself and see if I see the same problem.
Hi Mike,
At around 30 minues you go and create your own commands for ListBoxSelectionChanged and TreeViewSelectionChanged - what are your opinions on using, for example, a TwoWay Binding to the ListBox SelectedItem property to the view model instead and then publish the event in the Setter? Not such a nice solution perhaps but possibly easier in many of these cases. Here's what I did for the MailListViewModel which seems to work fine:
Adding a binding as follows:
Then had the following property on the view model:
Can you see any problems with this approach?
One other thought - you use converters in some of the views - I know that some MVVM guys think that you should never need a converter in the View since the ViewModel should prepare the data correctly for it. I also know there are lots of discussions about MVVM currently and nothing really baked into the framework or tools to answer all the current questions - but just wondered what your opinion was.
Fantastic set of videos by the way - really detailed and incredibly useful - many thanks for the work you have put into them; I for one really appreciate the hands-on "live code" approach that you take...
Cheers
Ian
Hi Ian,
Nice to hear from you ( and see you briefly at the StackOverflow day the other week
). I agree that creating the custom command here did feel a bit heavy-handed, as it's quite a lot of code for what you get out of it, although it does keep that separation
very "pure". Someone dropped me a mail saying "What if I used Expression's behavior library for that?" which seemed like another approach and you have another one here which seems pretty reasonable to me ( and a lot less code
).
Thanks,
Mike.
Nice screencasts, very well explained, a great start to learn how to use Prism and Unity.
Bravo!
Is it possible to have the code source from all samples, like you did for the last sample?
(In order to play with).
Thomas.
Hi Mike
It's very easy to learn Prism with your screencasts. Thank you very much.
Do you plan to add a part 11, about the topic unit testing? Currently, I'm not sure how to unit tests the different assemblies from the email client. Do I need a bootstrapper in my unit test method and the region manager stuff to get it run for the unit testing?
Thanks in advance,
Beat Kiener
You've made it much easier for me to understand the concepts because of the attention to detail that goes into them, also how each video builds upon the previous one in the series. Only feedback would be to include either the source or configuration files at the least as it can be a bit tricky to read them from the video.
Thanks Mike!
Hello,
I have gone through the video you have mentioned.
Although many doubts regarding EventAggregator, UnityContainer, and RegionManager were solved, I failed to get an answer regarding:
In my project I have to navigate as if the "Shell" contains two regions: ToolBar + ((Common Module (LoginViews)) --------> Module A or Module B or Module C)
Here the "Common Module" can be considered the "login module". After login, the user will choose the application, namely "Module A", "Module B", or "Module C". Each module has a number of "sub-modules", say "View"s, except for the "login Module". Please help with an idea.
my shell .Xaml code:
<UserControl x:Class="Forte.UI.Shell.Page"
xmlns=""
xmlns:x=""
xmlns:Controls="clr-namespace:System.Windows.Controls;assembly=System.Windows.Controls"
xmlns:Regions="clr-namespace:Microsoft.Practices.Composite.Presentation.Regions;assembly=Microsoft.Practices.Composite.Presentation.Silverlight"
xmlns:
<Grid x:
<Grid.RowDefinitions>
<RowDefinition Height="Auto"/>
</Grid.RowDefinitions>
<ItemsControl x:Name="MainToolbarRegion"
Regions:RegionManager. --------------->This region contains "Menu" it will not change
<Grid Margin="5,135,5,0" HorizontalAlignment="Center" VerticalAlignment="Center">
<controls:ClippingBorder x:Name="MainRegion" ---------->This region load all module's view at present
</Grid>
</Grid>
</UserControl>
Hi Mike,
First of all, This is a great series post, I must say.
I am sure by now you have moved on to a lot of new stuff like Silverlight 4 etc.
I have seen this series of posts and it is immensely useful for someone starting to get insight into the PRISM framework.
In my new project, I am trying to use the PRISM framework and want to have a multi-platform application (that's the goal).
I was wondering if you could post this application, or parts of a last series of smaller posts which show "Test Driven Development using PRISM", a "test first" approach.
That will give even more of an idea, for new people trying to learn PRISM, of what to test and what not to test.
your comments are welcome.
Hope that would be possible.
Cheers.
hitesh.
Thanks for the great videos Mike - they removed the final hurdle to my adoption of Prism!
Imagine my disappointment though when, at about 123 minutes of the last video, I got a "media failed" error ... oh well, at least I have the source code :/
Maybe the wmv or whatever could be made downloadable?
Anyways, really enjoyed the series and got a lot out of it - cheers!
kurtmang - the video should be downloadable? try the little windows icon near to the "media downloads" text - that should let you download the video and pick up from where it failed for you ( apologies for that - I've no idea why the streaming version is failing here ).
Mike.
Thanks Mike - didn't see that!
Great Video Mike,
I followed through using VB and duplicated your efforts. It's starting to make sense for me now.
Thanks.
Just... Thank you!
This one is particulary great. The presentation is really good. I hope we'll got more and more
I don't have words to tell how great it is!! Amazing videos. These videos is the best tutorial I have ever seen.
Mike thank you very much for this amazing job!
H
Hi Mike. Thanks for the great set of videos. You are a fantastic tutor.
Mike, Thank you for sharing. You've done a great job of putting this technology together for us from the ground up. I am finding your videos very helpful. I have had the infamous "Part 10 Media Failure at 85%" twice in a row. I would like to watch the entire Part 10 video. Please try publishing the video again. Joe
So, the next day, from work, on a different computer I was able to watch the whole video.
Mike, I have downloaded your EmailClient solution. I was able to get it to build after I built my own Prism assemblies. When I run the app not much happens. I get a blank browser. You never mentioned the startup project. From the video it looks to be the EmailClient.Web. And with that selected my local web server starts up. But there is nothing in the browser. Here is the URL being opened: I have break points in the shell and bootstrapper, but they do not get called. What have I missed? Joe
Amazing tutorials Mike, thanks for sharing your technical expertise. I am new to SL and PRISM, but I found it really easy to understand and its very comprehensive. Kudos to you for such dedication.
Great video, has really helped me understand unity/prism better so I can decide how best to use it in my projects.
Maybe a minor question, but in all your Views you always set the DataContext in a lambda which fires on the FrameworkElement.Loaded event.
Is there a case when deferring this assignment until the Loaded event is better than just assigning immediately in the class constructor?
Those tutorials were awesome; they were very helpful for me, thanks.
It is most appreciated. I wonder if the developing team did see this. This is how it is done.
Start with .. and then .. wonderful.
Nowadays - include the WCF RIA Services with Prism - and show the last (show 10) again. Now there are commands & behaviors.
The one I want most is how to use it with a model-first approach. It seems to me that using the model-first approach dictates a wrong way to use Prism 4 and RIA services?
Is it POCOs or something else that is the starting point?
Sskip
Mike, thank you very much for these great videos! You've really produced the best training material I've seen so far! Andreas
For anyone who is remotely considering PRISM, I highly recommend that you watch all ten of Mikes videos, you will not be disappointed.
I recently started to dig deeper into the adoption of PRISM 4.0 and prior to watching these videos I thought I truely understood all that I needed to get started, but truthfully I am so much further ahead as a result of taking the time to watch all ten of these very well presented video tutorials.
Mike does a fantastic job of articulating a good portion of what you need to know in order to roll with PRISM.
Thanks Mike, job well done!
I really appreciated the videos and found them very useful. However, I converted the video 10 code to run with PRISM 4 but I also got the same problem that Joe Kahl described above. Has anyone gotten this application to run using PRISM 4? It would be instructive for me if I can get it to run and use the debugger and see how PRISM/Unity are working.
Thank you, Peter
Excellent post Mike! Keep the good work...
Excellent post Mike! Keep up the good work...
http://channel9.msdn.com/Blogs/mtaulty/Prism--Silverlight-Part-10-A-Larger-Example-Email-Client
Timers
Timers are lightweight objects that enable you to specify a delegate to be called at a specified time. A thread in the thread pool performs the wait operation.
Using the Timer class is straightforward. You create a Timer, passing a TimerCallback delegate to the callback method, an object representing state that will be passed to the callback, an initial raise time, and a time representing the period between callback invocations. To cancel a pending timer, call the Timer.Dispose function.
The following code example starts a timer that starts after one second (1000 milliseconds) and ticks every second until you press the Enter key. The variable containing the reference to the timer is a class-level field, to ensure that the timer is not subject to garbage collection while it is still running. For more information on aggressive garbage collection, see KeepAlive.
using System;
using System.Threading;

public class Example
{
    private static Timer ticker;

    public static void TimerMethod(Object state)
    {
        Console.Write(".");
    }

    public static void Main()
    {
        ticker = new Timer(TimerMethod, null, 1000, 1000);
        Console.WriteLine("Press the Enter key to end the program.");
        Console.ReadLine();
    }
}
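For comparison only (this is the JDK, not the .NET API described above), java.util.Timer offers a similar pattern, with scheduleAtFixedRate playing the role of the .NET Timer constructor's due time and period, and cancel playing the role of Dispose:

```java
import java.util.Timer;
import java.util.TimerTask;
import java.util.concurrent.atomic.AtomicInteger;

// First tick after 100 ms, then every 100 ms, until the timer is cancelled.
public class TickDemo {
    static final AtomicInteger ticks = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        Timer timer = new Timer();
        timer.scheduleAtFixedRate(new TimerTask() {
            @Override public void run() { ticks.incrementAndGet(); }
        }, 100, 100);
        Thread.sleep(550);   // let a few ticks happen
        timer.cancel();      // analogous to Timer.Dispose in .NET
        System.out.println("ticks = " + ticks.get());
    }
}
```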
http://msdn.microsoft.com/en-US/library/zdzx8wx8(v=vs.80).aspx
I use ReadyAPI 3.0.0 and I cannot upgrade. Consider the groovy script below, which is inside a ReadyAPI test.
import java.security.*;
import Custom.Authorization;
//some code.
obj = new Authorization();
obj.doSomething();
//more code.
I want to read the documentation of the Authorization class to learn a bit about it. To find the documentation, I need to know which package that class belongs to. In the ReadyAPI groovy editor, how do I find out which package Authorization actually belongs to? I'd guess that Authorization belongs to the Custom package because I see it in the import and I don't see any errors or warnings in the editor before I run the script. But it's also possible that the java security package could have classes related to authorization. In contrast, in IDEs like IntelliJ IDEA, we can simply look up the package of a class and much more. So, how do I find out which package is actually the source of the Authorization class, without running the code or digging into the folders that contain all our groovy scripts?
Hi @rajs2020,
As far as I see, ReadyAPI supports Code Completion. Perhaps you can get the information you are looking for from it:
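Although the original question asks for a way that avoids running code, it may be worth noting (my own suggestion, not from this thread) that a one-off line will also answer it, since every loaded class knows its own fully qualified name. A generic Java illustration, with ArrayList standing in for the custom Authorization object:

```java
// Prints the fully qualified name of an object's class, which includes its package.
public class PackageLookup {
    public static void main(String[] args) {
        Object obj = new java.util.ArrayList<String>();
        System.out.println(obj.getClass().getName());              // java.util.ArrayList
        System.out.println(obj.getClass().getPackage().getName()); // java.util
    }
}
```

The same calls work unchanged in a Groovy script, e.g. `log.info obj.getClass().name`.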
https://community.smartbear.com/t5/API-Functional-Security-Testing/Groovy-script-How-to-find-out-the-actual-source-or-package-of-a/td-p/209524
#include "nsCOMPtr.h"
This routine attempts to delete a directory that may contain some files that are still in use.
Resolves a relative path string containing "." and ".." with respect to a base path (assumed to already be resolved).
This routine returns the trash directory corresponding to the given directory.
This latter point is only an issue on Windows and a few other systems.
If the moveToTrash parameter is true, then the process for deleting the directory creates a sibling directory of the same name with the ".Trash" suffix. It then attempts to move the given directory into the corresponding trash folder (moving individual files if necessary). Next, it proceeds to delete each file in the trash folder on a low-priority background thread.
If the moveToTrash parameter is false, then the given directory is deleted directly.
If the sync flag is true, then the delete operation runs to completion before this function returns. Otherwise, deletion occurs asynchronously.
Definition at line 69 of file nsDeleteDir.h.
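The moveToTrash scheme described above can be sketched in outline. This is a hypothetical illustration of the rename-then-delete idea, not the actual Mozilla nsDeleteDir API, and all names are invented:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.Comparator;
import java.util.stream.Stream;

public class TrashDelete {
    // Rename the directory to a ".Trash" sibling, then purge it; with sync=true
    // the purge runs to completion before returning, otherwise on a background thread.
    static void deleteDir(Path dir, boolean moveToTrash, boolean sync) throws IOException {
        if (moveToTrash) {
            Path trash = dir.resolveSibling(dir.getFileName() + ".Trash");
            Files.move(dir, trash, StandardCopyOption.ATOMIC_MOVE);
            Runnable purge = () -> deleteRecursively(trash);
            if (sync) purge.run(); else new Thread(purge).start();
        } else {
            deleteRecursively(dir);
        }
    }

    // Walk depth-first (reverse order) so files go before their parent directories.
    static void deleteRecursively(Path root) {
        try (Stream<Path> s = Files.walk(root)) {
            s.sorted(Comparator.reverseOrder()).forEach(p -> {
                try { Files.delete(p); } catch (IOException e) { /* skip in-use files */ }
            });
        } catch (IOException e) { /* directory may already be gone */ }
    }
}
```

The rename makes the original name available again immediately, which is the point of the trash-sibling trick when some files are still in use.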
https://sourcecodebrowser.com/lightning-sunbird/0.9plus-pnobinonly/ns_delete_dir_8h.html
One of the most common issues that our users bring to our attention is the fact that the launch of the web page which contains the pivot also calls the web service and prefills (or at least it tries to prefill) the pivot with data.
That is a big issue because:
I think you see where I am going with this. We need a way to initialize the pivot WITHOUT data, showing the column headers and giving it a proper look.
The pivot configuration mentions all these columns we are waiting for… is what we are asking for really an impossible task?
Regards,
Serban
Hi Serban,
Thank you for writing to us.
In order for us to provide you with the most relevant solution here, could you please give us some more insights on the following points?
Just in case this is something close to what you want to achieve, a potential solution in this situation involves defining a dataSource property of the report object as a reference to a CSV file with defined hierarchies but no further data definitions:
itemID,price,color,country
Please see the attached screenshot to demonstrate the results of this approach.
We would be happy to hear your thoughts. Looking forward to hearing from you.
Best regards,
Mykhailo
Users do not want any initialization data, period. We don't want it either, if they would just discard it anyway.
Creating a header-only csv is actually harder than bringing some data. Our csv web services actually process json/xml, and apply xslt to results to create csv. No data returned by filters means no output, no headers, nothing. It would be just the same if we processed json, xml. We chose csv because of size. We hate your json solution because you want config metadata as the first member of the json data, not a different path of json metadata. That is VERY difficult to process. Why not jsonpath for metadata, jsonpath for data? It would be so much more intuitive, and we could plug in almost all existing json web services like that. Don’t wait for things to be right at the top of the json tree…allow jsonpath to tell you where to find data and metadata.
Why can't you allow a fake refresh: build the header row from the flexmonster json config, where all the columns are mentioned, and format it exactly as you would if you had received at least one row. That is the format we want, the same as it would be with data, but with no data.
As always, thanks much for the prompt answers.
Serban
Hi Serban,
Thank you for your swift response.
With JSON as your data source, there is another way to predefine the hierarchies and their captions – that is to use the mapping object.
This approach is better in your situation than defining a meta-object directly in your data, as in the report it stands independent from the data itself. Moreover, you can use it in combination with an empty data property of the dataSource object to create an empty grid with only hierarchy captions being displayed:
dataSource: {
data: [{}],
mapping: {
"itemID": {
type: "number"
},
"price": {
type: "number"
},
"color": {
type: "string"
},
"country": {
type: "string"
}
},
},
Although this is similar to what we’ve suggested with the headers-only CSV in our previous reply, this approach does not require you to load any external data at all – only the headers row is loaded from the report.
We’ve prepared a quick JSFiddle example to illustrate this:
For more info on Data Source & Mapping objects feel free to check out the following links:
Data Source:
Mapping:
We would be happy to hear your thoughts on this – please let us know what you think.
Best regards,
Mykhailo
Ok, data and mapping sound good but:
Can they be represented by functions with params that are SAVED in the config? I mean an equivalent of the url with querystring params? That is the biggest problem with this type of config… I've seen examples like data: getData(), but I haven't seen anything like data: getData({param1: "val1", param2: "val2"}) that survives the config save as json (as in json produced by Save, loaded with Open Local Report).
The datasource filename with its querystring (…&param2=val2) gets saved into the config json as is, with params and their values. I need "data" and "mapping" to do the same, allow their being produced dynamically. And again, assuming that will eventually happen, it should also take into account that the web service that produces the data/mapping already exists, and doesn't necessarily have the relevant data/mapping nodes in the root. In other words, I need:
dataSource: {
dataType: "json",
dataUrl: "",
dataJsonPath: "$.store.book [*].author",
mappingUrl: "",
mappingJsonPath: "$.store.book [*].authorMapping[0]"
}
I think you are following me now. Json paths are pretty much standard, something like here
As always, thank you for your prompt answer.
Serban
Hi Serban,
Our apologies for the slight delay in response here.
We just wanted to let you know that our team is currently working on your question and we are evaluating the possible solutions.
We will make sure to reach out to you as soon as possible with our response.
Thank you for your patience!
Best regards,
Mykhailo
Hi Serban,
Thank you for giving us time to look deeper into this problem.
After some discussion with our dev team, we’ve decided it would be reasonable to add the possibility to load the mapping object from a specific remote URL. Please note that the ETA for this feature is May 18th.
Speaking of implementing the mentioned JSON path-like functionality, it does not fully align with our current roadmap, so while we’ve added this to the customers’ wishlist, we cannot provide you with a specific ETA here. Still, we will make sure to let you know if there are any updates on this matter.
Please let us know if you have any other questions we can help you with at the moment.
Best regards,
Mykhailo
You guys are awesome. That should move things much faster towards the implementation of the json data sources, for everybody.
I understand a jsonpath dependency is a bigger decision. We thought a while about it, as well. Once we agreed to use it, though, there isn't a day we don't wonder how much harder it would have been without it. By the way, the most used version, a pure javascript function, is only 4kB.
For an angular project:
package.json:
dependencies: {
  ...
  "jsonpath": "^1.0.2",
  ...
}
app.component.ts:
imports:
import * as jsonPath from 'jsonpath/jsonpath';
constructor:
this.jp = new jsonPath.JSONPath();
usage:
this.jp.query(data, '$..Root.Response.Results')
(returns array of json, as flexmonster likes it)
As always, thank you for listening to us.
Serban
Hi Serban,
Thank you for your kind words, it is great to hear that this plan is suitable for you!
We will make sure to take your advice into account when considering the jsonpath functionality in our product and we will surely inform you in case there is any news on this.
In the meantime, do not hesitate to reach out with any other questions we can assist you with.
Regards,
Mykhailo
Hi Serban,
How are you?
We are writing to give you some updates on the feature of loading mapping from a remote source. Currently, our team is actively working on its implementation – however, due to its technical specificities, we will need a little bit more time to carry out all the necessary testing and make sure everything is working as expected.
With that in mind, we will have to postpone the release of this feature to our next minor update ETA June 1st. Please let us know if that works for you.
Looking forward to your response.
Best regards,
Mykhailo
No problem. Looking forward to your release, whenever that is available. Add that jsonPath as well, please, please, please, pretty please 🙂
Hi Serban,
We are happy to let you know that the mapping property now allows loading mapping from a remote source. Please see the following JSFiddle sample illustrating this:
This is available in the 2.8.8 version of Flexmonster:
You are welcome to update the component. Here is our updating to the latest version guide for assistance:
Please let us know if everything works fine for you.
Best regards,
Mykhailo
Hi there,
Great news.
Will get to play with the mapping feature soon. I see in the sample you sent it’s pointing data to a csv stream. Does this mapping work with json, as well? In other words, does it replace the top json entry which normally describes the metadata?
Hi Serban,
The Mapping object is the recommended approach for defining the structure and specificities of your input data.
Therefore, yes, it is prioritized in the report, which is why it overrides the meta-object if it is defined inside the JSON data set you are passing to Flexmonster as a data source. Naturally, this also means that you don’t have to define a meta-object in your JSON data if you are already using the Flexmonster Mapping object.
As always, let us know if you have any further questions we can help you with.
Regards,
Mykhailo
Hey,
You guys rock!
This is probably one of the best features you added in the last year. People may not realize, but this was a big problem when trying to use FlexMonster with already existing json enabled web services.
Almost all modern web services produce json. All you have to do now is to add a leaf describing the data, the metadata leaf, and then point the mapping url that Flexmonster just released to it. Or, better yet, create one common web service that serves as a repository of metadata. Either of these alternatives is fantastic.
As always,
Thank you much for being so responsive to our needs.
Serban
Hi Serban,
Thank you for your feedback, we are glad to hear everything is working well for you!
Speaking of your suggestions, we will let you know if we implement something like this in the future.
Do not hesitate to reach out if you have any other questions we can assist you with.
Regards,
Mykhailo
https://www.flexmonster.com/question/pivot-initialization-show-column-headers-without-receiving-data/
Description:
------------
Hello!
I know this feature request has been made a few times in the last years, but I
want to reopen this request to add arguments in favor to this feature request.
It would be really nice for developers to be able to call a function like this:
func($param1 => 'value1', $param2 => 'value2');
or a similar way.
The func function would be defined that way:
function func(...$options) {}
or another syntax.
This way, we've got no problem with documentation if we compare to the "array-way".
The goal of this syntax is to simplify the function call and get shorter code.
Not convinced?
Let's compare two web framework: one in a language that support named function
parameters and one that doesn't.
Drupal (PHP) and Django (Python).
Drupal uses the "array-way" as you recommend.
Here is the way Drupal allows us to create a database table model:
(go down to the code section)
And look at the "named-parameter-way" in Django in the Test script.
The code is almost five times shorter in python, because of the use of named
parameters.
It is the primary reason to include this syntax in PHP:
we can get shorter and cleaner code.
Thanks for considering this feature request once again.
And I hope you will make the right choice now.
I am open to argumentation.
Test script:
---------------
Python code almost equivalent to the Drupal code:
from datetime import datetime
from django.db import models
class Node(models.Model):
vid = models.PositiveIntegerField(default=0, null=False, unique=True)
changed = models.DateTimeField(default=datetime.now, db_index=True, null=False)
created = models.DateTimeField(default=datetime.now, db_index=True, null=False)
type = models.CharField(default='', max_length=32, null=False)
title = models.CharField(default='', max_length=255, null=False)
revision = models.ForeignKey('NodeRevision')
author = models.ForeignKey('User')
See <>.
https://bugs.php.net/bug.php?id=62787
Is Asynchronous EJB Just a Gimmick?
Blocking APIs can hurt your application's performance. So does using asynchronous EJBs help?
When a method is annotated with @Asynchronous, either void or a Future must be returned. An example of a service using this annotation is shown in the following listing:
@Stateless
public class Service2 {

    @Asynchronous
    public Future<String> foo(String s) {
        // simulate some long running process
        Thread.sleep(5000);

        s += "<br>Service2: threadId=" + Thread.currentThread().getId();
        return new AsyncResult<String>(s);
    }
}
The annotation is on line 4. The method returns a Future of type String and does so on line 10 by wrapping the output in an AsyncResult. At the point that client code calls the EJB method, the container intercepts the call and creates a task which it will run on a different thread, so that it can return a Future immediately. When the container then runs the task using a different thread, it calls the EJB's method and uses the AsyncResult to complete the Future which the caller was given. There are several problems with this code, even though it looks exactly like the code in all the examples found on the internet. For example, the Future class only contains blocking methods for getting at the result of the Future, rather than any methods for registering callbacks for when it is completed. That results in code like the following, which is bad when the container is under load:
//type 1
Future<String> f = service.foo(s);
String s = f.get(); //blocks the thread, but at least others can run
//... do something useful with the string...

//type 2
Future<String> f = service.foo(s);
while (!f.isDone()) {
    try {
        Thread.sleep(100);
    } catch (InterruptedException e) { ... }
}
String s = f.get();
//... do something useful with the string...
This kind of code is bad, because it causes threads to block meaning that they cannot do anything useful during that time. While other threads can run, there needs to be a context switch which wastes time and energy (see this good article for details about the costs, or the results of my previous articles). Code like this causes servers that are already under load to come under even more load, and grind to a halt.
So is it possible to get the container to execute methods asynchronously, but to write a client which doesn't need to block threads? It is. The following listing shows a servlet doing so.
@WebServlet(urlPatterns = { "/AsyncServlet2" }, asyncSupported = true)
public class AsyncServlet2 extends HttpServlet {

    @EJB private Service3 service;

    protected void doGet(HttpServletRequest request,
            final HttpServletResponse response) throws ServletException, IOException {
        final PrintWriter pw = response.getWriter();
        pw.write("<html><body>Started publishing with thread " + Thread.currentThread().getId() + "<br>");
        response.flushBuffer(); // send back to the browser NOW

        CompletableFuture<String> cf = new CompletableFuture<>();
        service.foo(cf);

        // since we need to keep the response open, we need to start an async context
        final AsyncContext ctx = request.startAsync(request, response);
        cf.whenCompleteAsync((s, t) -> {
            try {
                if (t != null) throw t;
                pw.write("written in the future using thread " + Thread.currentThread().getId() + "... service response is:");
                pw.write(s);
                pw.write("</body></html>");
                response.flushBuffer();
                ctx.complete(); // all done, free resources
            } catch (Throwable t2) {
                ...
Line 1 declares that the servlet supports running asynchronously - don't forget this bit! Lines 8-10 start writing data to the response, but the interesting bit is on line 13 where the asynchronous service method is called. Instead of using a Future as the return type, we pass it a CompletableFuture, which it uses to return us the result. How? Well, line 16 starts the asynchronous servlet context, so that we can still write to the response after the doGet method returns. Lines 17 onwards then effectively register a callback on the CompletableFuture which will be called once the CompletableFuture is completed with a result. There is no blocking code here - no threads are blocked and no threads are polled, waiting for a result! Under load, the number of threads in the server can be kept to a minimum, making sure that the server can run efficiently because fewer context switches are required.
The service implementation is shown next:
@Stateless
public class Service3 {

    @Asynchronous
    public void foo(CompletableFuture<String> cf) {
        // simulate some long running process
        Thread.sleep(5000);

        cf.complete("bar");
    }
}
Line 7 is really ugly, because it blocks, but pretend that this is code calling a web service deployed remotely in the internet or a slow database, using an API which blocks, as most web service clients and JDBC drivers do. Alternatively, use an asynchronous driver and, when the result becomes available, complete the future as shown on line 9. That then signals to the CompletableFuture that the callback registered in the previous listing can be called.
Isn't that just like using a simple callback? It is certainly similar, and the following two listings show a solution using a custom callback interface.
@WebServlet(urlPatterns = { "/AsyncServlet3" }, asyncSupported = true)
public class AsyncServlet3 extends HttpServlet {

    @EJB private Service4 service;

    protected void doGet(HttpServletRequest request,
            final HttpServletResponse response) throws ServletException, IOException {
        ...
        final AsyncContext ctx = request.startAsync(request, response);
        service.foo(s -> {
            ...
            pw.write("</body></html>");
            response.flushBuffer();
            ctx.complete(); // all done, free resources
        ...
@Stateless
public class Service4 {

    @Asynchronous
    public void foo(Callback<String> c) {
        // simulate some long running process
        Thread.sleep(5000);
        c.apply("bar");
    }

    public static interface Callback<T> {
        void apply(T t);
    }
}
Again, in the client, there is absolutely no blocking going on. But the earlier example of AsyncServlet2 together with the Service3 class, which use the CompletableFuture, is better for the following reasons:
- The API of CompletableFuture allows for exceptions / failures,
- The CompletableFuture class provides methods for executing callbacks and dependent tasks asynchronously, i.e. in a fork-join pool, so that the system as a whole runs using as few threads as possible and so can handle concurrency more efficiently,
- A CompletableFuture can be combined with others so that you can register a callback to be called only when several CompletableFutures complete,
- The callback isn't called immediately, rather a limited number of threads in the pool are servicing the CompletableFutures' executions in the order in which they are due to run.
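The non-blocking style the article advocates can be reduced to a few lines of plain Java SE, outside any container. The sleep stands in for a slow remote call, and the names are illustrative:

```java
import java.util.concurrent.CompletableFuture;

public class NonBlockingDemo {
    public static void main(String[] args) {
        // The "service": completes the future on a pool thread after a delay.
        CompletableFuture<String> cf = CompletableFuture.supplyAsync(() -> {
            try { Thread.sleep(100); } catch (InterruptedException e) { }
            return "bar";
        });

        // No thread blocks here: the callback runs only once the result is ready.
        CompletableFuture<Void> done =
            cf.thenAccept(s -> System.out.println("service response: " + s));

        done.join(); // demo only, so the JVM doesn't exit before the callback fires
    }
}
```

In a real servlet the final join() would be unnecessary, since the async context keeps the request open until the callback calls complete().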
After the first listing, I mentioned that there were several problems with the implementation of asynchronous EJB methods. Other than blocking clients, another problem is that according to chapter 4.5.3 of the EJB 3.1 Spec, the client transaction context does not propagate with an asynchronous method invocation. If you wanted to use the @Asynchronous annotation to create two methods which could run in parallel and update a database within a single transaction, it wouldn't work. That limits the use of the @Asynchronous annotation.

Using the CompletableFuture, you might think that you could run several tasks in parallel within the same transactional context, by first starting a transaction in, say, an EJB, then creating a number of runnables and running them using the runAsync method which runs them in an execution pool, and then registering a callback to fire once all were done using the allOf method. But you're likely to fail because of a number of things:
- If you use container managed transactions, then the transaction will be committed once the EJB method which causes the transaction to be started returns control to the container - if your futures are not completed by then, you will have to block the thread running the EJB method so that it waits for the results of the parallel execution, and blocking is precisely what we want to avoid,
- If all the threads in the single execution pool which runs the tasks are blocked waiting for their DB calls to answer then you will be in danger of creating an inperformant solution - in such cases you could try using a non-blocking asynchronous driver, but not every database has a driver like that,
- Thread local storage (TLS) is no longer usable as soon as a task is running on a different thread e.g. like those in the execution pool, because the thread which is running is different from the thread which submitted the work to the execution pool and set values into TLS before submitting the work,
- Resources like EntityManager are not thread-safe. That means you cannot pass the EntityManager into the tasks which are submitted to the pool, rather each task needs to get hold of its own EntityManager instance, but the creation of an EntityManager depends on TLS (see below).
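The thread-local point is easy to demonstrate in plain Java: a value set on the calling thread is invisible to a pool thread. The ThreadLocal here stands in for the container's hidden context (transaction, security info, JPA session), and the names are illustrative:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class TlsDemo {
    // Stands in for the hidden "context" a container keeps in thread-local storage.
    static final ThreadLocal<String> CONTEXT = new ThreadLocal<>();

    public static void main(String[] args) throws Exception {
        CONTEXT.set("transaction-42"); // set on the "request" thread
        ExecutorService pool = Executors.newSingleThreadExecutor();
        try {
            // The task runs on a pool thread, which has its own (empty) thread-local slot.
            String seen = pool.submit(() -> String.valueOf(CONTEXT.get())).get();
            System.out.println("on pool thread: " + seen); // prints "null": TLS did not follow
        } finally {
            pool.shutdown();
        }
    }
}
```

This is exactly why the transaction started on the EJB thread cannot be found from inside a submitted task.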
Let's consider TLS in more detail with the following code, which shows an asynchronous service method attempting to do several things, to test what is allowed.
@Stateless
public class Service5 {

    @Resource ManagedExecutorService mes;
    @Resource EJBContext ctx;
    @PersistenceContext(name="asdf") EntityManager em;

    @Asynchronous
    public void foo(CompletableFuture<String> cf, final PrintWriter pw) {
        //pw.write("<br>inside the service we can rollback, i.e. we have access to the transaction");

        //ctx.setRollbackOnly();

        //in EJB we can use EM
        KeyValuePair kvp = new KeyValuePair("asdf");
        em.persist(kvp);

        Future<String> f = mes.submit(new Callable<String>() {
            @Override
            public String call() throws Exception {
                try {
                    ctx.setRollbackOnly();
                    pw.write("<br/>inside executor service, we can rollback the transaction");
                } catch (Exception e) {
                    pw.write("<br/>inside executor service, we CANNOT rollback the transaction: " + e.getMessage());
                }
                try {
                    //in task inside executor service we CANNOT use EM
                    KeyValuePair kvp = new KeyValuePair("asdf");

                    em.persist(kvp);
                    pw.write("...inside executor service, we can use the EM");
                } catch (TransactionRequiredException e) {
                    pw.write("...inside executor service, we CANNOT use the EM: " + e.getMessage());
                }
                ...
Line 12 is no problem, you can rollback the transaction that is automatically started on line 9 when the container calls the EJB method. But that transaction will not be the global transaction that might have been started by code which calls line 9. Line 16 is also no problem, you can use the EntityManager to write to the database inside the transaction started by line 9. Lines 4 and 18 show another way of running code on a different thread, namely using the ManagedExecutorService introduced in Java EE 7. But this too fails anytime there is a reliance on TLS, for example lines 22 and 31 cause exceptions because the transaction that is started on line 9 cannot be located, because TLS is used to do so and the code on lines 21-35 is run using a different thread than the code prior to line 19.
The next listing shows that the completion callback registered on the CompletableFuture from lines 11-14 also runs in a different thread than lines 4-10: the call to commit the transaction that is started outside of the callback on line 6 will fail on line 13, again because the call on line 13 searches TLS for the current transaction, and because the thread running line 13 is different to the thread that ran line 6, the transaction cannot be found. In fact the listing below actually has a different problem: the thread handling the GET request to the web server runs lines 6, 8, 9 and 11 and then it returns, at which point JBoss logs JBAS010152: APPLICATION ERROR: transaction still active in request with status 0 - even if the thread running line 13 could find the transaction, it is questionable whether it would still be active or whether the container would have closed it.
@Resource UserTransaction ut;

@Override
protected void doGet(HttpServletRequest request,
        final HttpServletResponse response) throws ServletException, IOException {
    ut.begin();
    ...
    CompletableFuture<String> cf = new CompletableFuture<>();
    service.foo(cf, pw);
    ...
    cf.whenCompleteAsync((s, t) -> {
        ...
        ut.commit(); // => exception: "BaseTransaction.commit - ARJUNA016074: no transaction!"
    });
}
The transaction clearly relies on the thread and TLS. But it's not just transactions that rely on TLS. Take for example JPA, which is either configured to store the session (i.e. the connection to the database) directly in TLS or is configured to scope the session to the current JTA transaction, which in turn relies on TLS. Or take for example security checks using the Principal, which is fetched from EJBContextImpl.getCallerPrincipal, which makes a call to AllowedMethodsInformation.checkAllowed, which then calls the CurrentInvocationContext which uses TLS and simply returns if no context is found in TLS, rather than doing a proper permission check as is done on line 112.
These reliances on TLS mean that many standard Java EE features no longer work when using CompletableFutures, or indeed the Java SE fork-join pool or other thread pools, whether they are managed by the container or not.
To be fair to Java EE, the things I have been doing here work as designed! Starting new threads in the EJB container is actually forbidden by the specs. I remember a test I once ran with an old version of Websphere more than ten years ago - starting a thread caused an exception to be thrown because the container was really strictly adhering to the specifications. It makes sense: not only because the number of threads should be managed by the container but also because Java EE's reliance on TLS means that using new threads causes problems. In a way, that means that using the CompletableFuture is illegal, because it uses a thread pool which isn't managed by the container (the pool is managed by the JVM). The same goes for using Java SE's ExecutorService as well. Java EE 7's ManagedExecutorService is a special case - it's part of the specs, so you can use it, but you have to be aware of what it means to do so. The same is true of the @Asynchronous annotation on EJBs.
The result is that writing asynchronous non-blocking applications in a Java EE container might be possible, but you really have to know what you are doing and you will probably have to handle things like security and transactions manually, which does sort of beg the question of why you are using a Java EE container in the first place.
So is it possible to write a container which removes the reliance on TLS in order to overcome these limitations? Indeed it is, but the solution doesn't depend on just Java EE. The solution might require changes in the Java language. Many years ago, before the days of dependency injection, I used to write POJO services which passed a JDBC connection around from method to method, i.e. as a parameter to the service methods. I did that so that I could create new JDBC statements within the same transaction, i.e. on the same connection. What I was doing was not all that different to what things like JPA or EJB containers need to do. But rather than pass things like connections or users around explicitly, modern frameworks use TLS as a place to store the "context", i.e. connections, transactions, security info, etc. centrally. As long as you are running on the same thread, TLS is a great way of hiding such boilerplate code. Let's pretend though that TLS had never been invented. How could we pass a context around without forcing it to be a parameter in each method? Scala's implicit keyword is one solution. You can declare that a parameter can be implicitly located, and that makes it the compiler's problem to add it to the method call. So if Java SE introduced such a mechanism, Java EE wouldn't need to rely on TLS and we could build truly asynchronous applications where the container could automatically handle transactions and security by checking annotations, just as we do today! Saying that, when using synchronous Java EE the container knows when to commit the transaction - at the end of the method call which started the transaction. If you are running asynchronously you would need to explicitly close the transaction, because the container could no longer know when to do so.
Of course, the need to stay non-blocking, and hence the need to not depend on TLS, depends heavily on the scenario at hand. I don't believe that the problems I've described here are a general problem today, rather they are a problem faced by applications dealing with a niche sector of the market. Just take a look at the number of jobs that seem to be currently on offer for good Java EE engineers, where synchronous programming is the norm. But I do believe that the larger IT software systems become and the more data they process, the more that blocking APIs will become a problem. I also believe that this problem is compounded by the current slowdown in the growth of hardware speed. What will be interesting to see is whether a) Java needs to keep up with the trends toward asynchronous processing and b) the Java platform will make moves to fix its reliance on TLS.
https://dzone.com/articles/is-asynchronous-ejb-just-a-gimmick
Bitten by inlining (again)
By Darryl Gove-Oracle on Feb 15, 2010
So this relatively straight-forward looking code fails to compile without optimisation:
#include <stdio.h>

inline void f1()
{
  printf("In f1\n");
}

inline void f2()
{
  printf("In f2\n");
  f1();
}

void main()
{
  printf("In main\n");
  f2();
}
Here's the linker error when compiled without optimisation:
% cc inline.c
Undefined                       first referenced
 symbol                             in file
f2                                  inline.o
ld: fatal: Symbol referencing errors. No output written to a.out
At low optimisation levels the compiler does not inline these functions, but because they are declared as inline functions the compiler does not generate function bodies for them - hence the linker error. To make the compiler generate the function bodies it is necessary to also declare them to be extern (this places them in every compilation unit, but the linker drops the duplicates). This can either be done by declaring them to be
extern inline or by adding a second prototype. Both approaches are shown below:
#include <stdio.h>

extern inline void f1()
{
  printf("In f1\n");
}

inline void f2()
{
  printf("In f2\n");
  f1();
}

extern void f2();

void main()
{
  printf("In main\n");
  f2();
}
It might be tempting to copy the entire function body into a support file:
#include <stdio.h>

void f1()
{
  printf("In duplicate f1\n");
}

void f2()
{
  printf("In duplicate f2\n");
  f1();
}
This is a bad idea, as you might gather from the deliberate difference I've made to the source code. Now you get different code depending on whether the compiler chooses to inline the functions or not. You can demonstrate this by compiling with and without optimisation, but this only forces the issue to appear. The compiler is free to choose whether to honour the inline directive or not, so the functions selected for inlining could vary from build to build. Here's a demonstration of the issue:
% cc -O inline.c inline2.c
inline.c:
inline2.c:
% ./a.out
In main
In f2
In f1
% cc inline.c inline2.c
inline.c:
inline2.c:
% ./a.out
In main
In duplicate f2
In duplicate f1
Douglas Walls goes into plenty of detail on the situation with inlining on his blog.
Note that using inline with sunpro is asking for trouble.
(not sure the second one is caused by inline as I don't have the means to reproduce it, but it is my first guess)
Posted by Marc on February 15, 2010 at 08:56 PM PST #
Thanks, Marc.
I don't agree with your conclusion that using inlining always causes trouble. However, you do seem to have hit some issues!
The first issue is a bug.
The second issue is a problem of some kind, but there's not sufficient information to be able to figure out where the problem is. It looks like non-inlined versions of the functions get included multiple times.
Yeah, cg dying is always a bug.
Posted by Darryl Gove on February 16, 2010 at 02:31 AM PST #
|
https://blogs.oracle.com/d/entry/bitten_by_inlining_again
|
CC-MAIN-2016-22
|
refinedweb
| 503
| 61.97
|
John Nagle wrote:
> Chris Rebert wrote:
>> On Tue, Mar 30, 2010 at 8:40 AM, gentlestone <tibor.beck at hotmail.com> wrote:
>>> Hi, how can I write the popular C/JAVA syntax in Python?
>>>
>>> Java example:
>>> return (a==b) ? 'Yes' : 'No'
>>>
>>> My first idea is:
>>> return ('No','Yes')[bool(a==b)]
>>>
>>> Is there a more elegant/common python expression for this?
>>
>> Yes, Python has ternary operator-like syntax:
>> return ('Yes' if a==b else 'No')
>>
>> Note that this requires a recent version of Python.
>
> Who let the dogs in? That's awful syntax.

Yes, that's deliberately awful syntax. Guido designed it that way to ensure that people didn't over-use it, thereby reducing the readability of Python applications.

Speaking purely personally I hardly ever use it, but don't dislike it.

regards
Steve

--
Steve Holden  +1 571 484 6266  +1 800 494 3119
See PyCon Talks from Atlanta 2010
Holden Web LLC
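The two idioms from the thread can be put side by side; the answer function below is only for illustration:

```python
def answer(a, b):
    # Tuple-indexing trick from the original question:
    # relies on False/True indexing as 0/1.
    via_tuple = ('No', 'Yes')[a == b]
    # Conditional expression (Python 2.5+), the idiomatic form.
    via_ternary = 'Yes' if a == b else 'No'
    assert via_tuple == via_ternary
    return via_ternary

print(answer(1, 1))  # Yes
print(answer(1, 2))  # No
```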
|
https://mail.python.org/pipermail/python-list/2010-March/572409.html
|
CC-MAIN-2014-15
|
refinedweb
| 156
| 68.16
|
Date: 2004-12-05T22:47:12
Editor: AlexKarasulu <akarasulu@apache.org>
Wiki: Apache Directory Project Wiki
Page: EveGeneral
URL:
Change Log:
------------------------------------------------------------------------------
@@ -4,41 +4,13 @@
== Out-of-the-box Authentication ==
-I really wanted to make Authentication something that does not get in the way of users
-not needing it. Meaning if users did not have any security requirements where
-they're just using Eve (especially in embedded mode) as a simple backing store using LDAP
-as the namespace they should not have to authenticate. To balance enabling both types of
-users (those needing and not needing auth) while minimizing first time startup configuration
-overheads and authorization issues we needed a policy for dealing with user passwords in
-general and the system user password. First let's list some of our requirements and some
-notes about the problems.
-
-Requirements for Setting Admin (super-user) Password:
- * minimize setup overhead in general
- * config-less operation even without providing a password should be possible for those that
just want to use eve as an LDAP backing store
- * users that do not care about authorizing effectively want to be super users all the time
to get around authorization issues to have free reign
-
-Notes:
- * According to LDAP JNDI provider implementation guidelines, "if this property [java.naming.security.authentication]
is not set then its default value is none, unless the java.naming.security.credentials property
is set, in which case the default value is simple." So this means config-less operation presumes
anonymous binds and we must conform to these guidelines.
- * Most LDAP browsers do not allow simple binds using null or empty passwds. This makes
using a null password a poor choice for the super user.
-
-So from this information we find ourselve kinda stuck. Without any credential information
anonymous binds should be in effect not simple binds. So we need to provide something. Secondly
we can't use null or empty passwords for any users.
-
-There is one gray area though where we might be able to appease all however its a risky proposition.
The anonymous user right now defaults to the empty string or empty DN user "" which is never
authenticated. We can make the anonymous user configurable and have it default to the super
user. This is sorta insane for anyone except for those who want to embed the server. Even
they may not want to expose this via LDAP though. Right now LDAP access is enabled by default
blowing up the LDAP server automatically. We can supress this default behavoir when the anonymous
account is the super-user. We can require a property to force it on in addition to the property
we have today to force disabling the LDAP server.
-
-So a new property for setting the anonymous account would need to be created: eve.anonymous.user.
If this property is not present, the super-user account's DN (uid=admin,ou=system) is used
for anonymous binds by default. If present and set to null or the empty string the anonymous
user is the empty string DN (""). If the anonymous account is set to the super-user then
the default behavoir of firing up the LDAP server is supressed. An explicit force server
on property can be used to turn on the server even when the super-user is the anonymous user.
Right now eve.net.disable.protocol is used to force suppression of firing up the server.
We can use eve.net.enable.protocol to force enabling the protocol server when the super-user
account is used as the anonymous user. These two symmetric properties are overrides. Should
we just use one property for the override and use a true or a false value instead of having
two marker properties? This might be best and we can use eve.net.protocol.override which
can be "on" or "off". Also by default we must enable anonymous binds. Right now anonymous
binds are turned off out of the box. To turn them on the eve.enable.anonymous property must
be present. I say we just swap the setup where we rename eve.enable.anonymous to eve.disable.anonymous
and enable anonymous binds by default.
-
-These present security issues but they make using Eve as a JNDI provider sooo much easier
without having to set any properties at all. Not just yet! We still need to look at what
happens on the first start which creates the uid=admin account. With a config-less setup
the passwd is set to "" the empty string. If the credentials property is provided on startup
it is presumed to be the admin's password. If the principal property is also provided on
the first startup it must be the admin user DN. If it is not then we throw a configuration
exception. By default if credentials are provided without a principal name the super-user
is presumed to be the principal by default since the authentication type now becomes simple.
-
-WDYT?
-
-
-
-
-
+* Eve's super-user (uid=admin,ou=system) is created on the first start and has its userPassword
field set to "secret".
+* Another test user uid=akarasulu,ou=users,ou=system is created on first startup and has
password "test".
+* Any user entry that has the userPassword attribute set can be authenticated. The user
need not be under ou=users, ou=system.
+* There are advantages to creating users under ou=users, ou=system. First the user is available
regardless of the context partitions that are created. The user also is protected by some
hardcoded authorization rules within the system. Namely only self read is possible for all
users on their own accounts. Users cannot see the credentials of others minus the super-user
of course. This is an intermediate hardcoded authorization rule set until the authorization
subsystem matures.
|
http://mail-archives.apache.org/mod_mbox/directory-commits/200412.mbox/%3C20041206064712.11875.63465@minotaur.apache.org%3E
|
CC-MAIN-2016-22
|
refinedweb
| 964
| 56.55
|
The Java Specialists' Newsletter
Issue 068 (2003-04-21)
Category: Performance
Java version:
Welcome to the 68th edition of The Java(tm) Specialists' Newsletter, sent to 6400 Java
Specialists in 95 countries.
Since our last newsletter, we have had two famous Java authors
join the ranks of subscribers. It gives me great pleasure to
welcome Mark Grand and Bill Venners to our list of
subscribers.
Mark is famous for his three volumes of Java Design Patterns
books. You will notice that I quote Mark in the brochure
of my Design Patterns course. Bill is famous for his book
Inside The Java Virtual Machine.
Bill also does a lot of work training with Bruce Eckel.
Our last newsletter on BASIC Java
produced gasps of disbelief. Some readers
told me that they now wanted to unsubscribe, which of course I
supported 100%. Others enjoyed it with me. It was meant in
humour, as the warnings at the beginning of the newsletter clearly
indicated.
The first code that I look for when I am asked to find out why
some code is slow is concatenation of Strings. When we concatenate
Strings with += a whole lot of objects are constructed.
Before we can look at an example, we need to define a Timer class
that we will use for measuring performance:
[The Timer class listing did not survive extraction.]
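The Timer listing above was lost in extraction. A minimal sketch of what such a measuring helper might have looked like follows; only the name "Timer" comes from the text, everything else is a guess, not the author's code:

```java
// Hypothetical reconstruction of a simple performance timer:
// prints a label on construction and the elapsed time on done().
public class Timer {
    private final String name;
    private final long start = System.currentTimeMillis();

    public Timer(String name) {
        this.name = name;
        System.out.println(name);
    }

    // Prints and returns how long it took since construction.
    public long done() {
        long elapsed = System.currentTimeMillis() - start;
        System.out.println("Took " + elapsed + "ms");
        return elapsed;
    }

    public static void main(String[] args) {
        Timer t = new Timer("String += demo");
        String s = "";
        for (int i = 0; i < 1000; i++) s += i;
        System.out.println("Length = " + s.length());
        t.done();
    }
}
```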
In the test case, we have three tasks that we want to measure.
The first is a simple += String append, which turns out to be
extremely slow. The second creates a StringBuffer and calls
the append method of StringBuffer. The third method creates
the StringBuffer with the correct size and then appends to
that. After I have presented the code, I will explain what
happens and why.

[The benchmark listing did not survive extraction.]
This program does use quite a bit of memory, so you should set
the maximum old generation heapspace to be quite large, for example
256mb. You can do that with the -Xmx256m flag.
When we run this program, we get the following output:
String += 10000 additions
Length = 38890
Took 2203ms
StringBuffer 300 * 10000 additions initial size wrong
Length = 19888890
Took 2254ms
StringBuffer 300 * 10000 additions initial size right
Length = 19888890
Took 1562ms
You can observe that using StringBuffer directly is
about 300 times faster than using +=.
Another observation that we can make is that if we set
the initial size to be correct, it only takes 1562ms
instead of 2254ms. This is because of the way that
java.lang.StringBuffer works. When you create a new
StringBuffer, it creates a char[] of size 16. When
you append, and there is no space left in the char[]
then it is doubled in size. This means that if you
size it first, you will reduce the number of char[]s
that are constructed.
The time that the += String append takes is dependent
on the compiler that you use to compile the code. I
discovered this accidentally during my Java course last
week, and much to my embarrassment, I did not know why
this was. If you compile it from within Eclipse, you get
the result above, and if you compile it with Sun's
javac, you get the output below. I think
that Eclipse uses jikes to compile the code, but I am not
sure. Perhaps it even has an internal compiler?
javac
String += 10000 additions
Length = 38890
Took 7912ms
StringBuffer 300 * 10000 additions initial size wrong
Length = 19888890
Took 2634ms
StringBuffer 300 * 10000 additions initial size right
Length = 19888890
Took 1822ms
This took some head-scratching, resulting in my fingers
being full of wood splinters. I started by writing a
class that did the basic String append with +=.
public class BasicStringAppend {
public BasicStringAppend() {
String s = "";
for(int i = 0; i < 100; i++) {
s += i;
}
}
}
When in doubt about what the compiler does, disassemble
the classes. Even when I disassembled them, it took a
while before I figured out what the difference was and
why it was important. The part where they differ is in
italics. You can disassemble a class with the
tool javap that is in the bin directory of
your java installation. Use the -c parameter:
javap
javap -c BasicStringAppend
Compiled with Eclipse:
Compiled from BasicStringAppend.java
public class BasicStringAppend extends java.lang.Object {
public BasicStringAppend();
}
Method BasicStringAppend()
0 aload_0
1 invokespecial #9 <Method java.lang.Object()>
4 ldc #11 <String "">
6 astore_1
7 iconst_0
8 istore_2
9 goto 34
12 new #13 <Class java.lang.StringBuffer>
15 dup
16 aload_1
17 invokestatic #19 <Method java.lang.String valueOf(java.lang.Object)>
20 invokespecial #22 <Method java.lang.StringBuffer(java.lang.String)>
23 iload_2
24 invokevirtual #26 <Method java.lang.StringBuffer append(int)>
27 invokevirtual #30 <Method java.lang.String toString()>
30 astore_1
31 iinc 2 1
34 iload_2
35 bipush 100
37 if_icmplt 12
40 return
Compiled with Sun's javac:
Compiled from BasicStringAppend.java
public class BasicStringAppend extends java.lang.Object {
public BasicStringAppend();
}
Method BasicStringAppend()
0 aload_0
1 invokespecial #1 <Method java.lang.Object()>
4 ldc #2 <String "">
6 astore_1
7 iconst_0
8 istore_2
9 goto 34
12 new #3 <Class java.lang.StringBuffer>
15 dup
16 invokespecial #4 <Method java.lang.StringBuffer()>
19 aload_1
20 invokevirtual #5 <Method java.lang.StringBuffer append(java.lang.String)>
23 iload_2
24 invokevirtual #6 <Method java.lang.StringBuffer append(int)>
27 invokevirtual #7 <Method java.lang.String toString()>
30 astore_1
31 iinc 2 1
34 iload_2
35 bipush 100
37 if_icmplt 12
40 return
Instead of explaining what every line does (which I hope should not
be necessary on a Java Specialists' Newsletter) I present
the equivalent Java code for both IBM's Eclipse and Sun. The differences,
which equate to the disassembled difference, is again in italics:
public class IbmBasicStringAppend {
public IbmBasicStringAppend() {
String s = "";
for(int i = 0; i < 100; i++) {
s = new StringBuffer(String.valueOf(s)).append(i).toString();
}
}
}
public class SunBasicStringAppend {
public SunBasicStringAppend() {
String s = "";
for(int i = 0; i < 100; i++) {
s = new StringBuffer().append(s).append(i).toString();
}
}
}
It does not actually matter which compiler is better, either is terrible.
The answer is to avoid += with Strings wherever possible.
You should never reuse a StringBuffer object. Construct it, fill it,
convert it to a String, and then throw it away.
Why is this? StringBuffer contains a char[]
which holds the characters to be used for the String. When you call
toString() on the StringBuffer, does it make a copy of
the char[]? No, it assumes that you will
throw the StringBuffer away and constructs a String with a pointer to
the same char[] that is contained inside
StringBuffer! If you do change the StringBuffer after creating
a String, it makes a copy of the char[] and
uses that internally. Do yourself a favour and read the source code
of StringBuffer - it is enlightening.
But it gets worse than this. In JDK 1.4.1, Sun changed the way that
setLength() works. Before 1.4.1, it was safe to do the following:
... // StringBuffer sb defined somewhere else
sb.append(...);
sb.append(...);
sb.append(...);
String s = sb.toString();
sb.setLength(0);
The code of setLength pre-1.4.1 used to contain the following
snippet of code:
if (count < newLength) {
// *snip*
} else {
count = newLength;
if (shared) {
if (newLength > 0) {
copy();
} else {
// If newLength is zero, assume the StringBuffer is being
// stripped for reuse; Make new buffer of default size
value = new char[16];
shared = false;
}
}
}
It was replaced in the 1.4.1 version with:
if (count < newLength) {
// *snip*
} else {
count = newLength;
if (shared) copy();
}
Therefore, if you reuse a StringBuffer in JDK 1.4.1, and any one of the
Strings created with that StringBuffer is big,
all future Strings will have the same size char[]. This is not very
kind of Sun, since it causes bugs in many libraries. However, my argument
is that you should not have reused
StringBuffers anyway, since you will have less overhead simply creating
a new one than setting the size to zero again.
This memory leak was pointed out to me by Andrew Shearman during one
of my courses, thank you very much! For more information, you can
visit Sun's
website.
When you read those posts, it becomes apparent that JDOM reuses StringBuffers
extensively. It was probably a bit mean to change StringBuffer's setLength()
method, although I think that it is not a bug. It is simply highlighting
bugs in many libraries.
For those of you that use JDOM, I hope that JDOM will be fixed soon to cater
for this change in the JDK. For the rest of us, let us remember to throw away
used StringBuffers.
So long...
Heinz
Performance Articles
Related Java Course
|
http://www.javaspecialists.co.za/archive/Issue068.html
|
CC-MAIN-2014-52
|
refinedweb
| 1,465
| 72.46
|
PyWebDAV 0.9.8
WebDAV library including a standalone server for python
WebDAV library for python.
Consists of a server that is ready to run Serve and the DAV package that provides WebDAV server(!) functionality.
Currently supports
- WebDAV level 1
- Level 2 (LOCK, UNLOCK)
- Experimental iterator support
It plays nice with
- Mac OS X Finder
- Windows Explorer
- iCal
- cadaver
- Nautilus
This package does not provide client functionality.
Installation
After installation of this package you will have a new script in your $PYTHON/bin directory called davserver. This serves as the main entry point to the server.
Examples
Example (using easy_install):
easy_install PyWebDAV
davserver -D /tmp -n
Example (unpacking file locally):
tar xvzf PyWebDAV-$VERSION.tar.gz
cd pywebdav
python setup.py develop
davserver -D /tmp -n
For more information:
Changes
0.9.8 (March 25 2011)
Restructured. Moved DAV package to pywebdav.lib. All integrators must simply replace ”from DAV” imports to ”from pywebdav.lib”. [Simon Pamies]
Remove BufferingHTTPServer, reuse the header parser of BaseHTTPServer. [Cédric Krier]
Fix issue 44: Incomplete PROPFIND response [Sascha Silbe]
0.9.4 (April 15 2010)
Add some configuration setting variables to enable/disable iterator and chunk support [Stephane Klein]
Removed os.system calls thus fixing issue 32 [Simon Pamies]
Fixed issue 14 [Simon Pamies]
Removed magic.py module - replaced with mimetypes module [Simon Pamies]
Print User-Agent information in log request. [Stephane Klein]
Fix issue 13 : return http 1.0 compatible response (not chunked) when request http version is 1.0 [cliff.wells]
Enhance logging mechanism [Stephane Klein]
Fix issue 15 : I’ve error when I execute PUT action with Apple Finder client [Stephane Klein]
Fix issue 14 : config.ini boolean parameter reading issue [Stephane Klein]
0.9.3 (July 2 2009)
Setting WebDAV v2 as default because LOCK and UNLOCK seem to be stable by now. -J parameter is ignored and will go away. [Simon Pamies]
Fix for PROPFIND to return all properties [Cedric Krier]
Fixed do_PUT initialisation [Cedric Krier]
Added REPORT support [Cedric Krier]
Added support for gzip encoding [Cedric Krier]
Fix for wrong –port option [Martin Wendt]
Handle paths correctly for Windows related env [Martin Wendt]
Included mimetype check for files based on magic.py from Jason Petrone. Included magic.py into this package. All magic.py code (c) 2000 Jason Petrone. Included from. [Joerg Friedrich, Simon Pamies]
Status check not working when server is running [Joerg Friedrich]
Fixed wrong time formatting for Last-Modified and creationdate (must follow RFC 822 and 3339) [Cedric Krier]
0.9.2 (May 11 2009)
Fixed COPY, MOVE, DELETE to support locked resources [Simon Pamies]
Fixed PROPFIND to return 404 for non existing objects and also reduce property bloat [Simon Pamies]
Implemented fully working LOCK and UNLOCK based on in memory lock/token database. Now fully supports cadaver and Mac OS X Finder. [Simon Pamies]
Fixed MKCOL answer to 201 [Jesus Cea]
Fixed MSIE webdav headers [Jesus Cea]
Make propfind respect the depth from queries [Cedric Krier]
Add ETag in the header of GET. This is needed to implement GroupDAV, CardDAV and CalDAV. [Cedric Krier]
Handle the “Expect 100-continue” header [Cedric Krier]
Remove debug statements and remove logging [Cedric Krier]
Use the Host header in baseuri if set. [Cedric Krier]
Adding If-Match on PUT and DELETE [Cedric Krier]
0.9.1 (May 4th 2009)
Restructured the structure a bit: Made server package a real python package. Adapted error messages. Prepared egg distribution. [Simon Pamies]
Fix for time formatting bug. Thanks to Ian Kallen [Simon Pamies]
Small fixes for WebDavServer (status not handled correctly) and propfind (children are returned from a PROPFIND with “Depth: 0”) [Kjetil Irbekk]
0.8 (Jul 15th 2008)
First try of an implementation of the LOCK and UNLOCK features. Still very incomplete (read: very incomplete) and not working in this version. [Simon Pamies]
Some code cleanups to prepare restructuring [Simon Pamies]
Port to minidom because PyXML isn’t longer maintained [Martin v. Loewis]
utils.py: Makes use of DOMImplementation class to create a new xml document Uses dom namespace features to create elements within DAV: namespace [Stephane Bonhomme]
davcmd.py: Missing an indent in loop on remove and copy operations on trees, the effect was that only the last object was removed/copied : always leads to a failure when copying collections. [Stephane Bonhomme]
propfind.py: missing a return at the end of the createResponse method (case of a propfind without xml body, should act as a allprops). [Stephane Bonhomme]
0.7
- Added MySQL auth support brought by Vince Spicer
- Added INI file support also introduced by Vince
- Some minor bugfixes and integration changes
- Added instance counter to make multiple instances possible
- Extended --help text a bit

[Simon Pamies]
0.6
- Added bugfixes for buggy Mac OS X Finder implementation (Finder tries to stat .DS_Store without checking if it exists)
- Cleaned up readme and install files
- Moved license to extra file
- Added distutils support
- Refactored module layout
- Refactored class and module names
- Added commandline support
- Added daemonize support
- Added logging facilities
- Added extended arguments
- some more things I can't remember

[Simon Pamies]
Changes since 0.5.1
Updated to work with latest 4Suite
Changes since 0.5
- added constants.py
- data.py must now return COLLECTION or OBJECT when asked for resourcetype; propfind.py will automatically generate the right xml element
- <href> now only contains the path
- changed HTTP/1.0 header to HTTP/1.1 which makes it work with WebFolders
- added DO_AUTH constant to AuthServer.py to control whether authentication should be done or not
- added chunked responses in davserver.py, one step in order to get a server with keep-alive one day
- we now use 4DOM instead of PyDOM
- the URI in a href is quoted
- complete rewrite of the PROPFIND stuff: error responses are now generated when a property is not found or not accessible; namespace handling is now better (we forget any prefix and create them ourselves later in the response)
- added superclass iface.py in DAV/ in order to make implementing interface classes easier; see data.py for how to use it (note that the way data.py handles things might have changed from the previous release; if you don't like it wait for 1.0!)
- added functions to iface.py which format creationdate and lastmodified
- implemented HEAD
- lots of bugfixes
Changes since 0.3
- removed hard coded base uri from davserver.py and replaced by a reference to the dataclass; added this to iface.py where you have to define it in your subclass
- added davcmd.py which contains utility functions for copy and move
- reimplemented DELETE and removed dependencies to pydom; moved the actual delete method to davcmd
- implemented COPY
- implemented MOVE
- fixed bugs in errors.py (needs revisiting anyway)
- URIs are now unquoted in davserver.py before being used
- paths in data.py are quoted in system calls in order to support blanks in pathnames (e.g. mkdir '%s')
- switched to exceptions when catching errors from the interface class
- added exists() method to data.py
- added more uri utility functions to utils.py
- millennium bugfixes ;-)
- Downloads (All Versions):
- 0 downloads in the last day
- 35 downloads in the last week
- 1423 downloads in the last month
- Author: Simon Pamies
- Keywords: webdav,server,dav,standalone,library,gpl,http,rfc2518,rfc 2518
- License: GPL v2
- Platform: Unix,Windows
- Categories
- Development Status :: 5 - Production/Stable
- Environment :: Console
- Environment :: Web Environment
- Intended Audience :: Developers
- Intended Audience :: System Administrators
- License :: OSI Approved :: GNU General Public License (GPL)
- Operating System :: MacOS :: MacOS X
- Operating System :: POSIX
- Programming Language :: Python
- Topic :: Software Development :: Libraries
- Package Index Owner: ced, spamsch
- DOAP record: PyWebDAV-0.9.8.xml
|
https://pypi.python.org/pypi/PyWebDAV
|
CC-MAIN-2016-18
|
refinedweb
| 1,271
| 56.96
|
The DataFrame.axes attribute in Pandas returns a list representing the axes of a given DataFrame: the row-axis labels and the column-axis labels, in that order.
DataFrame.axes attribute
This attribute takes no parameters.
It returns a list whose only members are the row-axis and column-axis labels, in that order.
import pandas as pd

# creating a dataframe
df = pd.DataFrame({'AGE': [20, 29],
                   'HEIGHT': [94, 170],
                   'WEIGHT': [80, 115]})

# obtaining the list representing the axes of df
print(df.axes)
First, we import the pandas module. Next, we create a DataFrame, df. Finally, we use the DataFrame.axes attribute to obtain the list representing the axes of df, and print the result to the console.
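A quick sketch of what the attribute returns; the list's two members can be unpacked directly:

```python
import pandas as pd

df = pd.DataFrame({'AGE': [20, 29], 'HEIGHT': [94, 170]})

# df.axes is [row labels, column labels], in that order
rows, cols = df.axes
print(list(rows))   # [0, 1]
print(list(cols))   # ['AGE', 'HEIGHT']
```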
|
https://www.educative.io/answers/how-to-return-a-list-representing-the-axes-of-a-given-dataframe
|
CC-MAIN-2022-33
|
refinedweb
| 123
| 58.38
|
Opened 3 years ago
Last modified 3 years ago
#11879 new Bugs
Incorrect use of reset cause unexpected impact on previous code segment
Description
#include <boost/scoped_ptr.hpp>
#include <iostream>

int main() {
    boost::scoped_ptr<int> p{new int{1}};
    std::cout << *p << '\n';
    std::cout << p.get() << '\n';
    p.reset(new int{2});
    std::cout << *p.get() << '\n';
    std::cout << p.get() << '\n';
    p.reset((int *)4);  // the problematic statement
    std::cout << *p.get() << '\n';
    std::cout << p.get() << '\n';
    p.reset();
    std::cout << std::boolalpha << static_cast<bool>(p) << '\n';
}

Problem: Because of the statement p.reset((int *)4), the std::cout calls on the lines above it print nothing. When this line is commented out, the program works fine. I understand that I have used the reset function incorrectly, but it should only impact the statements after it, yet it also impacts the statements above it. Please explain the cause.
Change History (6)
comment:1 Changed 3 years ago by
comment:2 Changed 3 years ago by
In this case segmentation fault occurred with few other compilers.
comment:3 Changed 3 years ago by
comment:4 Changed 3 years ago by
std::cout is buffered. You'll get the result you expect, if you add a flush before the incorrect code.
comment:5 Changed 3 years ago by
comment:6 Changed 3 years ago by
Why there is no exception handling for scoped_ptr creation?
I forgot to use wiki formatting. Comment start is missing from the statement p.reset((int *)4);
|
https://svn.boost.org/trac10/ticket/11879
|
CC-MAIN-2018-34
|
refinedweb
| 239
| 68.57
|
Is it possible to add a documentation string to a namedtuple in an easy manner?
I tried
from collections import namedtuple
Point = namedtuple("Point", ["x", "y"])
"""
A point in 2D space
"""
# Yet another test
"""
A(nother) point in 2D space
"""
Point2 = namedtuple("Point2", ["x", "y"])
print Point.__doc__ # -> "Point(x, y)"
print Point2.__doc__ # -> "Point2(x, y)"
You can achieve this by creating a simple, empty wrapper class around the returned value from
namedtuple. Contents of a file I created (
nt.py):
from collections import namedtuple

Point_ = namedtuple("Point", ["x", "y"])

class Point(Point_):
    """ A point in 2d space """
    __slots__ = ()
Then in the Python REPL:
>>> print nt.Point.__doc__ A point in 2d space
Or you could do:
>>> help(nt.Point) # which outputs...
Help on class Point in module nt:

class Point(Point)
 |  A point in 2d space
 |
 |  Method resolution order:
 |      Point
 |      Point
 |      __builtin__.tuple
 |      __builtin__.object
...
If you don't like doing that by hand every time, it's trivial to write a sort-of factory function to do this:
def NamedTupleWithDocstring(docstring, *ntargs):
    nt = namedtuple(*ntargs)
    class NT(nt):
        __slots__ = ()
        __doc__ = docstring
    return NT

Point3D = NamedTupleWithDocstring("A point in 3d space", "Point3d", ["x", "y", "z"])

p3 = Point3D(1, 2, 3)
print p3.__doc__
which outputs:
A point in 3d space
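On Python 3.5 and later (unlike the Python 2 shown above), the __doc__ attribute of the generated class is simply writable, so no wrapper class is needed:

```python
from collections import namedtuple

Point = namedtuple("Point", ["x", "y"])
Point.__doc__ = "A point in 2D space"   # writable since Python 3.5

print(Point.__doc__)  # A point in 2D space
```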
|
https://codedump.io/share/CsoqqsrHCM2c/1/adding-docstrings-to-namedtuples
|
CC-MAIN-2017-26
|
refinedweb
| 212
| 62.98
|
Howdy all,
I've just rolled out the first install.sh (version 6.0.11) with support for Ubuntu 18.04 LTS. I've also created a new virtualmin-bionic repository with 18.04 LTS binary packages.
This should be considered a beta release. Please test your use case thoroughly before counting on it in production! We always expect issues to be found when new distributions are released, because things change in subtle ways. Versions of underlying software changes, paths may change, default options might change, etc. In my testing of the most common installation scenarios, it seems to work well, but Virtualmin has a gazillion options and manages a bazillion moving parts, so surely y'all will find something borked once you start using it.
Known issues:
If you run into any bugs, please let me know in the issue tracker.
Cheers,
Joe
Edit 7/10/2018: Updated to specifically mention the netplan issue, which is ongoing. So, still beta. Read the whole thread and links for more info on the status of that, if you want to install on 18.04 today. You must use the old network configuration scripts for now.
New known issue on Ubuntu 18.04:
Ubuntu's default network configuration system changed completely in 17.10 to something called netplan. I didn't notice because the test systems I spun up at a virtual machine provider I frequently use had been configured to use the old network configuration system.
Jamie is planning to implement support for netplan in Webmin in the coming weeks. But, and this is a big but: This is a whole new service with its own configuration files, its own management tools, and its own way of doing things. It is not a minor juggling of config file locations or some new management tools stacked on top of the old thing. Thus, it's gonna take some real work to support it. I do not know how long; Jamie can't dig in until this weekend, so we won't even know the scope of the work until then. Jamie is wicked fast, but even he is human, so I think an optimistic estimate for supporting netplan is at least a couple of weeks and probably longer.
So...that means if you want to install Virtualmin on Ubuntu 18.04 today, you'll need to convert your system to using the old network configuration scripts. This isn't hugely complicated from what I've found, but I haven't tried it yet, so I dunno for sure. There have been several discussions about this change and how to revert it at Ask Ubuntu among other places. I'll find a provider that has Ubuntu instances that use the new configuration in the next day or two (or make my own new image for Cloudmin and spin it up on one of our servers), so I can try it out and document how to change it.
Anyway, I wanted to get this out, since I finally sorted out today that all of the disparate bug reports I've gotten were actually this same problem, even though the reports were about a bunch of different services. (Folks didn't notice that the installer bailed before it finished...fatal errors will exit the installer before completing the rest of the config...so you can't ignore a fatal installer error. The system isn't expected to be configured correctly after a fatal error during install.)
--
Thank you so much Joe, you folks are amazing awesome. I'll stick to 16.04 in the meantime.
Rinus
In-place distro upgrades also seem to work. I had no issues on either of my machines, running Pro on one and the GPL version on the other.
Joseph Dobransky
FWIW, there are 2 versions of Ubuntu 18.04 server installer ISO. A new "live" version and an "alternate" old version. Unsure if the net config on each is different.
EDIT: just did 3 cloudmin KVM installs of 18.04. Using mini.iso, using live.iso, using alt.iso. All 3 default to netplan config in /etc/netplan/*.cfg. Only difference I saw in my quick install tests was the live.iso got an IPV6 automatically and didn't ask for or care if it had IPV4.
I'd be happy to let you test with these KVMs, or an empty system.
There isn't really anything to test until we've got netplan support in Webmin. The installer currently works (close to perfectly, as far as I can tell, though I'd still recommend testing a bit before putting into production) if the system is using the old method of configuring networks, and will not work if the system is using netplan.
So...we're waiting on Jamie to have enough spare time to implement some netplan support in Webmin, and then I'll make the installer aware of netplan. My part of it is small, and shouldn't take more than a day or so, but Jamie's part is big. It's probably not so awful, since at least it uses a well-known existing config file format (YAML), so Jamie doesn't have to write a new custom parser (though he's good as heck at doing that, it's still more work than just using a library), but he'll still need to munge the data into data structures the Webmin network module understands so it can be presented in the UI and be interacted with by other modules, like Virtualmin.
--
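As an aside, the YAML-to-data-structure munging described above can be sketched in a few lines of Python. This is a hypothetical illustration, not Webmin's actual code: it assumes the netplan YAML has already been loaded (e.g. with a YAML library) into nested dicts shaped like the configs quoted later in this thread, and `summarize_netplan` is an invented helper name.

```python
# Hypothetical sketch: flatten a parsed netplan config into a per-interface
# summary that a network UI module could display. 'config' is the nested
# dict/list structure a YAML parser produces for a netplan file.
def summarize_netplan(config):
    """Return {interface: {"addresses": [...], "gateway4": ...}}."""
    ethernets = config.get("network", {}).get("ethernets", {})
    return {
        name: {
            "addresses": iface.get("addresses", []),
            "gateway4": iface.get("gateway4"),
        }
        for name, iface in ethernets.items()
    }

# Example input, mirroring the shape of the configs quoted in this thread:
parsed = {
    "network": {
        "version": 2,
        "ethernets": {
            "eth0": {
                "addresses": ["142.93.118.130/20", "10.10.0.5/16"],
                "gateway4": "142.93.112.1",
            },
        },
    },
}

print(summarize_netplan(parsed))
```

The hard part Jamie faces is of course the reverse direction as well: writing edits back out without corrupting the rest of the file.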
Just to note that the installer has curl set up like so:
/usr/bin/curl -f -s -L -O
this needs the -k switch to get it past the invalid certificate warning on the script.
Otherwise, so far so good :)
Signature on its way...
Hi all,
Is support for installing Ubuntu 18.04 LTS with install.sh imminent?
Trying to decide whether I should hold off for a few days or go with 16.04 now.... would hate to find out install.sh would have installed 18.04 if I had just waited another day or two :)
Chris
oh, FYI, it says on the install.sh script that 18.04 is supported... I'll assume this thread is more up to date and that's a leftover...
It's still beta and netplan is still unsupported. I changed the OS support list in the installer a little prematurely because I didn't know about the netplan issue (my test instances at our VPS did not use netplan, so everything Just Worked in my testing).
--
Yesterday I installed Ubuntu 18.04 and the latest Virtualmin. Installation was OK, but Network/DNS configuration can only be done manually. So we must wait for a compliant Virtualmin version. Does anyone know when that might happen?
This is work in progress. See also
But progress seems to have stalled a bit.
Just a quick update on this:
Jamie is finishing up the new Webmin release with Netplan support as we speak, with expectation to roll it out within the next few hours (barring any problems).
Once that gets rolled out, I'll do some testing and make any modifications we need to the install script and the virtualmin-config Net plugin.
Hopefully, y'all will be able to try it out before the end of the week, and it should be production-ready within a week or so (I assume any new OS needs some testing by real users before it's really solid).
--
Awesome, thanks for the update!
Another update: Webmin 1.890 with netplan support is in the repos. I should be able to do some testing and make any necessary updates in the next day or two for Ubuntu 18.04 support.
--
Thanks, Joe!
Will Ubuntu 18.04 now be mentioned in the install script?
When it's finished it will be. So, another day or two.
--
install script with LEMP bundle option on Ubuntu 18.04 LTS produces an installation that does not come back from a reboot. the host becomes unreachable.
When install completes, I connect via port 10000 and complete the wizard. the dashboard says a reboot is needed, so I reboot it and then the host becomes unreachable. I reinstalled Ubuntu 18.04 on the vps and ran the install.sh script again with the same result. I have not been able to locate the issue as of yet.
install with LAMP works and I was able to manually install nginx which produced a working server.
Let me know if anyone else can reproduce this problem.
I am having the same problem on a LEMP install on DigitalOcean, Ubuntu 18.04.1 as well. When install completes, I can't even access port 10000 though. After rebooting, I can no longer access SSH remotely.
I faced a similar problem with EC2. I used LAMP with -m and after reboot I can't access it anymore either through SSH or webmin.
Well today, my working LAMP installation on Ubuntu 18.04 got some updates and, after installing them, needed a reboot. It never came back from the reboot. So let's just say that at this time, Virtualmin does not work on Ubuntu 18.04, whether you select the LAMP or LEMP bundle. I am now falling back to Ubuntu 16.04.
I am testing this on Digital Ocean and I am still able to access the VPS from their console VNC access but not from the outer internet. I tried shutting down firewalld to no avail.
There was a configuration corruption issue in Webmin 1.890 with netplan-based distributions in some cases. I never saw it in my tests, but several folks reported it, and we think it's been fixed in 1.891. Try a fresh install and let us know what happens.
Also, no installation options will make any difference with this problem...there's no need to test different options. It will either work or it won't, regardless of whether it is LAMP, LEMP or minimal mode installation. The problem was (and maybe still is if we missed something) with the network configuration step, which happens the same across all installation modes.
--
I've done a fresh install of Ubuntu 18.04 and virtualmin and it is looking good so far. if anything goes wrong, I'll update this thread.
Thanks!
I redid it again just now and after the restart everything seems to work fine. Thanks Joe
I tried installing again on DigitalOcean but the installation turned up some errors:
Installing updates to Virtualmin-related packages [ ✔ ]
▣▣▣ Phase 3 of 3: Configuration
[1/23] Configuring AWStats [ ✔ ]
[2/23] Configuring Bind [ ✔ ]
[3/23] Configuring ClamAV [ ✔ ]
[4/23] Configuring Dovecot [ ✔ ]
[5/23] Configuring Firewalld [ ✔ ]
[6/23] Configuring MySQL [ ✔ ]
[7/23] Configuring NTP [ ✔ ]
[8/23] Configuring Net ─────── Error: No interface named macaddress found
Error
-----
No interface named macaddress found
-----
▣▣▣ Cleaning up
[WARNING] The following errors occurred during installation:
â Postinstall configuration returned an error.
[WARNING] The last few lines of the log file were:
[2018/08/09 16:19:55] [INFO] - Code: 0 Result: success
[2018/08/09 16:19:55] [INFO] - Code: 0 Result: success
[2018/08/09 16:19:56] [INFO] - Code: 0 Result: success
[2018/08/09 16:19:56] [INFO] - Code: 0 Result: Warning: ZONE_ALREADY_SET: public
success
[2018/08/09 16:19:56] [INFO] - Succeeded
[2018/08/09 16:19:56] [INFO] - Configuring MySQL
[2018/08/09 16:20:00] [INFO] - Succeeded
[2018/08/09 16:20:00] [INFO] - Configuring NTP
[2018/08/09 16:20:01] [INFO] - System clock source is kvm-clock, skipping NTP
[2018/08/09 16:20:01] [INFO] - Succeeded
[2018/08/09 16:20:01] [INFO] - Configuring Net
[2018-08-09 16:20:01 UTC] [DEBUG] Cleaning up temporary files in /tmp/.virtualmin-1197.
[2018-08-09 16:20:01 UTC] [WARNING] The following errors occurred during installation:
[2018-08-09 16:20:01 UTC] [WARNING] The last few lines of the log file were:
EDIT: The problem seems to happen only when private networking is enabled on DigitalOcean.
This is the netplan config of the offending droplet (eth1):
network:
  version: 2
  ethernets:
    eth0:
      addresses:
      - 142.93.118.130/20
      - 2604:A880:0400:00D1:0000:0000:088D:1001/64
      - 10.10.0.5/16
      gateway4: 142.93.112.1
      gateway6: 2604:a880:0400:00d1:0000:0000:0000:0001
      match:
        macaddress: ee:79:ef:d2:d2:7b
      nameservers: &id001
        addresses:
        - 67.207.67.3
        - 67.207.67.2
        search: []
      set-name: eth0
    eth1:
      addresses:
      - 10.136.108.102/16
      match:
        macaddress: 86:68:26:48:4e:93
      nameservers: *id001
      set-name: eth1
Wondering if anything could be done about this?
[EDITED]
Did some further tests. On DigitalOcean droplets, with Private Networking disabled, installation completes successfully. However, connection is still lost upon a reboot.
Noted that my /etc/netplan/50-cloud-init.yaml config is changed by Virtualmin as follows:
network:
  version: 2
  ethernets:
    eth0:
      addresses: ['/32']
      gateway4: 142.93.112.1
      gateway6: 2604:a880:0400:00d1:0000:0000:0000:0001
      nameservers:
        addresses: [127.0.0.53,127.0.0.1]
Running netplan apply returns an error stating that the format of 'addresses' is wrong.
With private networking enabled, installation doesn't even complete. Modifying 50-cloud-init.yaml to remove eth1 allows installation to complete, but upon rebooting, I lose access again.
Fresh install. I'm using LEMP stack. Working like a charm. Thank you!
Just one minor issue... after replacing MySQL with MariaDB, I told apparmor about MariaDB to %$#@ off.
ln -s /etc/apparmor.d/usr.sbin.mysqld /etc/apparmor.d/disable/
I have just tried to install on Azure Cloud Ubuntu 18.04...it falls over at step 9...
[9/23] Configuring Net error: No interface named match found.
What does this mean? I also dont have a virtualmin log in /var/log there is a virtualmin license in /etc/ but not virtualmin (webmin and usermin are both present)
Is my issue to do with Ubuntu 18 or something i have done wrong with Azure?
edit...i just rolled an Ubuntu 16.04 vm on Azure with identical resource and network settings and it worked perfectly.
Just installed on 18.04 on my Worldstream server and all looks ok, but if I reboot the system, it doesn't work. I get the error "ERR_TUNNEL_CONNECTION_FAILED". Any idea?
I did another test on Digital Ocean and this time I saved a backup of the file /etc/netplan/50-cloud-init.yaml before installing Virtualmin.
I also temporarily set a short password for root so as to make it easier to enter it manually to log in from the VNC console that does not support copying / pasting.
After installing Virtualmin and rebooting (and confirming the server was unreachable), I logged in from the Digital Ocean VNC console and comparing the new 50-cloud-init.yaml file with the backup I saw that the Virtualmin file had the following entry:
addresses: ['/32']
while the backup had:
addresses:
- MY_DROPLET_IP_ADDRESS/20
- 10.10.0.6/16
so I copied the addresses: lines from the backup and after "netplan apply" and reboot the server is now accessible.
By the way, the original 50-cloud-init.yaml also shows additional lines such as "macaddress" while the Virtualmin one does not (not sure how important they are), but just resetting the addresses lines seems to be enough to regain connectivity.
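As an aside, the broken addresses: ['/32'] entry described in this thread is easy to detect mechanically. Here is a sketch using Python's standard ipaddress module; valid_cidr_entries is a hypothetical helper, not part of Virtualmin or netplan:

```python
import ipaddress

# Hypothetical helper: split a netplan-style address list into valid and
# invalid CIDR entries.
def valid_cidr_entries(addresses):
    valid, invalid = [], []
    for entry in addresses:
        try:
            # ip_interface accepts "address/prefix" strings, IPv4 or IPv6,
            # and raises ValueError for malformed entries such as "/32".
            ipaddress.ip_interface(entry)
            valid.append(entry)
        except ValueError:
            invalid.append(entry)
    return valid, invalid

ok, bad = valid_cidr_entries(["142.93.118.130/20", "10.10.0.6/16", "/32"])
print(ok)   # ['142.93.118.130/20', '10.10.0.6/16']
print(bad)  # ['/32']
```

A check like this, run before writing the file out, would have flagged the corrupted config instead of leaving the server unreachable after reboot.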
If I launch do-release-upgrade on 16.04 LTS, add your new repo and update, should I expect Webmin to work? And Cloudmin? Thanks!
The old network configuration works. The new netplan does not (at least not for everyone and not with every configuration).
I don't know whether release-upgrade would switch the network to netplan (but, I doubt it).
It's not a mystery what's not working (and I think the next Webmin will fix it for everyone). Netplan is a completely new (and very complex) network configuration system, and support for it is still spotty and buggy in Webmin. If you want to install/run Virtualmin, you probably need to use the old network configuration system. How you get there is up to you (though I think upgrading from 16.04 would do it, it's probably simpler to just switch the network config on an 18.04 system).
--
By the way in 18.04 I found javascript-common.conf enabled for Apache which aliases /javascript to /usr/share/javascript/ thus breaking a number of sites. "a2disconf javascript-common" should fix this nuisance.
Is it possible for Virtualmin to ask us if we want to just "switch Ubuntu 18 over to use the old network configuration system" ????
(I would click yes if prompted)
We're very close to a new release of Webmin that'll fix the remaining issues (I hope/think). I wouldn't want to invest time implementing a network config switcher, as it would basically need to understand both the old and new network config systems...which is what we're having a hard time implementing, already!
--
There's a new Webmin in the repos that Jamie believes fixes all of the known netplan issues. Give it another try, if you're waiting for Ubuntu 18.04 support. I'll be doing some testing today, as well.
--
Hi Joe, thanks for the update.
Unfortunately, still receiving the following error when installing on DigitalOcean:
Can you tell me the exact settings you used when creating your Droplet? I've been testing on Digital Ocean and I'm not seeing this error. I'd like to setup an environment where this error happens so I can let Jamie interact with it.
Edit: Nevermind, I haven't reproduced this exact error, but I see how it could happen, probably, and I'm seeing the root cause and I've handed off a server to Jamie that's exhibiting the problem. Should be able to sort it soon.
--
Hi Joe, just wondering if there is any update in this regard? would love to give 18.04 another go. Thanks! :)
Private networking, ipv6 and backups enabled.
Not sure if this is related but I installed virtualmin on 18.04 and all seems to be fine except LetsEncrypt certificates are not installing. It seems that the necessary files in .well-known are not being created so the verification callback fails.
You wouldn't have an .htaccess file with mod_rewrite on in your public_html dir?
nothing at all, it's a totally empty domain.
If you add a file in the public_html directory, you can access it through the domain name?
That side of things is fine, I've tested. The problem is that the /.well-known/ directory and the necessary files within it are simply not generated at all. Is there any way I can find logs to give me some idea of what is happening?
OK I've found the letsencrypt logs in /var/log. And I found this line
2018-09-12 00:12:34,535:DEBUG:certbot.plugins.webroot:Attempting to save validation to /home/prod/public_html/.well-known/acme-challenge/0tn42AAtDk8wWBtjH6EBC6_b7q_XITel5oURR6a-Dms
But no error message and no file is ever created.
I'm just following up to say that I eventually tracked down the issue and it is unrelated to 18.04 - it was tied to the fact that there was an IPv6 IP on that domain and Virtualmin had not set the host to respond on IPv6. This was combined with the fact that certbot now deletes the challenge files after use - something that my old CentOS machines don't do - so I was mistakenly thinking the files were never created. In fact, they were created then deleted.
sudo do-release-upgrade
Checking for a new Ubuntu release
Get:1 Upgrade tool signature [819 B]
Get:2 Upgrade tool [1,258 kB]
Fetched 1,259 kB in 0s (0 B/s)
authenticate 'bionic.tar.gz' against 'bionic.tar.gz.gpg'
extracting 'bionic.tar.gz'
Reading package lists... Done
Building dependency tree
Reading state information... Done
Hit xenial InRelease
Get:1 xenial-backports InRelease [107 kB]
Hit xenial InRelease
Get:2 xenial-updates InRelease [109 kB]
Get:3 xenial-updates InRelease [109 kB]
Get:4 xenial-security InRelease [107 kB]
Get:5 xenial-backports InRelease [107 kB]
Get:6 xenial-security InRelease [107 kB]
Hit xenial InRelease
Hit xenial InRelease
Hit virtualmin-xenial InRelease
Ign sarge InRelease
Hit virtualmin-universal InRelease
Hit sarge Release
Fetched 646 kB in 0s (0 B/s)
Reading package lists... Done
Building dependency tree
Reading state information... Done
Updating repository information
Third party sources disabled
Some third party entries in your sources.list were disabled. You can re-enable them after the upgrade with the 'software-properties' tool or your package manager.
To continue please press [ENTER]
Get:1 bionic InRelease [242 kB]
Get:2 bionic-updates InRelease [88.7 kB]
Get:3 bionic-backports InRelease [74.6 kB]
Get:4 bionic-security InRelease [83.2 kB]
Get:5 bionic/main amd64 Packages [1,019 kB]
Get:6 bionic/main i386 Packages [1,007 kB]
Get:7 bionic/main Translation-en [516 kB]
Get:8 bionic/restricted amd64 Packages [9,184 B]
Get:9 bionic/restricted i386 Packages [9,156 B]
Get:10 bionic/restricted Translation-en [3,584 B]
Get:11 bionic/universe amd64 Packages [8,570 kB]
Get:12 bionic/universe i386 Packages [8,531 kB]
Get:13 bionic-security/main amd64 Packages [167 kB]
Get:14 bionic/universe Translation-en [4,941 kB]
Get:15 bionic/multiverse amd64 Packages [151 kB]
Get:16 bionic/multiverse i386 Packages [144 kB]
Get:17 bionic/multiverse Translation-en [108 kB]
Get:18 bionic-updates/main amd64 Packages [322 kB]
Get:19 bionic-updates/main i386 Packages [286 kB]
Get:20 bionic-updates/main Translation-en [122 kB]
Get:21 bionic-updates/universe amd64 Packages [192 kB]
Get:22 bionic-updates/universe i386 Packages [192 kB]
Get:23 bionic-updates/universe Translation-en [90.1 kB]
Get:24 bionic-updates/multiverse amd64 Packages [4,180 B]
Get:25 bionic-updates/multiverse i386 Packages [4,336 B]
Get:26 bionic-updates/multiverse Translation-en [2,740 B]
Get:27 bionic-backports/universe amd64 Packages [2,704 B]
Get:28 bionic-backports/universe i386 Packages [2,704 B]
Get:29 bionic-backports/universe Translation-en [1,136 B]
Get:30 bionic-security/main i386 Packages [132 kB]
Get:31 bionic-security/main Translation-en [62.9 kB]
Get:32 bionic-security/universe amd64 Packages [66.6 kB]
Get:33 bionic-security/universe i386 Packages [66.5 kB]
Get:34 bionic-security/universe Translation-en [39.2 kB]
Get:35 bionic-security/multiverse amd64 Packages [1,444 B]
Get:36 bionic-security/multiverse i386 Packages [1,608 B]
Get:37 bionic-security/multiverse Translation-en [996 B]
Fetched 27.3 MB in 0s (0 B/s)
Checking package manager
Reading package lists... Done
Building dependency tree
Reading state information... Done
Calculating the changes
Calculating the changes
No candidate ver: linux-image-4.13.0-36-generic
No candidate ver: linux-image-4.13.0-38-generic
No candidate ver: linux-image-4.13.0-39-generic
No candidate ver: linux-image-4.13.0-41-generic
No candidate ver: linux-image-4.13.0-43-generic
No candidate ver: linux-image-4.13.0-45-generic
No candidate ver: linux-image-extra-4.13.0-36-generic
No candidate ver: linux-image-extra-4.13.0-38-generic
No candidate ver: linux-image-extra-4.13.0-39-generic
No candidate ver: linux-image-extra-4.13.0-41-generic
No candidate ver: linux-image-extra-4.13.0-43-generic
No candidate ver: linux-image-extra-4.13.0-45-generic
Do you want to start the upgrade?
3 installed packages are no longer supported by Canonical. You can still get support from the community.
16 packages are going to be removed. 226 new packages are going to be installed. 625 packages are going to be upgraded.
You have to download a total of 530 M. This download will take about 1 hour 7 minutes with a 1Mbit DSL connection and about 20 hours with a 56k modem.
Fetching and installing the upgrade can take several hours. Once the download has finished, the process cannot be canceled.
Continue [yN] Details
Install: autopoint binutils-common binutils-x86-64-linux-gnu btrfs-progs cpp-7 dh-autoreconf dirmngr e2fsprogs-l10n fdisk firebird3.0-common firebird3.0-common-doc fontconfig fonts-droid-fallback fonts-noto-mono g++-7 gcc-7 gcc-7-base gcc-8-base geoip-database gnupg-l10n gnupg-utils gpg gpg-agent gpg-wks-client gpg-wks-server gpgconf gpgsm ibverbs-providers imagemagick-6-common libalgorithm-c3-perl libarchive-cpio-perl libasan4 libass9 libassuan0 libauthen-sasl-perl libavcodec57 libavdevice57 libavfilter6 libavformat57 libavresample3 libavutil55 libb-hooks-endofscope-perl libb-hooks-op-check-perl libbind9-160 libbinutils libbluray2 libboost-filesystem1.65.1 libboost-iostreams1.65.1 libboost-system1.65.1 libcairo2 libcdio-cdda2 libcdio-paranoia2 libcdio17 libchromaprint1 libclass-c3-perl libclass-c3-xs-perl libclass-data-inheritable-perl libclass-method-modifiers-perl libcom-err2 libcrypt-openssl-bignum-perl libcrypt-openssl-rsa-perl libcryptsetup12 libcurl4 libdata-optlist-perl libdatrie1 libdevel-callchecker-perl libdevel-caller-perl libdevel-globaldestruction-perl libdevel-lexalias-perl libdevel-stacktrace-perl libdist-checkconflicts-perl libdns-export1100 libdns1100 libdynaloader-functions-perl libegl-mesa0 libegl1 libemail-date-format-perl libeval-closure-perl libevent-2.1-6 libevent-core-2.1-6 libexception-class-perl libext2fs2 libfastjson4 libfcgi-bin libgbm1 libgcc-7-dev libgdbm-compat4 libgdbm5 libgdk-pixbuf2.0-0 libgdk-pixbuf2.0-bin libgdk-pixbuf2.0-common libgl1 libglvnd0 libglx-mesa0 libglx0 libhunspell-1.6-0 libibverbs1 libicu60 libidn2-0 libio-socket-ssl-perl libip4tc0 libip6tc0 libipc-shareable-perl libiptc0 libisc-export169 libisc169 libisccc160 libisccfg160 libisl19 libjson-c3 libksba8 libldap-common libllvm3.9 liblog-dispatch-perl liblwres160 libmagic-mgc libmail-dkim-perl libmailtools-perl libmime-lite-perl libmime-types-perl libmodule-implementation-perl libmodule-runtime-perl libmpfr6 libmpg123-0 libmpx2 libmro-compat-perl libmysofa0 
libnamespace-autoclean-perl libnamespace-clean-perl libnet-smtp-ssl-perl libnl-route-3-200 libnpth0 libnss-systemd libopendkim11 libopenjp2-7 libopenmpt0 libpackage-stash-perl libpackage-stash-xs-perl libpadwalker-perl libpango-1.0-0 libpangocairo-1.0-0 libpangoft2-1.0-0 libparams-classify-perl libparams-util-perl libparams-validationcompiler-perl libperl5.26 libpixman-1-0 libpng16-16 libpostproc54 libprocps6 libpsl5 libpython3.6 libpython3.6-minimal libpython3.6-stdlib libreadline7 libreadonly-perl libref-util-perl libref-util-xs-perl librole-tiny-perl librsvg2-2 librsvg2-common librubberband2 libruby2.5 libsdl2-2.0-0 libsndio6.1 libspecio-perl libstdc++-7-dev libsub-exporter-perl libsub-exporter-progressive-perl libsub-identify-perl libsub-install-perl libsub-name-perl libsub-quote-perl libswresample2 libswscale4 libtfm1 libthai-data libthai0 libtommath1 libtry-tiny-perl libunistring2 libva-drm2 libva-x11-2 libva2 libvariable-magic-perl libvorbisfile3 libvpx5 libwayland-client0 libwayland-cursor0 libwayland-egl1-mesa
:
Continue [yN] Details [d
(if you use code blocks, your text will be much easier to read!!)
This text: < code > sample < /code > (without the spaces)
Like this:
sample
sample
sample
sample
Installed 18.04 on Rackspace Cloud Server yesterday. Failed after reboot on 5 successive attempts.
Tried to update machine first, ie apt update / apt upgrade, then reboot. Reboot was good. Did install. No errors. Could connect via GUI fine.
After install: apt update
All packages are up to date.
N: Usage of apt_auth.conf(5) should be preferred over embedding login information directly in the sources.list(5) entry for ''
N: Usage of apt_auth.conf(5) should be preferred over embedding login information directly in the sources.list(5) entry for ''
Post installation wizard "lost connection to server" for about 10 seconds after "next" click on the "memory use" screen.
Virtualmin now gives me a warning
Warning!
Recent package updates (such as a new kernel version) require a reboot to be fully applied.
Reboot and all connection from outside of Rackspace fails. I can still connect from the Rackspace console.
I'm curious about the conversion of interfaces. Here is the example.
In that example, interface B defines a method returning an interface A, and struct C implements a method returning a pointer to struct CA, which implements interface A.
I wonder why Go can't deduce interface B from C.
If interface A, B and struct C, CA are in different packages, then the only way to make it possible is either:
- refine the method C.Get to func Get() A, which means I need to import the definition of interface A from the other package and introduce more dependency, or
- refine the method Get of C and B to func Get() interface{}.
Can anyone give me some hint or clue?
--
You received this message because you are subscribed to the Google Groups "golang-nuts" group.
To unsubscribe from this group and stop receiving emails from it, send an email to golang-nuts+unsubscribe@googlegroups.com.
For more options, visit.
|
.d.ts generator
Generates a single
.d.ts bundle containing external module declarations exported from TypeScript module files.
What does this mean?
If you have a project with lots of individual TypeScript files that are designed to be consumed as external modules,
the TypeScript compiler doesn’t allow you to actually create a single bundle out of them. This package leverages the
TypeScript language services in TypeScript 1.4+ to generate a single
.d.ts file containing multiple
declare module 'foo' declarations. This allows you to distribute a single
.d.ts file along with your compiled
JavaScript that users can simply reference from the TypeScript compiler using a
/// <reference path /> comment.
.d.ts generator will also correctly merge non-external-module files, and any already-existing
.d.ts files.
Usage
npm install dts-generator
Generate your d.ts bundle:
Programmatically:
require('dts-generator')({ name: 'package-name', project: '/path/to/package-directory', out: 'package-name.d.ts' });
Command-line:

```bash
dts-generator --name package-name --project /path/to/package-directory --out package-name.d.ts
```
Grunt:
module {grunt;grunt;};
Reference your generated d.ts bundle from somewhere in your consumer module and import away!:
///
import Foo = require('package-name/Foo');
// ...
## Options

* `baseDir?: string`: The base directory for the package being bundled. Any dependencies discovered outside this directory will be excluded from the bundle. *Note* this is no longer the preferred way to configure `dts-generator`, please see `project`.
* `excludes?: string[]`: A list of glob patterns, relative to `baseDir`, that should be excluded from the bundle. Use the `--exclude` flag one or more times on the command-line. Defaults to `[ "node_modules/**/*.d.ts" ]`.
* `externs?: string[]`: A list of external module reference paths that should be inserted as reference comments. Use the `--extern` flag one or more times on the command-line.
* `files: string[]`: A list of files from the baseDir to bundle.
* `eol?: string`: The end-of-line character that should be used when outputting code. Defaults to `os.EOL`.
* `indent?: string`: The character(s) that should be used to indent the declarations in the output. Defaults to `\t`.
* `main?: string`: The module ID that should be used as the exported value of the package’s “main” module.
* `moduleResolution?: ts.ModuleResolutionKind`: The type of module resolution to use when generating the bundle.
* `name: string`: The name of the package. Used to determine the correct exported package name for modules.
* `out: string`: The filename where the generated bundle will be created.
* `project?: string`: The base directory for the project being bundled. It is assumed that this directory contains a `tsconfig.json` which will be parsed to determine the files that should be bundled as well as other configuration information like `target`.
* `target?: ts.ScriptTarget`: The target environment for generated code. Defaults to `ts.ScriptTarget.Latest`.

## Known issues

* Output bundle code formatting is not perfect yet

## Thanks

[@fdecampredon]() for the idea to dump output from the compiler emitter back into the compiler parser instead of trying to figure out how to influence the code emitter.

## Licensing

© 2015 SitePen, Inc. New BSD License.
How do I retrieve the Emitter's number of particles (with their corresponding PSR) at a given frame using Xpresso?
I tried dragging the emitter to the xpresso editor but I couldn't see such data but I could be wrong.
Regards,
Ben
P.S. I'm referring to the emitter object and not the thinking particles.
I understand there is a documentation about particles but there is no sample code so I'm a bit lost on how to use it.
Hello @bentraje,
thank you for reaching out to us. The pages you discovered apply when one overrides ObjectData.ModifyParticles; they do not contain any particle emitter or amount information. The number of emitted particles for an emitter is exposed via ParticleObject::GetParticleCount(). ParticleObject has never been exposed to Python, but one can fall back on the raw data access that works in many such cases. The particle information is stored in a Tparticle VariableTag. Once we know the stride, i.e., the data size of a single particle in a particle tag, we can produce this:
import c4d
import struct

def GetParticleCount(emitter):
    """Returns the particle count for an emitter object.
    """
    if (not isinstance(emitter, c4d.BaseObject) or
            not emitter.CheckType(c4d.Oparticle)):
        return 0
    tag = emitter.GetTag(c4d.Tparticle)
    buffer = tag.GetLowlevelDataAddressW()
    items = int(len(buffer) / 88)
    count = 0
    for index in range(items):
        index *= 88
        bits = int(struct.unpack("B", buffer[index + 80:index + 81])[0])
        count += (bits != 0)
    return count

def main():
    """
    """
    print(f"{op.GetName()} has {GetParticleCount(op)} particles.")

# Execute main()
if __name__ == '__main__':
    main()
Cheers,
Ferdinand
Hi @ferdinand
Thanks for the response.
RE: import struct
I'm guessing this is the built-in struct library and not the maxon.Struct?
RE: /88
Is there documentation on the list of parameters? I'm guessing 88 refers to the particle number? It would be nice to have a list of them for reference, like the PSR.
no, struct has nothing to do with maxon.Struct and 88 is not the particle count. Your question is effectively out of scope of support since it is about Python libraries and some computer science concepts. I did provide a "brief" explanation below. Please note that this is not a commitment of us to do this regularly, this is just a special case since we effectively mentioned the topic. But in the end, we cannot provide support for learning these concepts.
Thank you for your understanding,
Ferdinand
import c4d
import struct
def GetParticleCount(emitter):
    """Returns the particle count for an emitter object.

    Args:
        emitter (any): The entity to check for being an emitter and its
         particle count.

    Returns:
        int: The number of particles that have been emitted. c4d.NOTOK if no
         particle data can be found for 'emitter'.

    References:
        [1] -
        [2] -
    """
    # Making sure that emitter is of type Oparticle and has a Tparticle tag
    # attached to it.
    if (not isinstance(emitter, c4d.BaseObject) or
            not emitter.CheckType(c4d.Oparticle)):
        return c4d.NOTOK
    tag = emitter.GetTag(c4d.Tparticle)
    if tag is None:
        return c4d.NOTOK

    # Get a memoryview of the particle tag data. memoryview is a type [1] of
    # Python which represents a block of memory. It is similar in purpose and
    # functionality to the older bytes and bytearray types. With S22 and
    # prior, Cinema 4D used its own types to pass blocks of memory to the
    # user in Python. With R23 and the new Python 3 core, Cinema does use
    # memoryview instead. GetLowlevelDataAddressW() returns a memoryview
    # of the raw particle data stored in the particle tag.
    buffer = tag.GetLowlevelDataAddressW()

    # 88 is not the number of particles in the statement below, but the stride
    # of a block of memory.
    #
    # Imagine a single precision, three component float vector type, e.g.,
    # c4d.Vector. It is composed out of three floating point values with
    # 4 bytes, 32 bits, each. In memory these are then just 3 * 32 bits in a
    # row. In a pseudo c-style we can define the type like that:
    #
    #   type Vector
    #   {
    #       float x, y, z;
    #   }
    #
    # An instance of that type in memory looks then like this. N is the start
    # of an instance of type Vector at a memory address, and the computer
    # knows that it has a total length of 3 * 4 bytes, i.e., 12 bytes, since
    # we said so in our type declaration by saying it has three float fields.
    #
    #   Address      : N, ...................., N + M
    #   Values (hex) : 0000 0000 0000 0000 0000 0000
    #   Field        : x         y         z
    #
    # A block of four digits in hex is 16 bit; we could also write this in
    # binary, i.e., as a number with 16 digits, but it is more convenient to
    # read memory in this hexadecimal form as a human (unless you are Neo,
    # then binary is also game :) ).
    #
    # This area of memory then simply contains what makes up an instance of
    # a Vector, three floats. The first two blocks, 0000 0000, are where our
    # first float x is being stored, since each block is 16 bit and our float
    # is single precision, i.e., 32 bits, i.e., two blocks. The 3rd and 4th
    # block then make up y, and the 5th and 6th z.
    #
    # This principle can then be extended to more complex types (imagine a
    # matrix type and breaking it down into atomic types) and, most
    # importantly, also to arrays. So we can then have a block of memory
    # which is an array of our Vector instances. The size of the elements in
    # this array is then often called the stride.
    #
    # Area of memory containing an array of three Vector instances. There are
    # of course no brackets and commas, I added them just for readability:
    #
    #   [[4B, 4B, 4B], [4B, 4B, 4B], [4B, 4B, 4B]]
    #     x,  y,  z      x,  y,  z      x,  y,  z
    #
    # So this memory structure would have a stride of 12 (bytes), i.e., a
    # single element in it, a Vector, is 12 bytes. Its total length would be
    # 36 bytes (3 * 12). The memoryview type is such a block of memory
    # containing the data for all particles in the particle system, an array
    # of particles. 88 is the stride of that block of memory: a single
    # particle is composed out of 88 bytes of data. So the variable items
    # calculated below is the THEORETICAL amount of particles the system
    # currently holds.
    items = int(len(buffer) / 88)

    # So why THEORETICAL? We now loop over this block of memory with our
    # stride.
    count = 0
    for index in range(items):
        index *= 88
        # And unpack the data for the 81st byte in that particle we are
        # currently looking at into an integer. The Python library struct [2]
        # has nothing to do with maxon.Struct, but is a raw memory access
        # library. In this case we cast the binary to "B", which is an
        # unsigned char, effectively an integer in Python.
        byte = int(struct.unpack("B", buffer[index + 80:index + 81])[0])
        # Now we increment the counter for the particles when this byte is
        # not zero. What we unpacked there was the particle flag. When the
        # flag is PARTICLEFLAGS_NONE == 0, then we will not count it, since
        # the particle is then neither visible nor alive.
        count += (byte != 0)

    # So the final count is then only the amount of particles which are
    # either visible, alive, or both.
    return count


def main():
    """
    """
    print(f"{op.GetName()} has {GetParticleCount(op)} particles.")


# Execute main()
if __name__ == '__main__':
    main()
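As an aside, the stride arithmetic described above can be tried outside of Cinema 4D with nothing but the standard struct module. The 12-byte record of three floats below is a made-up layout for illustration only (not the real 88-byte particle record):

```python
import struct

# Hypothetical record layout: three little-endian 4-byte floats (x, y, z),
# so the stride is 12 bytes. This is NOT the real particle layout, just an
# illustration of the stride idea.
STRIDE = 12
FMT = "<3f"

def iter_records(buffer, stride, fmt):
    """Yield one unpacked tuple per fixed-size record in a flat buffer."""
    for offset in range(0, len(buffer) - stride + 1, stride):
        yield struct.unpack_from(fmt, buffer, offset)

# Pack three "vectors" into one flat 36-byte block, then read them back.
data = b"".join(struct.pack(FMT, i, i + 1.0, i + 2.0) for i in range(3))
records = list(iter_records(data, STRIDE, FMT))
print(records[0])  # (0.0, 1.0, 2.0)
```

Replacing the 12-byte float triple with an 88-byte layout and unpacking a single byte at offset 80 gives exactly the flag test used in the snippet above.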
RE: Please note that this is not a commitment of us to do this regularly . .
I understand. Thanks for the detailed explanation and background theory for the answer.
Have a nice day ahead!
Will close this thread now.
Q: Java help — "DecimalFormat cannot be resolved to a type". I have spent a while trying to fix this.

A: Based on just the code you've shown us, including the line

import java.text.DecimalFormat;

at the beginning of the file should fix this problem. You first need to import the DecimalFormat class, then create a format object:

import java.text.DecimalFormat;

public class Assign31 {
    public static void main(String[] args) {
        // Problem 1
        // declare variables
        DecimalFormat speedPattern = new DecimalFormat("00.00");
        int gearRadius = 100;
        double pi = Math.PI;
        // ...
    }
}

Only one import statement is needed to fix the error. The convention for Java class names says they have a capital letter at the beginning ("DecimalFormat", not "Decimalformat"), so that's an easy way to spot the error, even if you're not using an IDE.

A note on grouping separators: the grouping size is the number of digits between the grouping separators, such as 3 for "100,000,000" or 4 for "1 0000 0000". There are actually two different grouping sizes: one used for the least significant integer digits (the primary grouping size), and one used for all others (the secondary grouping size). For example, if the primary grouping interval is 3 and the secondary is 2, then this corresponds to the pattern "#,##,##0", and the number 123456789 is formatted as "12,34,56,789".
I am trying to assign an int value to each letter of the alphabet using a std::map with std::string keys, but I get this error:
F:\Programming\korki\BRUDNOPIS\main.cpp|14|error: invalid user-defined conversion from 'char' to 'const key_type& {aka const std::basic_string&}' [-fpermissive]|
#include <iostream>
#include <string>
#include <cstdlib>
#include <map>

using namespace std;

int main()
{
    std::map <std::string,int> map;
    map["A"] = 1;
    int x;
    std:: string word = "AAAAAA";
    x = map[word[3]];
    cout << x;
    return 0;
}
I am trying to assign an int type value to each letter in the latin alphabet using std::map.
So you have to use char (instead of std::string) as the key of the map; something like:
#include <iostream>
#include <string>
#include <map>

int main()
{
    std::map<char, int> map;
    map['A'] = 1;
    int x;
    std::string word = "AAAAAA";
    x = map[word[3]];
    std::cout << x << std::endl;
    return 0;
}
As observed by others, you're trying to use a char as a key for a std::map where the key is a std::string. And there isn't an automatic conversion from char to std::string.
A little off-topic suggestion: avoid giving a variable the same name as a type, like the std::map that you've named map. It's legal but confusion-prone.
Tutorial: Create the Keyboard application
In this tutorial, we'll show you how to use the glview library and OpenGL ES to capture and process input from the keyboard of your BlackBerry 10 device and to render graphics to the screen.
The app also shows you how to:
- show or hide the touch screen keyboard
- change the keyboard layout
- rotate the displayed square
- change the rotation speed of the square
If you are using a device with a physical keyboard, such as the BlackBerry Q10, the touch screen keyboard doesn't apply. The Keyboard sample app lets you use the physical keyboard to:
- rotate the displayed square
- change the rotation speed of the square
You will learn to:
- Configure your project
- Show the touch screen keyboard
- Process keyboard input
Before you begin
- The BlackBerry 10 Native SDK
- Your BlackBerry 10 device or simulator
- A basic understanding of the C language and some experience running apps with the Native SDK
Configure your project
To import the complete project:
- From the NDK-Samples repository in GitHub, download and extract the sample app.
- Start the Momentics IDE for BlackBerry.
- On the File menu, click Import.
- Expand General, then select Existing Projects into Workspace. Click Next.
- Browse to the location where you extracted the sample app and click OK.
- Click Finish to import the project into your workspace.
Using the keyboard
The event handling framework for BlackBerry 10 is BlackBerry Platform Services (BPS). This framework allows you to register for and receive events from the underlying OS. The events cover things such as the virtual keyboard, screen, and device sensors. In this sample, we use the BPS library to set up the virtual keyboard.
To use any virtual keyboard function, we must include the appropriate header file at the beginning of main.c:
#include <bps/virtualkeyboard.h>
This application draws a colored square on the screen and uses OpenGL ES to update and render the graphics. The Glview library provided with the BlackBerry 10 Native SDK makes it easy to develop apps with OpenGL ES. It provides an execution loop as well as API functions for an application to register callbacks at different points of execution. We use Glview to set up and run the application.
#include <glview/glview.h>
In main(), all we need to do is register three callback functions with Glview and then call glview_loop().
int main(int argc, char *argv[])
{
    glview_initialize(GLVIEW_API_OPENGLES_11, &render);
    glview_register_initialize_callback(&initialize);
    glview_register_event_callback(&event);
    return glview_loop();
}
Now let's take a closer look at the callbacks.
Initialization callback
We register the initialization callback to initialize the square and display the virtual keyboard. This callback is called before Glview enters the main loop.
glview_register_initialize_callback(&initialize);
In initialize(), we request virtual keyboard events so that we know when someone presses a key.
virtualkeyboard_request_events(0);
Next, we display the virtual keyboard on the screen:
virtualkeyboard_show();
The initialization code then uses calls to OpenGL ES functions to set up the display area, the background color and smooth shading, and initialize a simple orthographic projection for rendering:
unsigned int surface_width, surface_height;
glview_get_size(&surface_width, &surface_height);

glShadeModel(GL_SMOOTH);
glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
Display callback
We need to register a callback to render the square. This callback is required for using Glview and is called frequently to refresh the graphics to ensure the smooth rotation of the square.
Our display callback is render() and we register it using glview_initialize(). We also specify the use of OpenGL ES version 1.1:
glview_initialize(GLVIEW_API_OPENGLES_11, &render);
To draw the square, we initialize an array of vertices and an array of colors for the vertices at the beginning of main.c:
static const GLfloat vertices[] = {
    -0.25f, -0.25f,
     0.25f, -0.25f,
    -0.25f,  0.25f,
     0.25f,  0.25f
};

static const GLfloat colors[] = {
    1.0f, 0.0f, 1.0f, 1.0f,
    1.0f, 1.0f, 0.0f, 1.0f,
    0.0f, 1.0f, 1.0f, 1.0f,
    0.0f, 1.0f, 1.0f, 1.0f
};
We also define a variable angle, which denotes the rotation angle for the square:
static float angle = 0.0;
Our callback must first call glClear() to clear the color buffer:
glClear(GL_COLOR_BUFFER_BIT);
Then it enables the GL_VERTEX_ARRAY state and provides an array of vertices that describe our simple square:
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, vertices);
Similarly, it enables the GL_COLOR_ARRAY state and provides an array that defines the color of each of our vertices:
glEnableClientState(GL_COLOR_ARRAY);
glColorPointer(4, GL_FLOAT, 0, colors);
Next, it adds a rotation for the vertical axis and then renders the square. The rotation angle is initialized to 0 so that when the square is rendered initially, it doesn't rotate. In the event callback code, the value of angle changes according to the keyboard input we receive.
glRotatef(angle, 0.0f, 1.0f, 0.0f);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
Finally, we disable all client states used by the current rendering logic as it's generally a good practice to do so. This simple step can save you a lot of time in front of the debugger as EGL code is typically difficult to debug.
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_COLOR_ARRAY);
Event callback
To handle user input from the screen, we register an event callback, in which we check each incoming event and respond to it. When a user presses a key, for example, a screen event is generated and Glview invokes the event callback we register.
glview_register_event_callback(&event);
The event callback handles two types of events: virtual keyboard events and screen events. When receiving a virtual keyboard event, it sets the keyboard_visible flag accordingly. It checks this flag later to determine whether to display the virtual keyboard.
if (virtualkeyboard_get_domain() == domain) {
    switch (code) {
    case VIRTUALKEYBOARD_EVENT_VISIBLE:
        keyboard_visible = true;
        break;
    case VIRTUALKEYBOARD_EVENT_HIDDEN:
        keyboard_visible = false;
        break;
    }
}
When a user touches anywhere on the screen or presses a key, a screen event is sent to the application. On detecting a touch on the screen, we display the virtual keyboard if it isn't already visible.
switch (screen_val) {
case SCREEN_EVENT_MTOUCH_TOUCH:
    if (!keyboard_visible) {
        virtualkeyboard_show();
    }
    break;
When a key is pressed, we receive a keyboard event and perform the desired action. For example, we can change the layout of the keyboard so that it's more suited to some function the user wants to perform, such as sending an email or entering a phone number.
case SCREEN_EVENT_KEYBOARD:
    screen_get_event_property_iv(screen_event, SCREEN_PROPERTY_KEY_FLAGS, &screen_val);
    if (screen_val & KEY_DOWN) {
        screen_get_event_property_iv(screen_event, SCREEN_PROPERTY_KEY_SYM, &screen_val);
        fprintf(stderr, "The '%c' key was pressed\n", (char)screen_val);
        switch (screen_val) {
        case KEYCODE_I:
            // Display the email layout with "Send" enter key
            virtualkeyboard_change_options(VIRTUALKEYBOARD_LAYOUT_EMAIL, VIRTUALKEYBOARD_ENTER_SEND);
            break;
        case KEYCODE_O:
            // Display the phone layout with "Connect" enter key
            virtualkeyboard_change_options(VIRTUALKEYBOARD_LAYOUT_PHONE, VIRTUALKEYBOARD_ENTER_CONNECT);
            break;
        case KEYCODE_P:
            // Display the default layout with default enter key
            virtualkeyboard_change_options(VIRTUALKEYBOARD_LAYOUT_DEFAULT, VIRTUALKEYBOARD_ENTER_DEFAULT);
            break;
On the screen, these options display the email layout with a Send enter key, the phone layout with a Connect enter key, and the default layout, respectively.
The fprintf() statement in the example above sends the keyboard character that was pressed to the console, in case you want to debug the application later.
We can also hide the virtual keyboard, which is useful when you want to use the full screen to display something. The user can pop up the virtual keyboard again by touching the screen.
case KEYCODE_H:
    // Hide the keyboard
    virtualkeyboard_hide();
    break;
Next, we specify how to rotate the squares using the a or z keys.
#define ANGLE_INCREMENT 3.0f
#define CIRCLE_DEGREES 360.0f
The rotation angle is incremented each time a is pressed, and decremented each time z is pressed.
case KEYCODE_A:
    // Increment rotation angle
    angle = fmod(angle + ANGLE_INCREMENT, CIRCLE_DEGREES);
    break;
case KEYCODE_Z:
    // Decrement rotation angle
    angle = fmod(angle - ANGLE_INCREMENT, CIRCLE_DEGREES);
    break;
Recall that the Glview library calls our display callback render() to refresh the graphic display. The incremented or decremented value of the angle variable causes the square to rotate at an increased or decreased speed.
glRotatef(angle, 0.0f, 1.0f, 0.0f);
That's it! You can now build and run your application. Try playing with the keyboard to see how the application changes or changing the event callback to map different functions to different keys. You can also explore other callbacks that Glview supports and try adding a new callback!
Last modified: 2015-03-31
I have this regular fetch code that returns an object called json. I want to be able to use the object in another function, so I was trying to turn it into a constant, but it won't let me do it.
import {fetch} from 'wix-fetch';

// ...
fetch("", {method: "get"})
    .then((httpResponse) => {
        if (httpResponse.ok) {
            return httpResponse.json();
        } else {
            return Promise.reject("Fetch did not succeed");
        }
    })
    .then(json =>
        // I AM ABLE TO USE JSON HERE BUT IT WON'T LET ME DECLARE A CONST
        console.log(json)
    )
    .catch(err => console.log(err));

export function Button_click(event, $w) {
    // I want to use the object here
}
Declare a global variable like let jsonContent; then, where the console.log(json) is, do jsonContent = json; after that you can get the content from the global variable. Cheap trick.
Another solution would be to wrap the fetch in a function that returns the json to the outside function.
On Fri, 16 Jan 2009 02:51:43 -0500, Ken Pu wrote:

> Hi, below is the code I thought should create two generators, it[0] =
> 0,1,2,3,4,5, and it[1] = 0,10,20,30,..., but they turn out to be the
> same!!!

[...]

> I see what Python is doing -- lazy evaluation doesn't evaluate
> (x+(i*10) for x in count()) until the end. But is this the right
> behaviour? How can I get the output I want:
> [0, 1, 2, 3, 4]
> [10, 11, 12, 13, 14]

The solution I would use is:

itlist = [0, 0]
for i in range(2):
    itlist[i] = (lambda i: (x + (i * 10) for x in count()))(i)

Or pull the lambda out of the loop:

itlist = [0, 0]

def gen(i):
    return (x + (i * 10) for x in count())

for i in range(2):
    itlist[i] = gen(i)

-- Steven
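Both workarounds bind the loop variable i at call time instead of leaving it to be looked up when the generator is finally consumed. A minimal, self-contained sketch of the same effect (using range(5) instead of the unbounded itertools.count so the generators are finite):

```python
def make_gens_broken(n):
    # Every generator closes over the same loop variable i; by the time the
    # generators are consumed, i holds its final value, so they all agree.
    return [(x + i * 10 for x in range(5)) for i in range(n)]

def make_gens_fixed(n):
    # Calling a function binds i per call, exactly like the gen(i) fix above.
    def gen(i):
        return (x + i * 10 for x in range(5))
    return [gen(i) for i in range(n)]

broken = [list(g) for g in make_gens_broken(2)]
fixed = [list(g) for g in make_gens_fixed(2)]
print(broken)  # [[10, 11, 12, 13, 14], [10, 11, 12, 13, 14]]
print(fixed)   # [[0, 1, 2, 3, 4], [10, 11, 12, 13, 14]]
```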
Hello
After some advice from a few more experienced members on this board I decided to scrap my poor attempt at a chess engine and start over , this time using OOP.
I am currently having an issue with using enum variables and objects.
Please excuse my ignorance before hand , I come from a background of embedded programming.
Basically I have a class ChessGame for the game itself , a class ChessPiece and a class for each of the types of Pieces (king , queen , bishop etc) which extends ChessPiece.
First of all, why do the individual piece classes extend ChessPiece? Because I use an enum type for the color and rank of the piece, and Java doesn't allow me to declare it in each class separately.
Now, when I start the interface, I create a new instance of the ChessGame. In that instance of the ChessGame I want to create instances of all the pieces on the board, meaning 2 kings, 2 queens, 16 pawns, etc.
Can someone tell me why this is giving me an error? The line where I make an instance of a king.
I've tried making an instance of the king from the class, constructor, main, and anywhere else I could think of, but it's giving me an error on the enum types (WHITE, RANK): cannot find symbol: variable WHITE.
Why is it not working? The King class extends the ChessPiece class where the enum types are declared.
public class ChessGame {

    King kingWhite = new King(WHITE, KING, 0, 4);

    private int chosenSquareXCoordinate;
    private int chosenSquareYCoordinate;
    //...........................................
The constructor in the King class:
public King(Shade color, Rank rank, int piecePositionX, int piecePositionY) {
    super();
    isAlive = true;        // Whether the piece is still on the board
    this.color = color;    // Color of the piece
    this.rank = rank;      // Type of piece
    this.piecePositionX = piecePositionX;
    this.piecePositionY = piecePositionY;
}
and the Chesspiece Class where the enum is declared:
enum Shade { WHITE, BLACK }

enum Rank { PAWN, ROOK, KNIGHT, BISHOP, QUEEN, KING, BLANK }

public class ChessPiece {

    Shade color;  // color of the piece
    Rank rank;    // Type of piece

    public ChessPiece() {
    }
    // ....................
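For what it's worth, enum constants in Java are scoped to their enum type, so inside ChessGame they generally need to be qualified as Shade.WHITE and Rank.KING (or pulled into scope with import static). Python's enum module follows the same qualification rule, sketched here as a neutral illustration (the Shade and Rank classes below mirror the Java ones, not any real library):

```python
from enum import Enum

class Shade(Enum):
    WHITE = 0
    BLACK = 1

class Rank(Enum):
    PAWN = 0
    KING = 5

# Bare names like WHITE are not in scope; members are attributes of the type.
king_color = Shade.WHITE
king_rank = Rank.KING
print(king_color.name, king_rank.name)  # WHITE KING
```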
Nepomuk-Core
#include <Nepomuk2/Tag>
Detailed Description
A Tag can be assigned to any Thing.
This allows simple grouping of resources. Each Tag is identified by its label, which should be unique.
Definition at line 38 of file tag.h.
Constructor & Destructor Documentation
Member Function Documentation
Retrieve a list of all available Tag resources.
This list consists of all resource of type Tag that are stored in the local Nepomuk meta data storage and any changes made locally. Be aware that in some cases this list can get very big.
In those cases it might be better to use the asynchronous approach via Query::QueryServiceClient and a Query::ResourceTypeTerm with type Soprano::Vocabulary::NAO::Tag().
Definition at line 100 of file tag.
[How To] Building components with Quasar
This is a more detailed write up from my post here:
This is still work in progress and some topics are missing. If some information is wrong, or I am missing on something, just let me know and I will update this post accordingly.
Building reusable components with Quasar
Vue.js greatly encourages the use of components to encapsulate reusable code and therefore DRY up your code.
Most of the time Vue components are distributed as so-called “Single File Components”. Single file components have the .vue file extension and allow writing the JS code, template, and style in the same file. These files are then put into a build system like webpack and vue-loader, which will transform the template into a render function and extract the styles into a CSS file.
Most of Quasars components are also distributed as single file components, you can check out their source here.
Extending components
Quasar is a framework and therefore provides building blocks to build your own Apps on top of it. But often the question arises how one could use the already existing Quasar components to build own components.
The first thing to notice is that Vue.js favors composition over inheritance.
Inheritance is a concept known from object-oriented programming, where a class extends another class to reuse its methods and attributes in building a new but similar class. Composition is also known from object-oriented programming, but here, instead of extending or overwriting an existing class, a class uses other classes to provide some common services.
Mixins
Mixins allow reusing certain features that you need in a set of components to not repeat yourself writing that code over and over.
To define a mixin one has to export an object that looks similar to a normal component. Other components now can use this mixin to implement the mixin functionality.
For example, let's say we need to call a register method on a lot of different components. This method calls an API and returns some identifier that should be stored in the data object of the component.
First, let us define the RegisterMixin:
export const RegisterMixin = {
  data () {
    return {
      id: ''
    }
  },
  methods: {
    register () {
      // Lets assume we extracted the AJAX call to the Registration class
      new Registration()
        .register()
        .then(response => {
          this.id = response.id
        })
    }
  },
  created () {
    this.register()
  }
}
Now that we have defined the mixin, we can use it on any other component and it will be mixed in the component attributes.
import { RegisterMixin } from './registerMixin'

export default {
  mixins: [RegisterMixin]
}
A component can use as many mixins as it likes.
But be aware how Vue merges the options. You can read more about mixins in the Vue docs.
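The mixin pattern is not unique to Vue; most languages have an equivalent. As a neutral sketch of the same idea (Python rather than JavaScript; RegisterMixin and Widget are made-up names for illustration):

```python
class RegisterMixin:
    """Reusable behavior that can be mixed into otherwise unrelated classes."""

    def register(self):
        # Hypothetical registration: store an id on the instance instead of
        # calling a real API.
        self.id = f"{type(self).__name__}-registered"
        return self.id

class Widget(RegisterMixin):
    # Gains register() without reimplementing it.
    pass

w = Widget()
print(w.register())  # Widget-registered
```

As in Vue, the mixin contributes behavior while the consuming class stays in control of its own interface.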
Quasar uses mixins for some of its internal functionality. For example, the RouterLinkMixin which allows adding link functionality to different components.
But as great as mixins are, you can not use another single file component as mixin because only the attributes are mixed in and not the template or style definition.
Let’s assume we want to build a component called MySelect which behaves a bit differently from QSelect.
If we would write the following code:
import { QSelect } from 'quasar'

export default {
  mixins: [QSelect]
}
We would end up with a component that has all the internal methods and data from QSelect but no template at all. So we would have to go to the source of QSelect and copy the whole template definition. This would only work until QSelect gets updated and you forget to update the template as well. Even if you only update minor versions it could break, because you are not only relying on the external interface of QSelect, which is described in the docs, but also on the internal code, which normally one shouldn’t have to care about.
But how do we build our own MySelect component?
Custom select component
Let’s take an example from the forum. Someone asked how to build a component that hides some of the props passed to QSelect. Specifically, he wanted to build a select component which always has the filter prop set to true and always applies a default filter-placeholder.
A simple implementation of this component could look like this:
<template>
  <q-select
    :value="value"
    :options="options"
    filter
    filter-placeholder="select"
    @change="handleChange"
  />
</template>

<script>
import { QSelect } from 'quasar'

export default {
  props: ['value', 'options'],
  methods: {
    handleChange (newVal) {
      this.$emit('input', newVal)
    }
  },
  components: { QSelect }
}
</script>
Because v-model="foo" is just syntactic sugar for :value="foo" @input="foo = $event.target.value", we can define a property value on our new component, which is then passed as value to the inner QSelect. We then listen for the change event on the QSelect, which indicates that the value has changed. If we receive such an event, we emit an input event from our new component and pass the new value as a parameter.
Now we can use the component like this:
<template>
  <my-select v-model="selected" :options="myOptions" />
</template>

<script>
import MySelect from './MySelect'

export default {
  data () {
    return {
      selected: null,
      myOptions: []
    }
  },
  components: { MySelect }
}
</script>
And this would render a QSelect with filter set to true and filter-placeholder set to “select”.
But if we wanted to set other properties on the internal QSelect, we would have to define all of them on our own component and pass them through to QSelect.
Pinpad component
Another user asked how to build a custom component, which is again a good example of how to use composition to create new components. He wanted to build a Pinpad component.
We can achieve that simply by using QBtns aligned on a flexbox grid:

<template>
  <div>
    <div v-for="row in 3" :key="row" class="row">
      <div v-for="col in 3" :key="col" class="col">
        <q-btn @click="handleClick((row - 1) * 3 + col)">
          {{ (row - 1) * 3 + col }}
        </q-btn>
      </div>
    </div>
  </div>
</template>

<script>
import { QBtn } from 'quasar'

export default {
  data () {
    return { pin: '' }
  },
  methods: {
    handleClick (digit) {
      this.pin += digit
    }
  },
  components: { QBtn }
}
</script>
This gives us a whole new component by using existing components.
We could now even extend this component with other Quasar components, like a QInput to allow for manually entered pins.
How to style custom components
Styling custom components is easy: just declare your styles in the <style></style> section of your component.
But what if we want our styles to be consistent and be able to change them in a single place?
Quasar uses Stylus variables for that purpose.
If you want to use some of the variables in your own components you can just import them like so:
<template>...</template> <script>...</script> <style lang="stylus"> @import '~src/themes/app.variables.styl' </style>
Now you can use all the variables like colors or breakpoints in your own component.
Todos
- Explain slots
- Explain component communication
- Static components
- Directives
- Quasar Utils
- spectrolite
- rstoenescu (Admin)
@a47ae said in [How To] Building components with Quasar:
Most of Quasars components are also distributed as single file components, you can check out their source here.
Link is broken … no major just thought I mention.
- minimalicious
I tried to follow this guide, but I think the Quasar API has changed so much that it doesn’t work anymore.
I managed to make a wrapped q-select that works and might be helpful if you are wrapping Quasar (or Vue) components.

I wanted a q-select that would have basic filtering enabled, i.e. it could replace q-select without having to set up extra code for filtering.
<ex-select v-model.number="product.supplierId" label="Supplier" :options="supplierListSorted" option-value="supplierId" option-label="companyName" emit-value map-options filled />
Vue components can use v-bind="$attrs" to bind any attributes that do not match the component props, e.g. if you use <ex-select filled ...> then the filled attribute will be passed down to the wrapped component automatically.

The same applies for v-on="$listeners", where any event handlers will be passed down to the wrapped component. v-model binds the input event, which is why it works without any extra code.
The props “options”, “optionValue” and “optionLabel” are declared because I want to use them in the component code. So if you need to work with an attribute then you need to declare it and add it manually to the wrapped component.
There is code to work with lists that have a key value and display value e.g. supplierId and supplierName. If optionLabel is set, then filter on that property on the object. Otherwise assume the array just contains string values and filter directly.
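The filtering logic described above can be sketched in plain JavaScript, outside of Vue (filterOptions is a hypothetical helper written for illustration, not code from the post):

```javascript
// Filter an options list the way the wrapped q-select does:
// if optionLabel is set, match against that property of each object;
// otherwise treat the array as plain strings. An empty value resets
// the list to a copy of the original options.
function filterOptions(options, optionLabel, value) {
  if (value === "") {
    return options.slice(); // reset to the full list
  }
  const input = value.toLowerCase();
  return options.filter((item) => {
    if (optionLabel) {
      // search the display property of each object
      return item[optionLabel].toLowerCase().indexOf(input) > -1;
    }
    // plain string list
    return item.toLowerCase().indexOf(input) > -1;
  });
}
```

With optionLabel set to "companyName" it matches against that property; with no optionLabel it filters plain strings.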
Here is the full ExSelect.vue code
<template>
  <q-select
    v-bind="$attrs"
    v-on="$listeners"
    :options="selectOptions"
    use-input
    @filter="handleFilter"
  >
    <template v-slot:no-option>
      <q-item>
        <q-item-section>
          No results
        </q-item-section>
      </q-item>
    </template>
  </q-select>
</template>

<script>
export default {
  name: "ExSelect",
  // eslint-disable-next-line
  props: ["options", "optionValue", "optionLabel"],
  data() {
    return {
      selectOptions: null
    };
  },
  updated() {
    if (!this.selectOptions) {
      console.log("[ExSelect] updated options is ", this.options);
      // keep a copy of the original options list
      this.selectOptions = this.options.slice();
    }
  },
  methods: {
    handleFilter(value, doneFunc) {
      console.log("[ExSelect] handleFilter", value);
      if (value === "") {
        doneFunc(() => {
          // reset the list
          console.log("[ExSelect] handleFilter reset list", value);
          this.selectOptions = this.options.slice();
        });
        return;
      }
      doneFunc(() => {
        const input = value.toLowerCase();
        console.log("[ExSelect] handleFilter filtering", value);
        this.selectOptions = this.options.filter((item) => {
          if (this.optionLabel) {
            // search the display property
            return item[this.optionLabel].toLowerCase().indexOf(input) > -1;
          }
          return item.toLowerCase().indexOf(input) > -1;
        });
      });
    }
  }
};
</script>
I’m still learning Vue so there are probably better ways to do some things but might help out someone trying to do a similar thing.
- metalsadman
@minimalicious great work. Just to add: you don’t need to declare the props that are accepted by q-select, they are accessible via $attrs. I.e. instead of props: ["options", "optionValue", "optionLabel"] you can access them with this.$attrs.options, this.$attrs['option-value'] and this.$attrs['option-label'] respectively.
- minimalicious
Oh thanks, that is a handy shortcut to the attributes.
Source: https://forum.quasar-framework.org/topic/696/how-to-building-components-with-quasar/4
Hi,
I am a newbie and have been learning this language for only 3 months as part of a university degree. I can't figure out the problem with this code.

It is basically a simple program which tells a user how many letters he has entered. I have debugged the program and it all works fine until it reaches the for loop inside the string_length_func(string_data) function. Please help as I am stuck quite badly.

Cheers
Code:

#include <stdio.h>

#define MAX_SIZE 10

int string_length_func(char string_data[]); /* func prototype */

int main(void)
{
    /* Declare a character array to store the string in */
    char string_data[MAX_SIZE];
    int length_word;

    /* Ask the user for a string */
    printf("Enter a word: ");
    scanf("%9s", string_data);

    /* calling a function */
    length_word = string_length_func(string_data);
    printf("This word contains %d letters", length_word);

    return 0;
}

int string_length_func(char string_data[])
{
    int element_num = 0;

    /* here lies the problem as i have checked with debugger
       and all worked fine until here. */
    for (element_num = 0; ((element_num < MAX_SIZE) && !(element_num == 0)); element_num++)
        if (string_data[0] == '\0')
            printf("no letter was entered");

    return (element_num);
}
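For reference, a hedged sketch of a fix (not from the thread itself): the loop condition (element_num < MAX_SIZE) && !(element_num == 0) is false on the very first pass, because element_num starts at 0, so the loop body never runs. Counting characters until the terminating '\0' behaves as intended:

```c
#include <assert.h>
#include <stdio.h>

#define MAX_SIZE 10

/* Count characters up to the terminating '\0' (or MAX_SIZE at most). */
int string_length_func(const char string_data[])
{
    int element_num = 0;

    while (element_num < MAX_SIZE && string_data[element_num] != '\0')
        element_num++;

    if (element_num == 0)
        printf("no letter was entered\n");

    return element_num;
}
```

The '\0' check now looks at string_data[element_num] rather than always at string_data[0], so the loop actually advances through the word.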
Source: http://cboard.cprogramming.com/c-programming/73838-program-error-counting-length-string.html
Hey everyone- I'm new here and this is my first forum post so I hope this is in the right place!
I would appreciate all the help I can get:
I'm trying to make a Java program to find the volume of a sphere, but I can't seem to use the user's input because I'm getting an error that "keyboard.readLine()" does not represent a real integer. I've attached my code below. I'm sure it's poorly written, but I wouldn't know how to properly write it until I fix the input problem. I'd like to use the user's input in the volume equation which follows. Thanks everyone!
import java.io.*; // Stores information between user and computer.
public class VolumeOfSphere
{
public static void main (String[] args) throws IOException
{
String radius;
DataInputStream keyboard = new DataInputStream(System.in);
System.out.println("What is the radius of the sphere: ");
radius = keyboard.readLine();
volume = ( 4.0 / 3.0 ) * Math.PI * Math.pow( radius, 3 );
System.out.println("The volume of the sphere is: " + volume);
}
}
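A hedged sketch of a fix (not part of the original post): the radius read from the keyboard is a String, so it must be parsed to a number before it can be used in Math.pow, and volume needs a declared type. Scanner is the usual replacement for the long-deprecated DataInputStream.readLine():

```java
import java.util.Scanner;

public class VolumeOfSphere {

    // Volume of a sphere: (4/3) * pi * r^3
    static double volumeOfSphere(double radius) {
        return (4.0 / 3.0) * Math.PI * Math.pow(radius, 3);
    }

    public static void main(String[] args) {
        Scanner keyboard = new Scanner(System.in);
        System.out.println("What is the radius of the sphere: ");
        double radius = keyboard.nextDouble(); // parse the input as a number
        System.out.println("The volume of the sphere is: " + volumeOfSphere(radius));
    }
}
```

Pulling the formula into its own method also makes the arithmetic easy to check independently of the console input.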
Source: http://www.javaprogrammingforums.com/%20java-theory-questions/30331-user-input-output-math-calculations-java-printingthethread.html
In early February of this year, I happened to see a tweet:
The author of Fluent Python had exciting news: he is writing the second edition!

If there were a vote for the best advanced Python book, this one would certainly be among the most popular. When I first wrote the “Python Cat recommendation series”, I wanted to recommend it, but I thought good things should be kept until last, so I have been putting it off until now.

If you’ve read it, you’ll certainly agree it’s worth recommending; if you haven’t, read on and see whether my introduction can move you to make it a must-read.

The English title of this book is Fluent Python, and it was published in August 2015. Two years later, Turing Education in China produced a translation, published in May 2017, with a Douban score of 9.4. (Translating and publishing a book is a long process.)

Luciano Ramalho is a Brazilian, a veteran Python programmer and speaker, and a member of the PSF (Python Software Foundation). The technical reviewers and endorsers of the book include a number of big names in the community.

As soon as the book was published, it was highly praised. Publishers all over the world have licensed it one after another; at present there are at least nine language versions:

PS: the picture is from @fluentpython; the simplified Chinese version, the thinnest of them, coincidentally occupies the C position. According to Turing Education, sales of the simplified Chinese version exceed 40,000 copies and are expected to surpass the English version in 2020.
So, what does this book really write about? What are the special features?
The book is packed with content: apart from the preface, appendices and glossary, it is divided into six parts and 21 chapters. I made a mind map of the core chapters:

(Reply “fluent” to the Python Cat official account for the complete HD source image.)

The above is the mind map of the main chapters. The numbers in the map are the numbers of collapsed branches.
Let’s take a look at some details:
The original picture is too big to show here. Reply “fluent” to the Python Cat official account for the complete HD source image, plus PDF and Markdown versions.

As can be seen from the chapters, this book is aimed at advanced developers. It does not cover entry-level content, but focuses on the data model, data structures, functions as objects, object-oriented idioms, control flow and metaprogramming.

In the first chapter, the author implements a deck of playing cards from scratch in a few dozen lines of Python:
import collections

Card = collections.namedtuple('Card', ['rank', 'suit'])

class FrenchDeck:
    ranks = [str(n) for n in range(2, 11)] + list('JQKA')
    suits = 'spades diamonds clubs hearts'.split()

    def __init__(self):
        self._cards = [Card(rank, suit) for suit in self.suits
                                        for rank in self.ranks]

    def __len__(self):
        return len(self._cards)

    def __getitem__(self, position):
        return self._cards[position]
Then it points out the core topic of the book: the data model, made up of various special methods.

Special methods are names like __xxx__(), written with leading and trailing double underscores. Usually called magic methods or dunder methods, they are a distinctive design of Python.
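As a hedged illustration of what such dunder methods buy (a sketch in the spirit of the book's early examples, not quoted from it): implementing __repr__, __add__ and __abs__ makes a small class cooperate with print(), + and abs() exactly like built-in types do.

```python
class Vector:
    """A tiny 2D vector driven entirely by dunder methods."""

    def __init__(self, x, y):
        self.x, self.y = x, y

    def __repr__(self):
        # called by print() and the interactive prompt
        return f'Vector({self.x!r}, {self.y!r})'

    def __add__(self, other):
        # called by the + operator
        return Vector(self.x + other.x, self.y + other.y)

    def __abs__(self):
        # called by the built-in abs()
        return (self.x ** 2 + self.y ** 2) ** 0.5

print(Vector(3, 4) + Vector(1, 1))  # Vector(4, 5)
print(abs(Vector(3, 4)))            # 5.0
```

The class never defines a method the caller invokes by name; the interpreter routes the standard syntax to the dunder methods, which is the behavioural consistency described below.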
The data model is undoubtedly the core of the Python language and the cornerstone of the so-called Pythonic style. Everything in Python is an object, and the data model is the interface specification for those objects. Because of this, Python achieves strong consistency of behaviour.

Fluent Python starts with the data model and sets the tone for the whole book: pay attention to how Python objects are constructed and to the details of the language, so that readers write more idiomatic, concise, efficient, readable and usable code.
The author of fluent Python and Chinese version
Next, the book covers the features of some built-in types (sequence types, mapping types, text and bytes), functions as first-class objects and general object usage, and control flow (iterators, generators, context managers, coroutines and concurrent programming), before finally diving into metaprogramming (descriptors and metaclasses), often called black magic.

The book runs to more than 600 pages, full of rich content that repeatedly gives you the feeling of having learned something new, along with the urge to go and study some topic further.

Many readers share the same impression: its “further reading / soapbox” sections are not idle writing. On the contrary, some of them are more interesting than the main text. The author shows off his wealth of knowledge (official documents, community lore, syntax evolution, articles and videos, open source projects, differences between languages, and so on), and every chapter is worth reading. No other Python book can match it in this respect.

I recommend finding the chapters you are interested in. Some people have also made very good (and very long) reading notes; I put them here: (by hongweipeng) (by Maodong)

The first edition of Fluent Python was based on Python 3.4, the latest version at the time. Over the years Python has kept evolving, officially announcing the end of Python 2 and rapidly advancing to the latest 3.9.

However, since the author focuses on the core concepts of Python and explores features that are essentially stable, there is no need to worry much about outdated content. It is still a book highly worth buying and reading.

I am looking forward to the second edition, but I also know that writing takes time, and English publication, Chinese translation and Chinese publication all take time too, so let's wait for the good news.
Source: https://developpaper.com/if-only-one-python-book-is-recommended-i-want-to-pick-it/
On Mon, 2006-05-08 at 23:15 +0200, Roman Zippel wrote:
> The point is to give the _clock_ control over this kind of stuff, only the
> clock driver knows how to deal with this efficiently, so as long as you
> try to create a "dumb" clock driver you're going to make things only
> worse.
> It's not about moving everything into the clock driver here, it's about
> creating a _powerful_ API, which leaves control in the hands of the clock
> driver, but at the same time keeps them as _simple_ (and not as dumb) as
> possible.

Part of my concern here is keeping the code manageable and hackable. What you're suggesting sounds very similar to what we have for the i386 timer_opts code, which I don't want to duplicate.

Maybe it would help here if you would better define the API for this abstraction. As it stands, it appears incomplete. Define the state values stored, and list what interfaces modify which state values, etc.

ie: clock->get_nsec_offset(): What exactly does this measure? Is this strictly defined by the state stored in the clocksource? If not, how will this interact w/ dynamic ticks? What is the maximum time we can delay interrupts before the clock wraps? How do we know if it's tick based?

Another issue: if get_nsec_offset() is clock specific in its implementation, but the state values it uses in at least the common case are modified by generic code, how can we implement something like the ppc lock free read? That will affect both the generic code and the clock specific code.

> > What arch specific optimizations for continuous clocks do you have in
> > mind? In other words, what would be an example of an architecture
> > specific optimization for generating time from a continuous counter?
>
> The best example is the powerpc gettimeofday.
>
> > For the sake of this discussion, I claim that optimizations made on
> > converting a continuous cycle based clock to an NTP adjusted time can be
> > made to all arches, and pushing the nanosecond conversion into the
> > driver is messy and needless. What are examples contrary to this claim?
>
> What kind of NTP adjustments are you talking about? A nsec_offset function
> could look like this:
>
> unsigned long nsec_offset(cs)
> {
>         return ((cs->xtime_nsec + get_cycles() - cs->last_offset) * cs->mult) >> SHIFT;
> }
>
> This is fucking simple, what about this is "messy"? There is no NTP
> adjustment here, this is all happening somewhere else.

That's my point. If nsec_offset is opaque, then what is the interface for making NTP adjustments to that function? Are all nsec_offset functions required to use the xtime_nsec, last_offset, and mult values?

> Keeping it in the
> driver allows to make parameter constant, skip unnecessary steps and
> allows to do it within 32bit. This is something you can _never_ do in a
> generic centralized way without making it truly messy. I'd be happy to be
> proven otherwise, but I simply don't see it.

Well, my issue w/ your desire to have a 32bit continuous clock is that I don't know how useful it would be. For a 1MHz clock, you've got 1,000 ns per cycle and the algorithm looks something like, say:

        get_nsec_offset: cycles * 1,000 >> 0

which gives you ~4 seconds of leeway, which is pretty good. However, the short term jitter is 1000ppm. So let's bump up the SHIFT value:

        get_nsec_offset: cycles * 1,024,000 >> 10

The short term jitter: <1ppm, so that's good. But the max cycles is then ~4,000 (4ms), which is too short. You can wiggle around here and maybe come up with something that works, but it only gets worse for clocks that are faster.

For robust timekeeping using continuous cycles, I think the 64bit mult in gettimeofday is going to be necessary in the majority of cases (we can wrap it in some sort of arch optimized macro for a mul_lxl_ll or something if possible).

> > > The first step would be to keep it separate from the current
> > > update_wall_time() code. I just got rid of clock read in the hrtimer
> > > interrupt code you are about to introduce it again here. Many clocks don't
> > > need that much precision, and especially if it's an expensive operation
> > > it's a complete waste of time.
> >
> > With continuous cycle based counters, the clock read is *necessary* when
> > updating xtime for robust timekeeping. We can move update_wall_time so
> > we don't run it every timer interrupt, but we cannot keep correct time
> > by just guessing how much time has passed and adding it in.
>
> It has almost nothing to do with continuous cycles. On an UP system only
> anything running with a higher priority than the timer interrupt or if the
> cycle adjustment happens asynchron to the timer interrupt (e.g. TSC) can
> see the adjustment. Again it depends on the clock whether the common
> adjustment is significant bigger than the time needed to read the clock,
> otherwise it's just a waste of time.

Eh? I didn't understand that. Mind clarifying/expanding here?

thanks
-john
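The mult/shift tradeoff discussed in the message can be sketched numerically. This is a hedged illustration with made-up values for a 1 MHz clock; cyc2ns is a hypothetical helper name, not actual kernel code:

```c
#include <assert.h>
#include <stdint.h>

/* ns = (cycles * mult) >> shift, the conversion discussed above.
 *
 * shift = 0, mult = 1000: the rate can only be trimmed in steps of
 * one whole ns-per-cycle (1000 ppm), but a 32-bit cycles*mult
 * product lasts ~4 s at 1 MHz before it wraps.
 *
 * shift = 10, mult = 1024000: sub-ppm rate trimming, but a 32-bit
 * product would overflow after about 2^32 / 1024000 ~= 4194 cycles
 * (~4 ms at 1 MHz) -- hence the argument for a 64-bit multiply. */
static uint64_t cyc2ns(uint64_t cycles, uint32_t mult, uint32_t shift)
{
    return (cycles * mult) >> shift;
}
```

Both parameter pairs map 5 cycles to 5000 ns; what changes is the granularity of NTP-style adjustment and the headroom before overflow.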
Source: http://lkml.org/lkml/2006/5/8/216
Migrating OOP Libraries and Frameworks to PHP 5.3.
For instance, we've been doing things like the following in Zend Framework:
Zend_Controller_Request_Abstract
Zend_View_Interface
These conventions make it really easy to find Abstract classes and Interfaces
using
find or
grep, and also are predictable and easy to understand.
However, they won't play well with namespaces. Why? Consider the following:
namespace Zend::Controller::Request;

class Http extends Abstract
{
    // ...
}
Spot the problem?
Abstract is a reserved word in PHP. The same goes for interfaces. Consider this particularly egregious example:
namespace Zend::View;

abstract class Abstract implements Interface
{
    // ...
}
We've got two reserved words there: Abstract and Interface.
Stas, Dmitry, and I sat down to discuss this a
few weeks ago to come up with a plan for migrating to PHP 5.3. In other OOP
languages, such as Python, C#, interfaces are denoted by prefixing the
interface with a capital 'I'; in the example above, we would then have
Zend::View::IView. We decided this would be a sane step, as it would keep the
interface within the namespace, and visually denote it as well. We also decided
that this convention made sense for abstract classes:
Zend::View::AView. So,
our two examples become:
namespace Zend::Controller::Request;

class Http extends ARequest
{
    // ...
}
and:
namespace Zend::View;

abstract class AView implements IView
{
    // ...
}
Another thing that looks likely to affect OOP libraries and frameworks is autoloading, specifically when using exceptions. For instance, consider this:
namespace Foo::Bar;

class Baz
{
    public function status()
    {
        throw new Exception("This isn't what you think it is");
    }
}
You'd expect the exception to be of class
Foo::Bar::Exception, right? Wrong;
it'll be a standard
Exception. To get around this, you can do the following:
namespace Foo::Bar;

class Baz
{
    public function status()
    {
        throw new namespace::Exception("This is exactly what you think it is");
    }
}
By using the
namespace keyword, you're telling the PHP engine to explicitly
use the Exception class from the current namespace. I also find this to be more
semantically correct — it's more explicit that you're throwing a particular
type of exception, and makes it easy to find and replace these with alternate
declarations at a later date.
I'd like to recommend other libraries adopt similar standards — they're sensible, and fit already within PEAR/Horde/ZF coding standards. What say you?
Source: https://mwop.net/blog/181-Migrating-OOP-Libraries-and-Frameworks-to-PHP-5.3.html
Package Details: ros-indigo-desktop-full 1.1.4-1
Dependencies (7)
- ros-indigo-desktop
- ros-indigo-perception
- ros-indigo-simulators
- cmake (cmake-git) (make)
- git (git-git) (make)
- ros-build-tools (ros-build-tools-py3) (make)
- ros-indigo-catkin (make)
Latest Comments
emersonjr commented on 2016-11-14 18:06
Thank you @mimoralea
it worked for me too.
In my environment, I had the problem @jberhow was having.

The last change was:
`sudo mv /usr/bin/qmake /usr/bin/qmake.bk`
`sudo ln -s /usr/bin/qmake-qt4 /usr/bin/qmake`
Everything went smooth. :D
mimoralea commented on 2016-10-26 22:35
Okay, it was the version of qmake that was running.
`which qmake` will show you which qmake is in your path.
You have to make sure you are using qmake for qt4. I had qt3, qt4 and qt5 installed, as well as an Anaconda binary for qmake. To resolve this issue in my environment I did:
`mv /home/mimoralea/anaconda3/bin/qmake /home/mimoralea/anaconda3/bin/qmake.bk`
`sudo mv /usr/bin/qmake /usr/bin/qmake.bk`
`sudo ln -s /usr/bin/qmake-qt4 /usr/bin/qmake`
Make sure to revert those changes later if you want to.
mimoralea commented on 2016-10-26 22:24
Same error as @jberhow, any word on how to solve this issue? @bchretien? Anyone?
jberhow commented on 2016-10-16 10:15
Getting fatal error: QWidget: No such file or directory
#include <QWidget>
when it hits the ros-indigo-rqt stuff. It seems it's a problem between QT4 and QT5, but I don't know how to resolve it.
bchretien commented on 2016-03-05 11:04
@joaocandre: ok it should be fixed now. Both python-rospkg and python2-rospkg provide the same binary, hence the possible conflict, but they can also be used as Python modules, and in this case, the Python 2 module is required.
joaocandre commented on 2016-03-04 14:23
@bchretien the particular package is `ros-indigo-dynamic-reconfigure`. I think `python-rospkg` or `python2-rospkg` is listed as a dependency, as I have it installed, but the error still persists.
bchretien commented on 2016-03-04 00:48
@joaocandre: which ROS package is failing? If it does rely on rospkg, it should be in its dependencies, and if it's not listed as a dependency, it's something we'd need to report upstream.
joaocandre commented on 2016-03-03 23:54
Getting `ImportError: No module named rospkg` error when compiling dependencies. I tried the solution proposed at, but to no avail.
bchretien commented on 2015-08-10 20:52
@AbdealiJK: which tutorial are you referring to?? Also, you can now use the up-to-date community ogre package (1.9), incompatibility was fixed a while back for RViz and Gazebo.
AbdealiJK commented on 2015-08-10 19:28
I'm trying to install ROS in Arch - and I was able to get this package installed. After that I used the following in my bashrc - and run `rosenv` before running ros commands.
When I do `catkin_make` in one of my workspaces - I get the following error:
I've installed also (as suggested on the wiki page in ROS+Arch)
Any help on how to fix this ?
Source: https://aur.tuna.tsinghua.edu.cn/packages/ros-indigo-desktop-full/
I'll keep this short and sweet, mostly because I don't have a lot of time to write a paragraph per question like I usually do.
1: What is the point of properties in a class?
2: Why do people sometimes inherit from the object class? I mean something like this:

class stuff(object):

Typing dir(object) revealed its documentation, which wasn't helpful.
3: Sometimes, I see something like this:
def func(*args, **kwargs):
Or something of that similar nature. What are the args and the kwargs?
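On question 1, a minimal sketch (not from the thread): a property lets a method be accessed like an attribute, so a class can add validation or computation without changing the caller's syntax.

```python
class Circle:
    def __init__(self, radius):
        self._radius = radius

    @property
    def radius(self):
        # read as circle.radius, no parentheses needed
        return self._radius

    @radius.setter
    def radius(self, value):
        # validation runs on plain attribute assignment
        if value < 0:
            raise ValueError("radius must be non-negative")
        self._radius = value

c = Circle(2)
c.radius = 5       # goes through the setter
print(c.radius)    # 5
```

Attempting c.radius = -1 raises ValueError, which an ordinary attribute could not enforce.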
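And on question 3, a minimal illustration (again not from the thread): *args collects any extra positional arguments into a tuple, and **kwargs collects any extra keyword arguments into a dict.

```python
def func(*args, **kwargs):
    # args is a tuple of positional arguments,
    # kwargs is a dict of keyword arguments
    return args, kwargs

print(func(1, 2, colour='red'))  # ((1, 2), {'colour': 'red'})
```

This is how a function can accept any number of arguments it does not name explicitly.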
What is hard is making code that accepts different and sometimes unexpected types of input and still works.
This is what truly takes a large amount of effort on a developer's part.
Source: https://forum.audiogames.net/post/427203/karmaplus/e16cf3d89c1dedaae4c13100e6648670420d2e88/
Thomas Wittek <[EMAIL PROTECTED]> writes:

> Steffen Schwigon schrieb:
>> At least the many keywords seem to be necessary to map the complexity
>> of different paradigms possible in Perl6. Multimethods are not just
>> overloading as in C++. Second, the different keywords declare
>> different behaviour you can choose. Just read S06, it's explained
>> quite understandable.
>
> Hm, but wouldn't those be equivalent?
>
>     sub foo ($bar) {
>         $bar.say;
>     }
>
>     multi sub foo ($bar) {
>         $bar.say;
>     }
>
> Aren't subs a subset of multi subs, meaning that every sub can be
> expressed as a multi sub? Is there anything a sub has as an advantage
> over a multi sub? So might not just every sub be a multi sub? If the
> only difference is that you _must not_ declare a sub twice with
> different argument lists, I think that this one is relatively
> unimportant, and letting every sub be a multi sub seems more
> consistent to me, in opposite to this arbitrary-looking distinction.
>
> Maybe I just phenomenally misunderstood multi subs, but unless I did,
> I can't see why we want to have subs when we can have multi subs that
> can do the same and even more.
I understand your point and I confess I'm not sure. At least there seems to be a visibility difference. In S12 I found these two sentences:

1. [sub (or method) without a multi] [...] Only one such sub (or method) can inhabit a given namespace, and it hides any outer subs (or less-derived methods) of the same short name.

2. [subs or methods declared multi] [...] It does not hide any routines with the same short name but a different long name.

In other words, multis with the same short name can come from several different namespaces provided their long names differ and their short names aren't hidden by a non-multi declaration in some intermediate scope.

GreetinX
Steffen

--
Steffen Schwigon <[EMAIL PROTECTED]>
Dresden Perl Mongers <>
Source: https://www.mail-archive.com/perl6-users@perl.org/msg00315.html
Two old designers from VS 2005 have been removed from the Orcas product. There are several reasons why they were removed:

1, there will be a brand new schema designer built into VS. However, because of schedule issues, it is not in Orcas, but will be released off-cycle;

2, the two old designers depend on some old components of the HTML designer. In Orcas, the HTML designer is replaced, and those components are gone or replaced. It became too costly to maintain them, especially with a new designer coming out soon;

3, the old schema designer also depends on a designer surface implemented in an old COM package, which is about to be removed.

Currently, there is no plan to build a new data grid editor, which would view/edit an XML file in a data grid. The old designer only targeted data set files, and was very limited. It will be interesting to see whether people need it before spending time on it.
The Service Reference is a new feature added to VS Orcas, so we don't have to use svcutil when using the VS IDE. Here is a side-by-side comparison between svcutil and the service reference in Orcas:
| Svcutil command line | VS service reference (.svcmap) | VS UI |
| --- | --- | --- |
| /out | (always derived from the name of the .svcmap) | |
| /config | (always the config file of the project) | |
| /mergeConfig | Always true | |
| /noConfig | Not supported | |
| /dataContractOnly | | |
| /language | Always based on the project | |
| /namespace | <NamespaceMappings>, but *-> ReferenceName is always added | |
| /messageContract | <GenerateMessageContracts> | Yes |
| /enableDataBinding | <EnableDataBinding> | |
| /serializable | <GenerateSerializableTypes> | |
| /async | <GenerateAsynchronousMethods> | |
| /internal | <GenerateInternalTypes> | |
| /reference | <ReferenceAllAssemblies>, <ReferencedAssemblies>, <ReferencedDataContractTypes>, <ServiceContractMappings>; can pick up existing reference assemblies of the project automatically | Limited |
| /collectionType | <CollectionMappings> | |
| /excludeType | <ExcludedTypes> | |
| /noStdLib | Through <ReferencedAssemblies> | |
| /serializer | <Serializer> | |
| /importXmlTypes | <ImportXmlTypes> | |
| /targetClientVersion | Based on the project | |
| /t:metadata | | |
| /validation | | |
| /t:xmlSerializer | | |
| Multiple url/file | <MetadataSources> | |
| /svcutilConfig | <ExtensionFile Name="Reference.config" /> | |
Essentially, svcutil is built for several tasks, like exporting metadata and validating a service, while the service reference is built only for generating the proxy and configuration. In the proxy/configuration generation scenario, however, the functions of the service reference and svcutil overlap. The service reference supports most options svcutil supports on its command line, but to access those options, editing the .svcmap file is necessary (only limited options are exposed through the UI). By saving the options in a file, it also becomes easier to repeat the process when the service is updated.

The advantage of using the service reference is clear when using the IDE. We don't lose most of the ability of the svcutil tool, but gain some convenience: options like the language and target platform don't have to be chosen again, but are picked up from the project system automatically. The resulting config is also automatically merged, and previously injected configuration is tracked and removed when the reference is removed or updated, so lots of duplicated items won't be injected. (That function is limited in this version if the binding in the configuration has been edited.) The IDE also provides a better experience when the service is secured.
This is a problem some customers have asked about. They were using the Begin/End invocation pattern to call web services asynchronously. Those async methods were generated in web proxies by Visual Studio 7.x and by the wsdl.exe tool, but are gone when they use Visual Studio 2005 or Orcas Beta 2. In both VS 2005 and Orcas, the generated proxy only contains event-driven programming pattern methods.

First, for anyone using the event-driven model, there is no reason to turn on the Begin/End pattern methods, because only one of the two should be used. For many users, the event-driven model is also easier to use.

For those who still need the Begin/End pattern methods, it is possible to turn on a project-level option to get them back.

To do that, we need to unload the project from VS and edit the project file directly in an editor. (Please make a backup in case we do something wrong.)
In the first section of PropertyGroup, you will see many properties like:

<ProjectGuid>{F4DC6946-F07E-4812-818A-xxxxxxxx}</ProjectGuid>
<OutputType>Exe</OutputType>
...

Don't change any of those, but do add an extra property, just like the OutputType, into that section:

<WebReference_EnableLegacyEventingModel>true</WebReference_EnableLegacyEventingModel>

Ignore the warning from the XML editor, save the file, and reload the project into VS.
After that, you have to regenerate all the proxy code (by updating the reference, or by running the custom tool on the .map file).

Note: this applies only to old web reference proxies. For old service references (WCF proxies), the Begin/End pattern async methods will be generated when "Generate async method" is turned on for that reference. (That is a Beta 2 feature.) The methods for the other pattern (event model) are not generated in Beta 2, but will be added in RTM when the project targets the new 3.5 framework.
The WCF tool in Visual Studio exposes several layers of extensibility APIs. These were built into the product so that a third party could extend the feature, making it easier to use in a special environment, or making it work in a third-party environment.
Those extensibility points include:
1. a set of VSIP APIs that allow a third-party feature to list and manipulate WCF references inside a project;
2. hooks that allow a third-party project system to enable/disable the feature and control how the information is stored in the project system;
3. a way to extend the current "Add Service Reference" dialog to help find services;
4. support for the standard WCF extensibility model, including WSDL/Policy importer extensions, and the ability to persist extra settings of the extensions in the reference.
Those new samples could be found at
Passing a DataTable (without embedding it in a DataSet) across web services is supported in the .NET 2.0 Framework. However, we found a bug in this area which might affect using this feature with strongly typed DataTables. For example, when a web service function wants to return a DataTable to the client, an independent DataTable (one that is not part of a DataSet) will work, but a DataTable inside a DataSet won't. What happens is that the client side can't deserialize the DataTable from the wire. The root cause is that a DataTable inside a DataSet inherits a schema namespace from the DataSet, but an independent DataTable has an empty namespace (unless we change it). Data cannot be exchanged between two DataTables with different namespaces.
The problem is that the instance of the DataTable on the client side is created by the framework directly. It is impossible to change its namespace without changing the generated strongly typed DataSet code. To work around this issue, the server side needs to copy the data to an independent DataTable before passing it to the client.
This affects both .NET 2.0 web services and .NET 3.0 Windows Communication Foundation services.
In VS 9, the "Add Web Reference" menu command was "replaced" by the "Add Service Reference" command in all client projects (VB/C#) targeting the 3.x platform. Although a "service reference" works for most existing web servers, the proxies generated from a "Service Reference" and from the old "Web Reference" are very different. The service reference is actually a new WCF client, which gives us a lot of flexibility through its configuration system. However, it is fairly different from the old Web Reference API, and old code consuming an old reference certainly will not work without extra work. There is also a tricky part when we use a WCF client to consume an old web reference: WCF expects you to define types in the new DataContract format, while most types defined in a web reference are XmlSerializable types. The proxy generator might not work well with the default options, and sometimes we have to adjust some options to make it work. (Often the "Auto" serializer doesn't work, and we might have to use "XmlSerializer". Unfortunately, that option is not exposed through a UI dialog, so we have to edit the .svcmap file directly.)
We might want to continue using old-style web references for those web services. Unfortunately, the "Add Web Reference" menu is disabled by default in these projects, unless the project already contains such a reference. In Beta 1, there is no way to find this command; we have to downgrade the project to target the 2.0 platform, add one web reference, and upgrade the project again, which is a very ugly workaround just to add the first web reference. In CTPs after Beta 1, the command can be accessed through the dialog that pops up when we click the "Advanced" button on the "Add Service Reference" dialog. It is not an obvious place to find it, but the menu should be enabled after that, so it only needs to be done once.
After we add a WCF service reference to a Visual Studio project, a .svcmap file is added to the project; it contains most of the information about the service reference. Actually, it is the only essential file of the reference. In most cases, we can remove all the other files in the service reference and still pull down all the files from the services by updating the service reference. By default the .svcmap file is hidden in the Solution Explorer unless we turn on the "Show All Files" option.
Do I need to understand the .svcmap file? No, you don't have to, if you just want to play with the new WCF feature, just like you don't have to know much about the command-line options of the svcutil tool before you start to use it. Actually, the file contains options for the proxy code generator, making it equivalent to those command-line options in many cases. However, when we need to deal with slightly more complex scenarios, like consuming an old web service, we sometimes don't get useful proxy code without tuning those options. That is when we need to look into this file.
Some common code generator options are exposed through a "service reference settings" dialog (not in Beta 1). But many of them are not exposed through any UI. To change those options, we have to open and edit the .svcmap file inside a service reference. Fortunately, the format of the file is fairly simple, and its schema ships with Visual Studio, so the XML editor provides IntelliSense when we edit the file.
Although most options in the .svcmap file work exactly the same way as the command-line options of the svcutil tool, there are some differences between them. For example, when you turn on type sharing with existing assemblies, svcutil will share service contract types as well as data contract types, but the code generator in VS will only share data contract types. Service contract types will not be shared automatically; a white list must be added to the .svcmap file to make service contract type sharing work.
Other than the code generator options, there are some interesting things we can do when we edit the file directly. One example: there is a MetadataSource section in the file, which records where to download the metadata files from. By default there is always one MetadataSource when the service reference is created through the UI, but it doesn't have to be this way. For example, if the metadata is not exposed by the service, but you get it through some other channel, like an email, we can copy all the metadata files into the service reference folder, reference them directly in the .svcmap file, and delete all MetadataSources. The result is a service reference that generates valid code but will never try to download files from a server. Another possibility is to add two sources, so the metadata is pulled into one reference and code is generated for both together, similar to providing two URLs to the svcutil tool.
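As a sketch of the idea, a .svcmap file with only local metadata files and no download source could look something like the following. (The element and attribute names here are from memory and may not match the shipped schema exactly; the XSD installed with Visual Studio is the authority.)

```xml
<ReferenceGroup xmlns="urn:schemas-microsoft-com:xml-wcfservicemap">
  <ClientOptions>
    <GenerateAsynchronousMethods>false</GenerateAsynchronousMethods>
  </ClientOptions>
  <!-- empty: updating the reference will not contact any server -->
  <MetadataSources />
  <Metadata>
    <MetadataFile FileName="service.wsdl" MetadataType="Wsdl" />
    <MetadataFile FileName="service.xsd" MetadataType="Schema" />
  </Metadata>
</ReferenceGroup>
```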
We can also apply something similar to '/toolconfig' in the svcutil command line, which allows us to add/remove WSDL/Policy importer extensions. The easiest way is to add that tool configuration to the web.config/app.config of the project. However, that solution has two problems. First, we usually don't want to ship those design-time options in the product; the web.config/app.config is part of the product, so we would mix design-time and runtime settings in the same place. Second, if we have two service references and want different importer extensions for each of them, there is no way to do so. Fortunately, it is possible to give every service reference a separate tool config file: add a Reference.config file in the service reference folder, and reference that file as an extension file in the .svcmap file, just like the .svcinfo file is referenced.
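A hypothetical Reference.config carrying a per-reference importer extension could look like the following (the extension type name is made up; the section layout follows the standard svcutil tool-config shape):

```xml
<configuration>
  <system.serviceModel>
    <client>
      <metadata>
        <wsdlImporters>
          <!-- hypothetical third-party WSDL importer extension -->
          <extension type="Contoso.Tools.CustomWsdlImporter, Contoso.Tools" />
        </wsdlImporters>
      </metadata>
    </client>
  </system.serviceModel>
</configuration>
```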
Type sharing is very useful when we want to pass the same data between two services. Without type sharing, we get separate types in the proxy for every service we consume. That means a lot of code to convert data from one type to another before and after calling a service, which can be painful and introduces unnecessary coupling in the code.
For service references, there are two kinds of type sharing:
1. reusing types pre-defined in a class library in the proxy code (do not generate new types in the proxy);
2. sharing types between services (generate a new type only once across different services).
The SDK tool svcutil supports the first kind of type sharing. The WCF feature in Visual Studio provides the same level of support for this kind of type sharing, but it becomes easier to use, because the generator in VS can automatically pick up types in the dependent assemblies of the project. We do need to remember to add a reference to the assembly containing the pre-defined types, but we don't have to pass it as a parameter as we do with svcutil. The feature in VS also gives the user more refined control, so it is possible to share only types in a small set of assemblies, or to disable sharing.
The type sharing is actually done by the DataContract importer extension, so type sharing in VS inherits the same limitations as the feature in svcutil.exe. Basically, if you only use new DataContract types in service contracts, it works well, but it doesn't support sharing some XML-serializable types. Sharing a DataSet is supported, but somewhat limited: the DataSet type must be defined in exactly the same CLR namespace on both server and client before type sharing can work. That should not be a problem if we want to share the same DataSet between client and server, but it can be a problem if we only want to share the type between clients. Of course, since data contract types are generated by extensions, it is possible to improve this by adding new or customized extensions.
One limitation which cannot be resolved by adding an extension is that the shared type must be defined in a library; it cannot be defined in the same project consuming the service. Considering that the type must be used by the proxy generator, and the proxy is part of the project, reusing a type in the same project would cause recursive logic in the way the generator works today. We may have to live with that limitation.
The second style of type sharing is supported by wsdl.exe. svcutil.exe also allows two metadata source URLs on its command line, although this works only if there is no conflict in the metadata from the two sources. That is fine if the WSDL files are hand-crafted, but it doesn't work well when the metadata is generated automatically, because we often get duplicated schema files from different sources. The WCF consumption feature in VS works in a very similar way to svcutil, although it tries to remove duplicated schemas.
(BTW, this type sharing is not supported in Beta 1, only in CTPs after that.)
The ContextSwitchDeadlock MDA produces a very annoying debugger message. The message is reported by a background thread, which wakes up once in a while; if it finds a cross-context call that hasn't completed in 60 seconds, it raises the error. The problem is that the error message contains only a little context and doesn't tell you the exact location where it happens, or which thread makes the call.
Based on the documentation, the problem happens when a thread (quite often the main UI thread) is working on something and doesn't pump messages for 60 seconds. In that period, if a background thread tries to make a cross-apartment call to that thread, it can be blocked for more than 60 seconds, and the MDA fires. If the application uses COM (maybe directly or indirectly, for example through a control built that way), this can be a problem even if the application doesn't use multiple threads directly and we don't feel that we make any such call. The reason is that most managed applications depend on the GC to release COM marshalling objects, and the GC runs on a background thread at arbitrary times. So when the GC releases a COM object that is no longer referenced, it makes a call into the STA thread that owns it. If that STA thread ever does something for longer than 60 seconds, it becomes a problem.
It can be worse if we create a background STA thread. It is almost impossible to stop that thread cleanly: if the GC hasn't cleaned up all the COM objects created on that STA thread by the time it stops, we end up in a ContextSwitchDeadlock, because there is no way to call into a dead thread. Of course, we can't force the GC to clean up those objects before we stop the thread. Unless you can control the lifetime of those objects with ReleaseComObject, it is better to create an MTA background thread instead.
ReleaseComObject itself can be a nightmare. Unless you own both sides, the code can break easily if a COM object written in native code is later rewritten in managed code. The internal count is increased at every native-managed boundary, so we need to know exactly where the boundary is, which is not really detectable in code. 'IsComObject' does not provide much value, because an object can be implemented half in managed and half in native code. For those objects, IsComObject returns true, although the call doesn't go through a native/managed boundary. Even when such a half-managed, half-native object is passed through such a boundary, its internal count never increases. The first time you call ReleaseComObject, the native half of the object is released, and the whole thing is broken.
The new service reference is persisted in a similar way to how the old web reference is persisted in a Visual Studio project. All files, including the metadata files downloaded from the server, are persisted in a single folder, which defines the CLR namespace the proxy code lives in.
The files in a service reference are not shown in the Solution Explorer by default, but it is easy to see them by turning on "Show All Files".
Those metadata files include all WSDL and XSD files. DISCO files are also kept if the metadata is downloaded from a disco port, although the disco files are never used in configuration/proxy generation.
All those metadata files and code generator options are persisted in a .svcmap file. The .svcmap file is the basic unit for the proxy/configuration generator. We may add multiple metadata sources to one .svcmap file, so the code generator will handle them at the same time. If the metadata is properly prepared, the code generator will generate shared data contracts for multiple service contracts. This is one function that you can only get by editing the .svcmap file directly, because it is not exposed through the simplified UI.
Besides the .svcmap file, there is a '.svcinfo' file which tracks the configuration added to the app.config/web.config, so that portion can be removed when the service reference is removed from the project. When we update the reference after the service changes, we try to update the app.config file, rather than just injecting another section of configuration into the file, as svcutil does. The format of this file is not designed to be read or edited by humans; it is better not to mess with it.
Currently, however, the function that maintains the app.config is fairly weak. It is certainly better than svcutil, with which you have to maintain the file yourself. The function works fine while you live with the automatically generated configuration, but it doesn't work well after you change it. In most cases, it will not take out configuration changed by you, but will inject the new section side by side with it. That makes the function less useful in advanced scenarios.
There has been talk about what to do when the injected configuration has been updated by the user, but the extensibility of configuration in WCF makes it more complex than what can be done in a short product cycle.
The .svcinfo file is persisted as one extension file of the .svcmap file. The .svcmap file allows other customized extension files, so a customized WSDL/policy importer extension could pick up options from one of these extension files to control the code generator. BTW, just like with svcutil, those importer extensions are supported in VS. Without a toolconfig parameter, you need to add those extensions in the machine.config or the app.config of the project. The drawback of putting this design-time configuration into app.config is that it leaks into the runtime. In Beta 2, it is possible to put the design-time configuration in a separate config file.
So an advanced user could write an extension that changes the generated configuration by changing certain values or removing unneeded ports. It can be done in a general way, with the real behavior controlled by an extension file, which could specify, for example, which port needs to be removed and which binding parameter needs to be changed, so you don't have to manage the app.config file by hand.
Visual Studio Orcas Beta 1 supports generating and using WCF clients in ASP.NET web site projects. From the platform point of view, ASP.NET wants to enable a user to build a web site without any Visual Studio tools. The feature to generate a WCF client has been built in the same spirit. With Visual Studio, you can easily add a WCF client, but without it you can create your own by creating a 'svcmap' file. The 'svcmap' file is processed by a build provider at runtime to generate the WCF client code, which is compiled on the fly so that web site code can consume those classes.
However, that comes at a price. Because the build provider wasn't built into the 3.0 framework (it was added in the 3.5 framework), it is not possible to consume WCF this way in a project targeting the 3.0 framework. That is why we can create a WCF server in a 3.0 project but can't consume it in the same project. (Of course, if you use the svcutil tool, it is still valid to generate a WCF client and consume it in a 3.0 web site project, but you have to do everything by hand.)
While a build provider might work well for simple code generation, it is actually not a good platform for generating complex proxy code. The problem is that it can either succeed without any message or fail with a single message string, so there is no way to output warning messages if anything might be wrong. That is why it can be difficult to figure out a problem with a WCF client in a web site project. Using a web application project might be a better choice; alternatively, you can add the same WCF client to a client project to see the warning messages.
The svcmap file is also used in client projects, so it is actually easy to copy one from a client project to a web site project. Metadata files pulled down from services are saved in individual XSD/WSDL files and referenced by the svcmap file. It is somewhat similar to the discomap file in a web reference in VS 8. The svcmap file also contains code generation options; many options of the svcutil tool are supported. However, those options are not supported in the Beta 1 product. Unlike the old web reference, the default proxy works fine in most cases, but sometimes it is necessary to adjust those WCF client options to get the right proxy code. That is especially true when you consume a service built with the old web service platform.
When you create a project targeting the .NET Framework 3.x in Orcas Beta 1, the "Add Web Reference..." menu is replaced by the "Add Service Reference..." menu. "Add Service Reference..." creates a WCF (Windows Communication Foundation) client for a web service. The proxy generated by WCF is very different from the old web reference proxy, although you can consume an old web reference through the new WCF client.
The WCF client is not supported when your project targets the .NET Framework 2.0; you are still able to add old web references to those projects. When you upgrade a 2.0 project to 3.x, if your project contains any old web reference, you will still be able to add extra old web references. In that case, both "Add Web Reference..." and "Add Service Reference..." are shown.
So if you do need to add an old web reference to a project targeting the 3.x framework, the workaround is to change the target framework to 2.0, add one web reference, and change the target framework back.
BTW, in Orcas Beta 2, you will be able to add web references to 3.x projects directly.
Normally, it is not possible to use XmlSerializer with internal classes. However, it is possible to work around this when the internal classes are written in C#.
To do that, we create a special build configuration and '#if' blocks in the code to build a version of the assembly where those internal classes are public. (The resulting assembly is only used in the workaround and can be discarded afterwards.) Once we have this assembly, we use sgen.exe from the framework to generate a serialization assembly for it (the one with the public classes), passing the command-line option '/keep', which leaves the generated C# source code in the directory. The tool picks a random name for this code file, so it is better to do this in an empty directory. Once we have this C# code file, we include it in the original project. Because those generated classes are now built into the assembly, they can access the internal classes correctly. It is necessary to edit the generated code a little to remove the assembly attribute, change the namespace of the classes to whatever we want, and maybe make the generated classes internal as well.
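The '#if' trick can be sketched like this, assuming a custom build configuration that defines a symbol named SGEN (the symbol and class names here are arbitrary examples):

```csharp
// Public only in the special build used to feed sgen.exe;
// internal in the normal product build.
#if SGEN
public
#else
internal
#endif
class Order
{
    public int Id;
    public string Customer;
}
```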
We then change the code where the XmlSerializer is created to use the generated serializer. It is actually easier to use, and with pre-generated code it is also faster. However, it is a problem to keep the generated code up to date: we have to regenerate it whenever a change in those classes impacts the serializer. Some scripting could make the whole process automatic.
Requirements
You’ll need at least Python 2.5 to run Snakelets and Yaki.
Please consider installing a sendfile(2) system call extension module, such as the one available on PyPI, which will boost performance significantly when handling static files. Snakelets will automatically use it if it is installed.
Out-of-the-box startup
It is possible to start the Snakelets server out of the box without changing anything. If you don't enable the virtual host feature, Snakelets will scan the webapps directory and will load all web applications it finds on the current host.
Yaki runs as the main app (named ROOT), which is used as the web application for the root context '/'.
You can just start the app.py script without configuring anything, and it will launch Snakelets and Yaki on port 9080, making it available to you alone on the local machine.
A number of web apps that originally shipped with Snakelets are available under webapps.disabled for your perusal.
Apache
…is not needed: Snakelets contains its own multithreaded web server. But if you still want to use Apache, lighttpd or nginx, you can use mod_proxy or its equivalent to forward requests to a running Snakelets server behind it (which is the recommended configuration, since you may need to run additional services). To make sure Snakelets understands this when it generates URLs internally, you should edit app.py to read:
bindname = 'localhost'
serverURLprefix = '/snake/'
externalPort = 80
If you use the virtual host mapping, the serverURLprefix should be set to an empty string.
Finally, if you are using mod_cache, you should tell it not to cache Snakelets URLs, which can be done as follows:
<IfModule mod_cache.c>
    CacheDisable /snake
</IfModule>
lighttpd
The following is a simplified example of a lighttpd configuration for Yaki:
## modules that are generally useful
server.modules = (
    ...
    "mod_proxy",
    "mod_rewrite",
    "mod_redirect",
    ...
)

## a domain.com vhost with reverse proxy to Yaki
$HTTP["host"] =~ "^(((the|www).)?domain.com)$" {
    ## the actual proxy entry
    $HTTP["url"] =~ "^/*" {
        proxy.server = ( "" => ( ( "host" => "127.0.0.1", "port" => 9080 ) ) )
    }
    ## a few sample redirects that usually come in handy
    url.redirect = (
        "^/?$" => "",
        "^/space$" => ""
    )
}
nginx
server {
    listen 80 default;                    ## listen for ipv4
    listen [::]:80 default ipv6only=on;   ## listen for ipv6
    server_name _;                        ## our default
    server_name_in_redirect off;

    # usual defaults
    index index.html;
    sendfile on;
    tcp_nodelay on;
    keepalive_timeout 75 20;

    # send these to Yaki
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_connect_timeout 90;
    proxy_send_timeout 90;
    proxy_read_timeout 90;
    proxy_buffers 32 4k;

    gzip on;
    gzip_min_length 1000;
    gzip_proxied expired no-cache no-store private auth;
    gzip_types text/plain application/xml text/html text/css text/javascript application/x-javascript application/javascript;
    gzip_disable "MSIE [1-6]\.";

    client_max_body_size 50m;
    client_body_buffer_size 128k;

    # the actual proxying
    location = / {
        proxy_pass;
    }

    # sample rewrites again
    rewrite ^/?$ /space break;
}
Varnish
This is a somewhat more elaborate example, since Varnish is a more sophisticated beast:
# we assume Yaki will be the default back-end
backend default {
    .host = "127.0.0.1";
    .port = "9080";
}

# redefine the receive subroutine to forward the original client IP
# and help somewhat with the default static paths
sub vcl_recv {
    if (req.http.x-forwarded-for) {
        set req.http.X-Forwarded-For = req.http.X-Forwarded-For ", " client.ip;
    } else {
        set req.http.X-Forwarded-For = client.ip;
    }
    # these are where there are more static assets
    if (req.request == "GET" && (
        req.url ~ "^/themes/" ||
        req.url ~ "^/media/" ||
        req.url ~ "^/static/" ||
        req.url ~ "^/attachment/" )) {
        unset req.http.cookie;
        unset req.http.Authorization;
    }
}
Virtual Hosts
To enable this feature you have to tell the server what web applications to load and what host names they must be bound to, in webapps/__init__.py (the webapp module init file), which contains the following configuration items:
ENABLED - set this to True to enable virtual hosts. Setting it to False disables this feature and reverts back to the out-of-the-box startup (see above).
defaultenabledwebapps - a list of webapps that will be loaded for the default config (if vhosts is disabled). Use ['*'] as a wildcard to enable all available webapps. A webapp can also be attached to the web root ('/') of the server on a virtual host; the web root hosts must be known virtual hosts specified in virtualhosts.
aliases - a mapping of vhost-alias name to real-vhost name (this avoids duplicate loading of webapps).
defaultvhost - the name of the default virtual host that will be used when the browser doesn't send a 'Host' header.
Every vhost can have a different list of webapps that are deployed on it, but a webapp can also be deployed on multiple vhosts at the same time. However, all deployed instances will be separate, unrelated instances of the webapp: if you deploy a webapp on multiple vhosts, it will be created for each vhost, and the init function will be invoked once for every copy.
Web applications that you configured in the virtual host config are installed automatically, whereas any other web applications in the webapps directory are ignored.
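As an illustration of the items above, a hypothetical webapps/__init__.py could look like this (the host and webapp names here are made up):

```python
# Hypothetical virtual host configuration for webapps/__init__.py,
# using the configuration items described above.
ENABLED = True                      # turn the virtual host feature on
defaultenabledwebapps = ['*']       # wildcard: load all webapps if vhosts are off
virtualhosts = {
    "www.example.com": ["ROOT", "docs"],   # vhost name -> webapps deployed on it
    "static.example.com": ["static"],
}
aliases = {"example.com": "www.example.com"}  # alias -> real vhost (avoids duplicate loading)
defaultvhost = "www.example.com"    # used when the browser sends no Host header
```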
Startup Parameters
When you run app.py, it instantiates the internal multi-threaded web server in snakeserver.server.main with the following parameters:
HTTPD_PORT - where it will listen locally (default=9090). Note: on most operating systems you have to be root (have admin rights) to be able to use ports below 1024.
externalPort - where the server is visible from the outside world (default = same as HTTPD_PORT). If you're running behind a forwarding proxy you may need to set this.
bindname - hostname the server will bind on; None (default) means only the current host.
serverURLprefix - URL prefix for all URLs that this server uses (for instance, /snakelets). Default is '' (empty). Slashes will be added/stripped automatically if required.
debugRequests - print incoming requests and headers (defaults to False).
precompileYPages - should Ypages be precompiled to find possible errors early? Default is True (boolean). You may want to set this to False to allow faster startup times, but then you won't find out if a Ypage can't compile until the page is actually requested.
writePageSource - should generated Ypage source code be written to a file in the tmp directory? Default is False (boolean). You may want to set this to True for easier Ypage debugging.
serverRootDir - root directory for the Snakelets server (i.e. the directory that contains the logging config, the webapps and userlibs directories, etc.). Default is None; if not specified, the current directory is used.
runAsUser - username that you want the server process to run as (used if you need to start the server as root).
runAsGroup - group name that you want the server process to run as (used if you need to start the server as root).
Monitoring and Restarting
It is also possible to use the monitor.py script. This script is designed to run on Linux, and will check whether the server is active. If it's not active (or hanging), the monitor script will restart the Snakelets server (as a daemon process in the background).
You can invoke the script from cron periodically to check and restart the server if necessary.
Logging
The app server uses the standard Python 2.3+ logging module to log messages. Log files appear in the var/log directory. Logging configuration is in the logging.cfg file. There are a few predefined loggers, some of which use log rotation:
Snakelets.logger is the logger that is used for server messages, written to the file server.log (rotating).
Snakelets.logger.accesslog is used for logging the web server requests (Apache format) to access.log. The log level is set to NOTSET. If you set it to CRITICAL, no access logging is performed, which improves performance and can be helpful if you're running Snakelets behind a reverse proxy and/or have no need for HTTP logs.
Snakelets.logger.stdout and Snakelets.logger.stderr are the logger adapters for the standard output and standard error messages. These messages are printed on the console but are also written to server_console.log.
You can use the logging facility in your own code by doing:
import logging
log = logging.getLogger("Snakelets.logger")
log.debug("my debug message")
User libraries / modules
If you want to use a library or module from within several webapps, you don't have to include it in every webapp directory. There is a special userlibs folder in which you can place modules and packages that you want to use.
Also, Yaki ships with a modified Snakelets version that will prepend userlibs to the module search path, thereby making it easy to overlay updated (or more stable) versions of system-level libraries.
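The overlay mechanism described above amounts to something like the following (the directory layout is an assumption for illustration):

```python
import os
import sys

# Prepend a "userlibs" directory to the module search path, so that any
# module or package placed there shadows the system-wide version.
userlibs = os.path.abspath("userlibs")
if userlibs not in sys.path:
    sys.path.insert(0, userlibs)
```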
Which Programming Language?
In this article, I survey a clutch of popular programming languages and provide some personal opinions as to when it is appropriate to use each of them. I'll also talk about some of the development tools available for each language. Hopefully, by the end of this article, Paul will be in a position to make an informed decision. (A small annotated bibliography is also included).
Let me begin by listing the five contenders: C, C++, Java, Python and Perl. There are, of course, lots of other programming languages; however, references to these five appear more than most, especially in the Linux world.
Before looking at each language in turn, there is one thing we can say about them all: they are general-purpose and can be used for most any programming task. A general distinction is that C and C++ are compiled languages, much like Fortran, whereas Python and Perl are interpretive, like most versions of BASIC. Java is somewhere in the middle; source code is compiled into an intermediate format which is then interpreted.
C has a heritage that dates back to the first versions of UNIX--it was used to write most of the OS. The Linux kernel, together with most other parts of the OS, is also written (mainly) in C. This is not an accident, as C excels as a systems-level programming tool. C gives you complete control over everything you do. Despite the fact that C is a small programming language, the devil is in the details, and all that control comes at a price. You, the programmer, must handle allocation and deallocation of memory. There is also the direct manipulation of memory via pointers and pointer arithmetic with which to contend. This basically rules out C for casual programming. However, if you want to play around with your kernel code or write a device-driver, you had better invest a serious amount of time in mastering C.
The C that comes with Linux is the GNU C Compiler, gcc. Don't be fooled by the commercial vendors (and their flashy advertisements) into thinking gcc is anything but capable. The compiler technology is excellent and very mature. Paid support for gcc is available from several commercial organizations, most notably Cygnus (now part of Red Hat), the maintainers of gcc. Free support is also available; check the GNU Service Directory at for more details on both types of support.
So, when would you decide to buy a commercial C compiler, especially when gcc comes bundled with Linux? Well, with all the support available for gcc, paying for your compiler can be hard to justify. If you are moving from the Wintel and Macintosh platforms, you may be repelled by the thought of command-line switches and makefiles, not to mention the editing modes of vi. If it is an Integrated Development Environment (IDE) you're after, Linux has its fair share. Typically, these integrated tools are hosted within a GUI, and the IDE provides centralized and consistent access to the tools you'll use to edit, compile, link, debug and run your code. Example IDEs include KDevelop and Source Navigator. GUI builders also exist, and the most well-known in the open-source world is Glade (see the web Resources sidebar). If you really do like the IDE tools available on your current platform, you can also find them on Linux. One such product is CodeWarrior from MetroWerks, which is also available for the Wintel and Macintosh platforms. Please have your credit card ready.
C has another interesting property. It forms the basis of all the other languages discussed in this article. C++ is designed to be a very close superset of C. Python and Perl are written in C. And Java is a derivative that is, to the best of my knowledge, written mostly in C. To repeat myself, C excels as a systems-level programming tool.
C++ is all that C is and more. The "more" refers to a large chunk of object-oriented (OO) technology included, in addition to better type-safety, namespace support, templates and exception handling. If you are planning a large systems-level project or application, C++ can be a good choice. Its use can lead to code that is more modular and easier to maintain than equivalent C code. Bear in mind that all I've said about C also applies to C++, and all that new technology can be difficult to master. It can also be totally awesome if used properly. Of course, you don't need to use the new features if you don't want to, as C++ does not force them on the programmer. This is especially true of the OO technology, which can be messy if used inappropriately.
Surprisingly, the gcc compiler used with plain C can also process and compile code written in C++, which can make the transition from C to C++ relatively painless from a tools perspective. To ask gcc to compile C++ code, simply invoke the compiler not as gcc but as g++. (You can also rely on gcc's built-in behavior, which will compile your code as C++ if the file extension is .C, .cc, .cpp, .c++, .cp or .cxx).
Something worth considering when looking at C and C++ is the vast collection of libraries available for these languages. If you plan to write some GUI code, you will find plenty of APIs and libraries which are usable within GNOME, KDE and plain X. Much-used libraries include Qt, ACE and Gtk--. If you're really into C++, take the time to check out the Standard C++ Library which is now part of the ISO C++ standard. This library includes the technology known as STL, the Standard Template Library. I was first exposed to STL during an advanced OO course about four years ago and thought it was the coolest piece of C++ I'd ever come across. It's now part of C++ proper, which is to be applauded.
Java, among other things, is billed as similar to, but easier than, C++. This is great news for C++ programmers, but does not wash with the rest of us--most programming languages are easier than C++! In fact, some suggest Java is closer to C, with a number of major exceptions: Java is totally object-oriented (i.e., you must program the OO way), dispenses with the pointers of C and C++, and provides automatic memory management (which is a huge plus in many programmers' eyes).
Of all the programming languages discussed in this article, Java wins the prize for generating the greatest amount of hype and copy. To believe its creator and custodian, Sun Microsystems, Java is all the programming language you'll ever need. Don't allow yourself to be fooled so easily.
Every Java implementation provides a Java Virtual Machine (JVM) that sits on top of the host operating system. The Java code that you write is "compiled" to run on this JVM, not on the host OS, and JVMs exist on all the major platforms including Linux. As the JVM on all these disparate systems is supposed to be identical, operations specific to any one platform are not allowed (at least, that's the theory). The Java Native Interface allows the platform-specific programmer to bypass this restriction.
Ask most programmers who aren't using Java what they believe its biggest drawback is, and the vast majority will comment on runtime performance. In short, it can be poor. Every Java supplier is working on solutions to this problem, and a large number of Just-In-Time (JIT) compilers are available. The JIT technology goes a long way towards improving Java's run-time performance; however, when compared to the run-time performance of equivalent C/C++ code, Java is still (and will more than likely always be) in second place. This has more to do with the design of the language than with the implementations. If you like Java and you wish it could be compiled, don't fret--the gcc compiler will also compile (to machine code) your Java code. Of course, you lose all the Java portability benefits when you do this. You also become part of the experiment, as Java support in gcc is a work-in-progress.
Of course, once you've compiled your Java code (with a Java compiler, not gcc), it should run on any JVM, regardless of platform. So, in theory, the Java-based program you develop on Linux can be shipped in "compiled form" to users in the Wintel and Macintosh worlds. Again, in theory, the program should run identically on each of the target platforms. Of course, you will need to ensure that each of the JVMs you are shipping to supports the version of Java you are writing to. Prudent programmers (as well as those who like to keep their jobs) are well advised to test their developed applications on each of the JVMs they target. It's a case of write your code once, compile it once and test it everywhere.
Java is very big with an intimidatingly large (and growing) standard library, which can make it daunting to learn and master. However, Java is interesting because it is highly Internet-aware. If your plan is to write applets for web pages, you will be hard pressed to get better support for your work than that provided by Java.
The standard Java library is full of reusable goodies for the programmer, and includes everything from high-level data structures to GUI toolkits. The most useful of these is Swing technology, which, with the most recent versions of Java, takes on the appearance of a programmer-configured OS/GUI. This means it's possible to have a Windows look'n'feel on top of X Windows or Mac.
Java tools and JVMs are freely available and should come with most major Linux distributions. Sun provides a full set of command-line tools for free download, and a large number of traditional tools vendors are more than willing to sell IDEs based on Java to all.
If, having just read through the last few paragraphs, you get the feeling that I'm not too impressed with Java, then you'd be right. For some, Java is seen as a good step up from C and C++ for general purpose applications development (which accounts for a large portion of its popularity), but it is not, in my opinion, a big enough step to warrant all the excitement. Which, rather nicely, brings me to the final two contenders: Python and Perl.
The great thing about Python and Perl is that what can take pages of code in C, C++ or Java can be accomplished in just a few lines of code with these programming languages. If you need a quick little program to do something useful, both Python and Perl let you produce something that works in no time at all. Like Java, Python and Perl look after memory allocation and deallocation for you. Unlike C, C++ and Java, both Python and Perl operate at a higher level and are often referred to as "scripting" languages.
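As a taste of that brevity, here is a small, hypothetical example (written in modern Python 3; the sample string and its word counts are invented for illustration) of a task that would take considerably more code in C:

```python
# Count word frequencies in a string -- a few lines in Python,
# but pages of hash-table bookkeeping in C.
from collections import Counter

text = "the quick brown fox jumps over the lazy dog the fox"
counts = Counter(text.split())

print(counts.most_common(2))  # [('the', 3), ('fox', 2)]
```

The standard library's Counter handles the hashing, memory management and sorting that a C version would have to implement by hand.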
Python is a cool little language. It is well designed, compact, easy to learn and fun to program in. Python strongly encourages the programmer to program in an OO-way, but does not require it. In my opinion, it is one of the best languages to use when learning OO-programming. The implementation of OO in Python is clean and simple, while being incredibly powerful. The basic Python execution environment is also the most interactive of the five discussed here, which can be very useful (especially when debugging code).
Python gets a lot of stick for giving syntactical meaning to whitespace. For example, if a block of code is associated with an if statement, then the code needs to be indented beneath the if statement in order for the association relationship to work in Python. Some programmers hate the very idea of this. They should just get over it, as the Python method of indentation effectively does away with the need for braces and semi-colons within code, which (if you are an old C-dog like me), takes a little getting used to. But, get used to it you will, and after using Python for a while, you'll hardly ever notice they are missing. Of course, the nice thing about all this indentation is that Python code tends to look quite neat and tidy, regardless of who actually writes it.
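A short sketch (again in modern Python 3) shows how indentation alone associates a block with its if statement, with no braces or trailing semicolons:

```python
def classify(n):
    # The indented lines below belong to the if/else branches;
    # indentation, not braces, defines the association.
    if n % 2 == 0:
        label = "even"
    else:
        label = "odd"
    return label

print(classify(10))  # even
print(classify(7))   # odd
```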
Like Java, Python has a large standard library of reusable code. Python ships with an interactive interpreter that (typically) runs from the command line. An experimental IDE, called IDLE, is shipped with the current distribution and provides a GUI-hosted development environment. Although not fully functional, IDLE offers a glimpse of what's to come in the future. Another interesting project associated with Python is JPython. This is an implementation of Python written entirely in Java. So, if your target system has a JVM, you can use JPython to program it. JPython is being renamed Jython, as the JPython people have broken away from CNRI (The Corporation for National Research Initiatives), and CNRI owns the JPython trademark. The Jython people say the new name will grow on you. We shall see.
Something that concerns me about Python is that its creators appear to be positioning it as the modern equivalent of Pascal. This strategy may well cause more harm than good. Pascal was best known as the teaching language of choice, despite its use in some high-profile technologies: when Apple Computer released The Macintosh, its applications programming language was Pascal; and Inprise Corporation use an object-oriented Pascal as the basis of their Delphi RAD tool. Unfortunately for Pascal, the "teaching language" label is a stigma that prevented many from taking it seriously. The same fate may well await Python. Let's hope that doesn't happen.
Many Perl programmers will tell you that if there was no Perl, they would all be programming in Python. The problem for Python is there is a Perl, and it is getting better all the time.
Perl is a huge beast of a programming language. It is perhaps not an accident that the language's logo is a camel. Part of the reason for Perl's size is that in Perl "there's more than one way to do it!" (as the motto says). This is seen as a huge boon to some, confusing to others. Some like Perl's ability to do things in numerous ways, whereas others get hung up on choosing the "one true way" to do something. In Perl culture, there is no one true way, and the Perl programmer is encouraged to choose the method that works best for them. This freedom to work the way you want to work is one of the reasons for Perl's success.
Another reason is CPAN, the Comprehensive Perl Archive Network. Available on a number of mirrored Internet sites, CPAN provides access to a wealth of reusable modules for Perl, including everything from talking to databases and processing XML to working with GUIs. In fact, nearly every conceivable use of a programming technology has been "CPANed". If you are considering Perl, take a few minutes to look at the list of add-on modules on CPAN.
Perl can be used with the Tk technology (of Tcl fame) for programming GUIs. (Python also uses Tk for GUI programming, calling the technology Tkinter). Support for other GUI toolkits also exists, most notably a Perl API to Gtk (which is used by the GIMP). Check out CPAN for more details.
Do not be fooled into thinking Python and Perl can't be used to program large, sophisticated applications. In the case of Perl, a little more programmer discipline may be required than for Python. Granted, you are not going to write an OS or device-driver using either of these programming languages, but everything else is fair game. Like Java, Python and Perl fare poorly when it comes to raw execution speed. They are all interpreters, after all. And if performance is critical to you, you'll need to look beyond these languages and go for C or C++. As you might imagine, both Python and Perl excel as tools for the casual programmer.
Most Linux distributions already include these programming languages on their CD-ROMs. (Python is heavily used by Red Hat for system administration tasks, and Perl is a favorite of the folks at Mandrake and Debian). It costs you nothing (other than your time) to try them out and decide which one is best for you. Happy hacking!
http://www.linuxjournal.com/article/4402?quicktabs_1=2
Sign up for a Google Voice account; a Gmail account will be needed to do this. A guide on how to sign up for Google Voice is at the following link (I assume you can do this, it's super easy):
Enable 2-step verification on your Google account.
A link can be found on the following page:
Ensure your Raspberry Pi is setup with your distro of choice and has internet access.
A guide to setting up the Pi can be found at the following link:
Log into your Pi and cd to your home directory; in my case it's /home/pi
Navigate to the page, scroll to the bottom, and in the name field put “Raspberry Pi” (or whatever other name you want to identify the device). Click on Generate password. You will then be given a one-time use password.
First run the following command to prevent SSL errors:
# export GIT_SSL_NO_VERIFY=1
Next run the following command to clone the git repository:
# git clone
Run the following command (you can use nano if you wish):
# vi text.py
Place the following code in the file we just created:
#!/usr/bin/python
import pygvoicelib
number = raw_input('number:')
txtmsg = raw_input('message:')
# Use your Gmail address and the one-time application password generated earlier
client = pygvoicelib.GoogleVoice('name@gmail.com', 'your_app_password')
client.sms(number, txtmsg)
Run the following command:
# python text.py
It will prompt you for a phone number and then for the message you wish to send.
Now that you can send a SMS message from your Raspberry Pi you can setup shell scripts to check on system health and if something is not right the you can get a text message alerting you of the issue.
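A minimal sketch of such a health check (Python 3; the `disk_alert` helper and its 90% threshold are invented for illustration, and the actual send is stubbed out with a comment referring to the `client.sms(...)` call from the script above):

```python
import shutil

def disk_alert(path="/", threshold=0.90):
    """Return an alert string if the filesystem holding `path`
    is more than `threshold` full, otherwise None."""
    usage = shutil.disk_usage(path)
    fraction_used = (usage.total - usage.free) / usage.total
    if fraction_used > threshold:
        return "ALERT: %s is %.0f%% full" % (path, fraction_used * 100)
    return None

msg = disk_alert("/")
if msg:
    # In the real script this line would be: client.sms(number, msg)
    print(msg)
```

Run from cron, a script like this only texts you when something is actually wrong.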
I have this and other guides on my site
Nice write up. Looking forward to your future how-tos on Nagios on your Pi.
Nice and concise. I love reading everything the Pi can be used for, especially in simplifying IT. I never thought to use it for Nagios. I'll await your other How-To's and learn something very useful!
I like the Rasp at home and I would love to see your howto on your Pi!
Edit: There it is:
I wonder if the new Google Voice lockdowns will impact this? I hope not as I was thinking of trying to use this while searching for RPi ideas on SW.
So far it is still working for me. I use it regularly.
Thanks for the tutorial. With my prior Google Voice Setup and my new Raspberry Pi, this works great!!
I get an error
client.sms("3194153960","test")
File "/home/pi/pygvoicelib/pygvoicelib.py", line 341, in sms
'_rnr_se':self.rnr_se}, mode='raw')
File "/home/pi/pygvoicelib/pygvoicelib.py", line 224, in get_auth_url
raise ServerError(err_code, resp)
pygvoicelib.ServerError: (500, '\n\nInternal Server Error\n\n\n
My google voice was not setup correctly. I tried called the google voice number and it did not work. Changed to a new google voice number and now everything works!
Seems to be much simpler to set up email on the PI and send to the carrier's SMS gateway which is usually your phonenumber@carrier.net. A quick google search will find your carrier's email.
https://community.spiceworks.com/how_to/68063-send-sms-messages-from-raspberry-pi
Simple C# live SOAP client. No need to create or use proxy classes. It parses the WSDL to get a simplified definition, then uses WebClient to call the web service. It doesn't work with complex types (definitions in XSD) or with WCF (SOAPAction parsing). maybe it could he...
This project creates a wrapper around the Jira SOAP API 4.01. Most of this code is created directly from the WSDL source. Tags: api client jira soap soapclient webservice webservices ws
Provides a base class, ServiceNowSoapClient, that uses PHP's SoapClient class to interact with ServiceNow. Additional classes will include incident, user and choice-list interactions, etc., to provide a web server with the ability to interact with ServiceNow via PHP. Tags: servicenow soap
WSDoop wsdl wsdl2php
Salesforce provides various web services. This has been implemented in PHP: I have written a class with which we can call Salesforce web services. This is the first version; I will update it as I go further. Tags: SFapi SFwebservices SoapClient
Manages home-made tools that use JavaScript. soapclient: a customized version of the publicly released soapclient.js.
Ripcord is an attempt to create an RPC client and server around PHP's xmlrpc library which is as easy to use as possible. You can create xml-rpc, (simplified) soap 1.1 and simple rpc clients with one call and then call rpc methods as if they were local methods of the client. You can create a server with one call, passing it any number of objects whose methods it will publish and automatically document. It is not an attempt to create a full blown SOAP client or server, it has no support for any oXML-RPC
PhpWsdl: I started to develop my own WSDL generator for PHP because the ones I saw had too many disadvantages for my purposes. The main problem - and the main reason to make my own WSDL generator - was receiving NULL in parameters, which leads the PHP SoapServer to throw "Missing parameter" exceptions. For example, a C# client won't send the parameter if its value is NULL, but the PHP SoapServer needs the parameter tag with 'xsi:nil="true"' to call a method with the correct number of parameters. Tags: JSON REST RPC SOAP Webservice Webservices WSDL XML
DroidSoapclient facilitates the development of Android applications that communicate with web services. Used in conjunction with the tool kSOAP2, you as a developer will not have much more work. Enjoy and contribute. An example (the method "soma" in Servico.jws, published in Axis):
    public class Servico {
        public String soma(int valor1, int valor2) {
            int result = valor1 + valor2;
            return String.valueOf(result);
        }
    }
The client-side snippet begins: package br.com.android; import br.com.android.webservic… Tags: Android fast ksoap Tool Webservice
SOAP Client. Tags: ucarbon cfnetwork cocoa mac nsxml osx soap webservicescore wsdl xml xslt
http://www.findbestopensource.com/product/soapclient
Microsoft officially defined .NET Framework as follows:
The .NET Framework is the heart of Microsoft .NET. The .NET Framework is a software development platform of Microsoft .NET. Like any platform, it provides a runtime, defines functionality in some libraries, and supports a set of programming languages. The .NET Framework provides the necessary compile-time and run-time foundation to build and run .NET-based applications.
The .NET Framework consists of:
• Common Language Runtime
• Class Libraries
• Support for Multiple Programming Language
The common language runtime (CLR), also referred to as the runtime, is the core of Microsoft's .NET vision. It is said to be the execution engine of the .NET platform. The runtime handles runtime services, including language integration, security, and memory management. During development, the runtime provides features that are needed to simplify development.
Class libraries: Class libraries provide reusable code for most common tasks, including data access, XML Web service development, and Web and Windows Forms. The SDK provides the programming APIs in the form of a set of classes for building .NET applications. Collectively, they are referred to as the Base Class Library, or BCL.
Through the classes in the BCL, we can interact with the runtime, influencing the way that the runtime's services are provided to us. In addition to giving us an "in" to the runtime, the BCL classes provide a large number of useful utilities. These include things like a new database access library (ADO.NET), ASP.NET, and an XML parser with support for the latest XML specifications. In addition, developers can extend classes by creating their own libraries of classes. All applications (Web, Windows, and XML Web services) access the same .NET Framework class libraries, which are held in namespaces.
Support for multiple programming languages. Having a set of libraries and a runtime is good, but neither one of them is useful if you can't write programs to take advantage of them. In order to do that, you need to use some programming language with a compiler that is runtime-aware. Microsoft currently lists over twenty different languages with which it will be possible to write software that targets the CLR. Microsoft itself ships support for five languages with the SDK: C#, Visual Basic.NET, IL, C++, and JScript.NET. Of these, C# and Visual Basic.NET are likely to be the languages most often used to develop software for this new platform. Any language that conforms to the Common Language Specification (CLS) can run with the common language runtime. Relying on the common language runtime, code compiled with compilers of .NET-based languages can interoperate. All .NET-based languages also access the same libraries.
• It is a platform neutral framework.
• It is a layer between the operating system and the programming language.
• It supports many programming languages, including VB.NET, C# etc.
• .NET provides a common set of class libraries, which can be accessed from any .NET-based programming language. There is no separate set of classes and libraries for each language. If you know any one .NET language, you can write code in any .NET language.
• In future versions of Windows, .NET will be freely distributed as part of operating system and users will never have to install .NET separately.
Since Microsoft .NET is a multilanguage platform, any .NET-based language can be chosen to develop applications. The comfort of the application programmers and the specific requirements of the application may be the major factors in the selection of a language.
According to the language chosen, we pick its runtime-aware compiler for the .NET platform. Because .NET is a multilanguage execution environment, the runtime supports a wide variety of data types and language features.
When compiling your source code, the compiler translates it into an intermediate code represented in Microsoft intermediate language (MSIL). Before code can be run, MSIL code must be converted to CPU-specific code, usually by a just-in-time (JIT) compiler. When a compiler produces MSIL, it also produces metadata. Metadata includes the following information:
• Description of the types in your code, including the definition of each type,
• The signatures of each type's members,
• The members that your code references,
This composite file, which serves as a self-describing unit to the .NET Framework runtime, is called an assembly. The runtime locates and extracts the metadata from the file as needed during execution.
The MSIL code is compiled into native code by a component of the CLR named the JIT compiler.
The JIT compiler intelligently guesses and compiles the intermediate code on a piece-by-piece basis. This piece may be a method or a set of methods. Additionally, verification inspects code to determine whether the MSIL has been correctly generated, because incorrect MSIL can lead to a violation of the type-safety rules.
The runtime relies on the fact that the following statements are true for code that is verifiably type safe:
• A reference to a type is strictly compatible with the type being referenced.
• Only appropriately defined operations are invoked on an object.
• Identities are what they claim to be.
If type-safe code is required by security policy and the code does not pass verification, an exception is thrown when the code is run.
The common language runtime is responsible for providing low-level execution services, such as garbage collection, exception handling, security services, and runtime type-safety checking. Because of the common language runtime's role in managing execution, programs that target the .NET Framework are sometimes called "managed" code.
http://ecomputernotes.com/csharp/dotnet/dot-net-framework
MWI Device Description Working Group News
Supporting Web Content Adaptation through Device Knowledge
Post details: Meeting Summary - 25 June 2007
Thursday, June 28th 2007
06:58:46 pm, Categories:
Meeting Summaries
Meeting Summary - 25 June 2007
[Weekly conference call, 25 June 2007] F2F well subscribed. TP topics. Normative names? Vocab contributions. Two docs to be published soon. Problem with tools for mapping IDL to languages. Details follow:

[F2F] We expect 10+ members to attend the London F2F next July. An agenda is being determined this week.

[TP Topics] The W3C is holding a Technical Plenary meeting in Boston in November and groups are being asked to consider topics for discussion. The harmonization of markup languages (XHTML-MP/Basic) is one possibility. Another is the (probable) false impression that having one markup language for all browsers is enough to ensure the success of the mobile Web. Other W3C groups need to be educated on the findings of DIWG/UWA, BPWG and DDWG in this regard.

[Names] There was a proposed resolution put to the group that only the identifier names in the ontology would be normative. Such names could be used (as strings) in API calls to identify the properties being stored/requested. However, the ontology might contain alternatives that could be useful for convenience in various programming languages (e.g. a camel-case version, an underscore_separated version, etc.). It was noted that other vocabularies could operate concurrently with the DDR Core Vocabulary, and the group has yet to consider issues such as namespacing to avoid confusion between vocabularies. Would the name include a namespace? Would the namespace be a separate parameter? Is there an alternative to namespacing? It was decided that the group would keep the proposed resolution on the table and wait until a better understanding of the naming needs of the API was achieved. (Generally, however, there was much sympathy for having normative names as part of the specification.)

[Vocab] The public vocabulary contribution process is active, as is the discussion mailing list. Everyone is encouraged to contribute. Rhys and Kevin are already preparing new material, and others are expected to follow.

[Publications] There were few wiki updates this week, but two of the legacy documents (Landscape and Ecosystem) are now substantially complete. The group is aiming to agree to publish these as final versions at the F2F. Some extraction from wiki to XMLSpec will be necessary as part of the publication process. Everyone is encouraged to do one last proof-read of the texts.

[IDL Tools] The group has identified a problem regarding tools to map IDL to implementation languages. Unfortunately, the tools that W3C has used in the past are no longer supported, and cannot be made to work properly. Nacho proposed that an XML version of IDL be used, from which various language mappings could be obtained via XSLT. Such a tool would be useful to the group to see how the API would appear in different programming languages, without having to develop/maintain these by hand. It was also proposed that the group use Java as a sample target programming language, and possibly even design/prototype via this language, though keeping the IDL as the normative definition. The question of how to determine if an implementation was conformant to the final IDL was also discussed. As the mappings are not unique (i.e. alternative mappings from IDL to Language-X are possible), it was suggested that black-box (functional) behaviour might be the only means to determine conformance, rather than inspection of the mappings.

[New Actions] (ACTION-51) Kevin to send e-mail reminder to group to review requirements doc.

[Attendees] Jose Manuel Cantera Fonseca (Telefonica), Rafael Casero Escamilla (SATEC), Dimitar Denev (Fraunhofer FIT), Anders Ekstrand (Drutt), Rodrigo Garcia (CTIC), Rotan Hanrahan (MobileAware), Martin Jones (Volantis), Nacho Marin (CTIC), Eman Nkeze (Boeing), Jo Rabin (dotMobi), Mike Smith (W3C), Andrea Trasatti (dotMobi/WURFL)
https://www.w3.org/blog/DDWG/2007/06/28/meeting_summary_25_june_2007
things.
The script has two functions. The first is to create a linked Roto node. If you have a tracker node selected, and have installed it as described, press Alt-O, and a Roto node will be created with a layer linked to the selected Tracker node. Sometimes you might want to create a linked layer in an existing Roto, RotoPaint or SplineWarp node. No problem, just select as many target nodes as you want, along with your Tracker node, and press Alt-O to run the script. All selected target nodes will have a linked layer added to them.
The other function is to create a linked Transform node. Sometimes you have a Tracker or a Transform node, and you need to apply the same transformation in many places in your Nuke script. You could create many copies of your original Tracker or Transform node, or you could use this script to create a TransformLink node. Select as many parent Tracker or Transform nodes, and press Alt-L. Linked Transform nodes will be created for each.
The TransformLink node has some extra features compared to a regular Transform node. By default, when it is created, it will be linked using a relative transform. This means that on the identity frame specified, the transformation will be zeroed out. This identity frame is separate from the parent Tracker node. You can switch the node from Matchmove to Stabilize functionality by checking the ‘invert’ knob.
Sometimes, especially if you are linking to a parent node that is a Transform, you will just want to inherit the exact transformation of the parent node. If this is the case, you can click the Delete Rel button, and it will remove the relative transformation. Once the TransformLink node is created, you can also use the Set Target button to link it to a different Tracker or Transform node. You can also bake the expressions on the transform knobs with the Bake Expressions button.
This node might seem redundant to the built-in functionality of Nuke7’s Tracker node that lets you create a Matchmove or Stabilize transform. Unfortunately the transform nodes that are created using this method are burdened by excessive python code on the ‘invert’ knob, which is evaluating constantly, degrading Nuke’s UI performance. Turn on “Echo python commands to output window” in the Preferences under Script Editor. In a heavy script with a few of these nodes, you will probably notice stuttery UI responsiveness and freezing.
Installation:
Put the tracker_link.py file somewhere in your nuke path. You can add the script as commands to your Nodes panel. This code creates a custom menu entry called “Scripts”. I have them set to the shortcuts Alt-O and Alt-L.
import tracker_link
nuke.toolbar('Nodes').addMenu('Scripts').addCommand('Link Roto', 'tracker_link.link_roto()', 'alt+o')
nuke.toolbar('Nodes').addMenu('Scripts').addCommand('Link Transform', 'tracker_link.link_transform()', 'alt+l')
http://jedypod.com/tracker-link-tools
Extending the VendingMachine
Mick Jones
Greenhorn
Joined: May 21, 2009
Posts: 7
posted
May 27, 2009 03:12:14
1
Hi everyone
I have fixed the error and tidied up the code, so here is the final, working version of the Vending Machine.
I want to carry on developing this as I feel I am learning a lot doing this...
Does anyone have any cool ideas of how to extend it? My ideas are:
- Convert program to Swing so it has a GUI
- Create some kind of Engineer interface, i.e. the vending machine engineer opens the machine up and does clever stuff like gets debug info, reports, reprograms locations and costs. (no idea how to add this interface)
- Crazy idea... Remote Vending Machine administration, i.e. same as above but engineer can access the info remotely via the web to diagnose / re-program the vending machine (like this idea but not sure how to do it)
- Any other ideas?
p.s. i am also interested in the Cattle Drive course, how good is it? Please dont just send me the link as I have already read the info, i would like to hear from others if you feel i would benefit?
Please also comment on how good code is etc as I am trying to learn!
VendingMachine.java
package machine;

import inventory.Product;
import java.io.*;
import java.util.*;

public class VendingMachine {

    public static void main(String[] args) {
        VendingMachine vm = new VendingMachine();
        vm.go();
    }

    public void go() {
        double moneyIn = 0.00;
        boolean vend = true;

        //Create HashMap of products
        Map<String, Product> productMap = new HashMap<String, Product>();

        //Create some dummy products
        Product productOne = new Product("Mars Bar", 0.35, 10);
        Product productTwo = new Product("Kit Kat", 0.25, 10);
        Product productThree = new Product("Peanut butter cups", 0.25, 0);

        //Add products to HashMap (setup vending machine)
        productMap.put("A1", productOne);
        productMap.put("A2", productTwo);
        productMap.put("A3", productThree);

        //Get user input
        BufferedReader reader = new BufferedReader(new InputStreamReader(System.in));
        System.out.println("How much money would you like to insert?");
        try {
            String input = reader.readLine();
            moneyIn = Double.parseDouble(input);
        } catch (Exception ex) {
            System.out.println("Invalid selection, try again.");
        }

        while (vend) {
            System.out.println("Please make your selection");
            try {
                String selection = reader.readLine();
                if (selection.equals("Q")) {
                    System.out.println("Your change is " + moneyIn);
                    System.out.println("Bye.");
                    vend = false;
                } else {
                    //Buy selected product
                    moneyIn = buyProduct(productMap.get(selection), moneyIn);
                    System.out.println("Money remaining " + moneyIn);
                }
            } catch (Exception ex) {
                System.out.println("Invalid selection, try again.\n" + ex);
            }
        }
    }

    /**
     * Displays all Product information within the current Vending Machine
     *
     * @param productMap A HashMap of product objects referenced by Key (Vending Machine location)
     */
    public void getStock(Map<String, Product> productMap) {
        for (Map.Entry<String, Product> product : productMap.entrySet()) {
            System.out.println("Location: " + product.getKey());
            System.out.println("Name: " + product.getValue().getName());
            System.out.println("Stock: " + product.getValue().getStock());
            System.out.println("");
        }
    }

    /**
     * Buys a product
     *
     * @param product The Product to buy
     * @param moneyIn The money inserted so far
     * @return The amount of money remaining after the purchase
     */
    public double buyProduct(Product product, double moneyIn) {
        if (product.getStock() < 1) {
            System.out.println(product.getName() + " out of stock.");
        } else {
            if (moneyIn >= product.getPrice()) {
                decrementStock(product);
                System.out.println(product.getName() + " dispensed.");
                moneyIn = moneyIn - product.getPrice();
            } else {
                System.out.println("You do not have enough money.");
            }
        }
        return moneyIn;
    }

    /** Remove product from stock levels */
    public void decrementStock(Product product) {
        product.setStock(product.getStock() - 1);
    }
}
Product.java
package inventory;

public class Product {

    private String name;
    private double price;
    private int stock;

    public Product(String name, double price, int stock) {
        this.name = name;
        this.price = price;
        this.stock = stock;
    }

    /** Get product name */
    public String getName() {
        return name;
    }

    /** Get product price */
    public double getPrice() {
        return price;
    }

    /** Get product stock level */
    public int getStock() {
        return stock;
    }

    /** Set product stock level */
    public void setStock(int stock) {
        this.stock = stock;
    }
}
Currently learning Java
Fred Hamilton
Ranch Hand
Joined: May 13, 2009
Posts: 679
posted
May 27, 2009 07:23:36
0
You could experiment with different program designs.
For example, here you have a main method in the vending machine class, and a call to the vending machine constructor within that main method. You may wish to consider a vending machine class without a main method; the main method and some of the logic would live in a "manager" class. To me that seems more logical when you are managing multiple vending machines.
You may wish to create an application with a graphical user interface. In which case the focus might be on designing your vending machine and snack classes to be re-usable, i.e. so they could work without modification in both a GUI based application and a non-GUI based application. That is the essence of modular design, in my opinion.
You could even have it so it works in both an
applet
(browser based application that is accessible over the internet), or in a desktop GUI that uses JFrames.
In all these cases, you should be able to design your product and vending machine classes such that you don't need to make many changes to have them work in a GUI or a non-GUI environment.
p.s. there are always more than one or two ways to get the job done. I could have written a program that did exactly the same thing, but it might not have used a go() method; maybe I might have had more stuff in my main(). Not saying that is better or worse, each has its advantages I guess.
Tanweer Noor
Greenhorn
Joined: Sep 22, 2009
Posts: 1
posted
Sep 22, 2009 12:18:42
0
Hello
Your code is giving a lot of compilation errors. Let me know the fix
thanks
Campbell Ritchie
Sheriff
Joined: Oct 13, 2005
Posts: 44433
33
posted
Sep 22, 2009 14:22:23
0
Welcome to JavaRanch
, Tanweer Noor
Try copying the code with the "view plain" link above, otherwise tell us exactly what errors you are experiencing.
Wim Vanni
Ranch Hand
Joined: Apr 06, 2011
Posts: 96
I like...
posted
May 31, 2011 06:37:52
0
Some quick ideas and remarks. Sadly I don't have the time to go into this extensively. (This does not mean there's a lot 'wrong' with your code ;-)
- Use a separate class for the productMap
- Get (and store) the data from a 'datastore' (possibly a database or xml, but this can as well be a simple txt file)
- Implement a maximum stock level
- Careful with the Exception handling you've set up; catching an exception doesn't necessarily mean it's 'only' an invalid selection; Create specific/custom exceptions that you throw and catch and handle appropriately
- the attribute moneyIn can become negative (although you capture this in line 91); seems inappropriate
- try implementing toString() to the Product class; could be useful to replace some of the product.getName() calls; this method could already take the stock level into account
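Wim's toString() suggestion above could look something like this (a hypothetical sketch; the exact format and the stock-aware wording are just one option, not part of the original exercise):

```java
// Product with a suggested toString(); the format here is an illustration.
class Product {
    private String name;
    private double price;
    private int stock;

    Product(String name, double price, int stock) {
        this.name = name;
        this.price = price;
        this.stock = stock;
    }

    // toString() can already take the stock level into account,
    // replacing some of the product.getName() calls in VendingMachine
    @Override
    public String toString() {
        String availability = (stock > 0) ? stock + " in stock" : "out of stock";
        return name + " (" + price + ", " + availability + ")";
    }
}

public class ToStringDemo {
    public static void main(String[] args) {
        // println calls toString() automatically
        System.out.println(new Product("Mars Bar", 0.35, 10));          // → Mars Bar (0.35, 10 in stock)
        System.out.println(new Product("Peanut butter cups", 0.25, 0)); // → Peanut butter cups (0.25, out of stock)
    }
}
```

With this in place, `System.out.println(product)` prints a readable summary without any getter calls at the call site.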
I'm not familiar with the vending machine exercise but as far as I can see the code you show here deserves a good grade.
Good luck with your
java
learning track, and remember: the sky is the limit! :-)
Cheers,
Wim
Quick edit: had a little lightbulb moment
: Implement a way to have the vending machine react as it would in real life: sometimes not dispensing the chosen item, sometimes even dropping two!
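That "real life" behaviour could be sketched with java.util.Random; the class name and the probabilities below are arbitrary illustrations, not a prescribed design:

```java
import java.util.Random;

// Hypothetical sketch: a dispenser that usually drops one item,
// sometimes nothing, and occasionally two. Probabilities are arbitrary.
public class FlakyDispenser {
    private final Random random;

    public FlakyDispenser(Random random) {
        this.random = random;
    }

    /** Returns how many items actually drop: usually 1, sometimes 0 or 2. */
    public int dispense() {
        int roll = random.nextInt(100);
        if (roll < 5) return 0;   // stuck: nothing comes out
        if (roll < 10) return 2;  // lucky: two items drop
        return 1;                 // normal case
    }

    public static void main(String[] args) {
        FlakyDispenser d = new FlakyDispenser(new Random());
        System.out.println("Items dispensed: " + d.dispense());
    }
}
```

The vending machine's buyProduct() could then decrement stock by whatever dispense() returns.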
I agree. Here's the link:
http://www.coderanch.com/t/447084/java/java/Extending-VendingMachine
Summary
Lets you set how mosaic dataset overviews are generated. The settings made with this tool are used by the Build Overviews tool.
Usage
This tool is used when there are specific parameters you need to set to generate your overviews, such as
- Defining the location to write the files
- Defining an extent that varies from the boundary
- Defining the properties of the overview images, such as the resampling or compression methods
- Defining the overview sampling factor
Use the Build Overviews tool to generate the overviews after they've been defined with this tool.
You can use a polygon feature class to define the footprint of the overview. If you do not wish to use all the polygons in the feature class, you can make a selection on the layer in the table of contents or use a tool such as Select Layer By Attribute or Select Layer By Location to select the desired polygons.
The default tile size is 128 by 128. The tile size can be changed in the Environment Settings.
This tool can take a long time to run if the boundary contains a large number of vertices.
Syntax
DefineOverviews(in_mosaic_dataset, {overview_image_folder}, {in_template_dataset}, {extent}, {pixel_size}, {number_of_levels}, {tile_rows}, {tile_cols}, {overview_factor}, {force_overview_tiles}, {resampling_method}, {compression_method}, {compression_quality})
Derived Output
Code sample
DefineOverviews example 1 (Python window)
This is a Python sample for DefineOverviews.
import arcpy
arcpy.DefineOverviews_management("c:/workspace/fgdb.gdb/md01", "c:/temp", "#", "#",
                                 "30", "6", "4000", "4000", "2",
                                 "CUBIC", "JPEG", "50")
DefineOverviews example 2 (stand-alone script)
This is a Python script sample for DefineOverviews.
# Define Overviews to the default location
# Define Overviews for all levels - ignore the primary Raster pyramid
# Define Overviews compression and resampling method
import arcpy
arcpy.env.workspace = "C:/Workspace"
arcpy.DefineOverviews_management("DefineOVR.gdb/md", "#", "#", "#", "#", "#", "#", "#", "#",
                                 "FORCE_OVERVIEW_TILES", "BILINEAR", "JPEG", "50")
Environments
Licensing information
- Basic: No
- Standard: Yes
- Advanced: Yes
https://desktop.arcgis.com/en/arcmap/latest/tools/data-management-toolbox/define-overviews.htm
Using Connected Objects To Keep Your Job
We keep hearing about the Internet of Things. The basic idea is simple: Everyday objects (like a watch or even a toothbrush) will soon be connected to Internet, and will be able to communicate with each other.
The last hack day at marmelab allowed me to learn more about this subject, using the range of objects brought by Ninja blocks.
Time for me to suit up my outfit, take my shurikens and transform into a fast & furtive Ninja. Thanks to Ninja Blocks and their multiple captors/actuators & NodeJS.
You can ensure the safety of your defense if you only hold positions that cannot be attacked (Sun Tzu - The Art of War)
Connected Objects to The Rescue of The Ninja – Almost-Perfect
A great Ninja is a ninja who cannot be detected, so when he’s working he shouldn’t be surprised by his sensei opening the door of his office. He must protect his workstation even if he’s going to the restroom.
To be able to detect these actions, we are going to use 2 sensors (proximity & movement). They are packaged in the Ninja Blocks kit. The first one is going to detect the door opening, while the second will determine if the sensei is entering or if we are leaving the office.
What’s The Plan ?
Let’s imagine that a Ninja is on a website not related to his job. In this - very rare - case, a new tab will be opened in his favorite browser (with a more professional website, like github.com) if the sensei comes into his office.
To prevent our sensei from discovering the page of our favorite social network, the current session will be locked when we will leave the room.
Finally, a snapshot & facial recognition functionality will be added to ensure that no other Ninja can sit at our place without our permission.
Setup Some Rules
Ninja Blocks offers a great interface to manage rules, install applications created by our fellow Ninjas, or handle access to the API.
A dashboard shows us the state of the sensors connected to the Block: a graph of temperature & humidity evolution, buttons to change the colors of the Block's eyes, etc.
A rule is created in 3 simple steps:
- First, drag & drop paired sensors (e.g. the movement sensor) and a threshold if needed.
- Next, associate an action to launch when the sensor will be triggered: send an SMS, an email, POST request to a configured URL, etc.
- Finally, name the rule & define the frequency of the trigger.
All of these actions can be natively used with services like Facebook, Dropbox, or a free SMS sending service.
Using Proximity And Movement Sensors
The proximity sensor triggers an action when both parts are separated (at least 2 centimeters). In our installation, this sensor is fixed to the door (for the first part), and to the door frame (for the other one). Each time the door opens, an action is triggered.
The movement sensor is placed in the room pointing to the door. It is used to know if a sensei is coming or if we are leaving the room.
Setup Some Code
Ninja Blocks contains a “web hook” system. In the interface, we can configure rules related to events.
Here we choose to call some URLs:
- /door-open when the proximity sensor is triggered (the door opens)
- /detect when someone moves in front of the movement sensor.
Node.js will gather all of these events (with the express module) in a “server” application:
var app = require('express')();
var http = require('http');
var server = http.createServer(app);
var io = require('socket.io').listen(server, { log: false })

app.post('/door-open', function (req, res) {
});

app.post('/detect', function (req, res) {
});

io.on('connection', function(socket){
    currentSocket = socket;
});

io.on('disconnect', function(socket){
    currentSocket = null;
});
Goal
The goal is to apply these conditions:
- If the door is opening and the movement sensor is triggered, then the sensei is coming
- If the door is opening and the movement sensor is not triggered during 5 seconds, then the Ninja leaves the room.
Observation is a basic principle in the Ninja philosophy.
Then we are going to create 3 statuses in our “server” application: doorOpened, exitDetected & senseiDetected:
var exitDetected = 0;
var senseiDetected = 0;
var doorOpened = 0;
var existStatusInterval = 0;

app.post('/door-open', function (req, res) {
    doorOpened = 1;
    existStatusInterval = setTimeout(setExitStatus, 5000);
    dispatchInfos();
    res.send();
});

app.post('/detect', function (req, res) {
    clearTimeout(existStatusInterval);
    if (doorOpened) {
        senseiDetected = 1;
    }
    doorOpened = 0;
    dispatchInfos();
});

function setExitStatus(){
    exitDetected = 1;
    dispatchInfos();
}

function dispatchInfos(){
    if (!currentSocket) {
        return;
    }
    currentSocket.emit('status', {
        sensei: senseiDetected,
        exit: exitDetected
    });
    exitDetected = 0;
    senseiDetected = 0;
}
When the door opens, the doorOpened status is set to 1. Then, a timeout starts to change the exitDetected status within 5 seconds.
When a movement is detected, the timeout is cleared; and if the doorOpened status was set to 1, the senseiDetected status changes to 1.
The dispatchInfos() method sends these 3 statuses to the computer to protect.
Status Retrieving Client Side
During the hack day, my computer wasn’t able to receive data from the Internet (despite my Ninja talents, I couldn’t get access to the Wi-Fi router).
So the application server is hosted on an external service (like an Amazon EC2 instance). The Ninja’s computer is able to connect to this server via socket.io and receive the last 3 statuses:
var io = require('socket.io-client');
var socket = io.connect('');

socket.on('status', function(status){
    if (status.exit == 1) {
        lockScreen();
    } else if (status.sensei == 1) {
        launchSite();
    }
});

socket.on('error', function(err){
    // Do something with the error
});

function lockScreen(){
    exec('/System/Library/CoreServices/Menu\\ Extras/User.menu/Contents/Resources/CGSession -suspend');
}

function launchSite(){
    exec('open -a Google\\ Chrome ""');
}
When the distant server sends the status (via dispatchInfos()), it's directly processed through the socket connection. So there is no need to create a polling system requesting a URL each second.
When the exit status changes to 1 ("Ninja is leaving"), we launch the command to lock his session (on OSX):
/System/Library/CoreServices/Menu\ Extras/User.menu/Contents/Resources/CGSession -suspend
When the presence of the sensei is detected, a github.com page is opened in a new tab of Chrome (Ninja’s favorite browser):
open -a Google\ Chrome ""
These commands are specific to the OSX environment; a simple search will find alternative commands for other environments, like on gnome:
gnome-screensaver-command -l
Detect Another Ninja in Front of Your Desk
To avoid bad tricks from your teammates, like duct tape under your mouse or glue on the keyboard, we are going to set up facial recognition for the person in front of our computer.
When the alarm is armed and another Ninja sits on our chair (e.g. when the movement sensor placed next to the computer is triggered), a picture is taken with the webcam of the computer.
This picture is resized and uploaded to our server application so it can be analyzed via the Skybiometry webservice to retrieve a matching ratio (from 0 to 100).
If this ratio is under 50, an SMS will be sent to warn of the intrusion.
Enable / Disable The Alarm
The Ninja Blocks remote is used to arm & disarm the alarm.
The first thing to do is to create a rule triggered by the remote which calls the
/button URL on our server.
2 additional rules are also set up to change the color of our Ninja Block's eyes, in order to show whether the alarm is armed or not. These rules will be launched with a web hook on the /url-alarm-on & /url-alarm-off URLs:
var alarmArmed = false;

// Called by Ninja Block when the remote button is pressed
app.post('/button', function (req, res) {
    alarmArmed = !alarmArmed;
    setAlarmStatus(alarmArmed, function(err){
        if (err) {
            console.log('err : ' + err);
        }
    });
    dispatchInfos();
    res.send();
});

// Called to change the Ninja Blocks eyes
function setAlarmStatus(status, cb){
    var path = status ? '/url-alarm-on' : '/url-alarm-off';
    var setAlarmOptions = {
        hostname: 'api.ninja.is',
        path: path,
        method: 'POST',
        headers: {accept: 'text/plain'}
    };
    req = http.request(setAlarmOptions, function(res){
        res.on('end', function () {
            cb();
        });
        res.on('error', cb);
    });
    req.end();
}
Detecting an Intrusion
Now we can change the method called when a movement is detected to change the intrusion status:
app.post('/detect', function (req, res) {
    // ...
    if (alarmArmed) {
        intrusion = 1;
        alarmArmed = false;
    }
    // ...
});

function dispatchInfos(){
    // ....
    currentSocket.emit('status', {
        // ...
        intrusion: intrusion
    });
    //...
    intrusion = 0;
}
Taking a Picture With the Webcam
Imagesnap is a command line application used to take a picture with the webcam of a *nix computer. This application doesn't include drivers, so you should already have the correct drivers for your hardware installed.
Depending on your webcam quality, the picture can weigh a few MB, which is too heavy for a Ninja, especially when it has to be uploaded.
In this case, the picture is resized by ImageMagick.
Installing ImageMagick (OSX):
sudo port install ImageMagick
Adding the node module in package.json:
"imagemagick": "0.1.3",
Then run:
npm install
Facial Recognition
Setting up a facial recognition system during the second part of the hack day looked ambitious. A Ninja is quite productive, but he can't stop time.
Another rule of the Ninja philosophy is to hide and seek a better opportunity to reach victory. So I started to search for an API that could recognize a person on a picture, and I found Skybiometry.
This service provides a free API with a limit of 100 calls per hour and 5000 per day, which is reasonable for our usage.
After signing up, we should create a namespace in the administration interface of Skybiometry. This namespace is called "WorkmateProtection".
Then we need a “tag” to recognize us. A tag allows identifying a person on a picture. So we have to send a "reference" picture of us via the webservice:
This service returns a temporary tag like TEMP_F@xxx.xxx.xxx. We can now associate this temporary tag with a name in the previously created namespace ("me@WorkmateProtection"):
We can now use this service to retrieve a matching ratio (from 0 to 100) with the reference picture.
Setup Facial Recognition on The Application
When the intrusion status is sent, the application can take a picture with the webcam:
socket.on('status', function(status){
    // ...
    if (status.intrusion) {
        takeSnapShotAndCompareTo('me@WorkmateProtection');
    }
});

function takeSnapShotAndCompareTo(tag, done){
    var file;
    var fileName;
    async.waterfall([
        // Take a picture with the webcam
        function(callback){
            takeSnapShot(callback);
        },
        // Resize snapshot
        function(snapShotRslt, stderr, callback){
            file = snapShotRslt.trim().split('...').pop();
            fileName = file.split('/').pop();
            // Resize file
            imagemagick.resize({
                srcPath: file,
                dstPath: file,
                width: 512
            }, callback);
        },
        // Upload image
        function(stdin, stdout, callback){
            upload(file, callback);
        },
        // Recognize it
        function(stdout, stderr, callback){
            getMatchValue('' + fileName, tag, callback)
        },
        // Handle result
        function(result, callback){
            if (result < 50) {
                sendSMS(done);
            }
            if (done) {
                done()
            }
        }
    ], function(err, rslt){
        if (err) {
            console.log('Err : ' + err);
            return;
        }
    });
}

// Take a snapshot via imagesnap
function takeSnapShot(cb){
    var now = new Date();
    exec('imagesnap ~/Desktop/photo-' + now.getTime() + '.jpg', cb);
}

// Upload a picture to the public dir of the server
function upload(file, cb){
    exec('scp -i ~/.ssh/myPem.pem ' + file + ' user@my-server:/var/app/public', cb);
}

function getMatchValue(url, tag, cb){
    var apiURL = '' + config.skybiometry.key + '&api_secret=' + config.skybiometry.secret +
        '&uids=' + tag + '&urls=' + url + '&attributes=all';
    http.get(apiURL, function(res){
        content = '';
        res.on('data', function(chunk){
            content += chunk;
        });
        res.on('end', function(){
            content = JSON.parse(content);
            // Nobody found : return 0
            if (content.photos[0].tags == undefined ||
                content.photos[0].tags.length == 0 ||
                content.photos[0].tags[0].uids.length == 0) {
                return cb(null, 0);
            }
            // Return the ratio of similarity
            cb(null, content.photos[0].tags[0].uids[0].confidence);
        })
    })
    .on('error', cb);
}
The takeSnapShotAndCompareTo(tag, done) method takes the path of a picture taken with the webcam. Then this picture is resized and uploaded to the server (using scp). Skybiometry uses only hosted pictures, no raw data.
When the upload is done, we call the webservice to retrieve the matching ratio. If this ratio is under 50, an SMS is sent to warn of the intrusion. This is done thanks to a new rule with a webhook as the trigger and SMS sending as the action:
function sendSMS(cb){
    var sendSMSOptions = {
        hostname: 'api.ninja.is',
        path: '/my-SMS-web-hook',
        method: 'POST',
        headers: {accept: 'text/plain'}
    };
    req = http.request(sendSMSOptions, function(res){
        res.on('end', function () {
            cb();
        });
        res.on('error', function(err){
            cb(err);
        })
    })
    .on('error', function(e) {
        cb(e);
    });
    req.end();
}
Conclusion
We have seen in this article that being a Ninja is no small job. Thanks to the range of sensors brought by Ninja Blocks, we've got the ninjutsu to build quick applications for a more beautiful life.
Rule creation and webhooks allow plugging any device into Ninja Blocks objects.
The application presented in this post can be enhanced by:
- Adding a Ninja automatically to the recognition process by adding a new Tag with a picture taken from the webcam. So we can handle a list of authorized persons.
- Adding a list of trusted websites that can be launched randomly.
- Adding a client mode when the computer is accessible directly from the Internet. So we don’t have to use a server as a proxy to retrieve sensors status.
It's difficult to cover all the features in one article; there are a lot of possibilities with the API / applications / custom blocks. These other features will be discussed in another post later.
Now you can take your own Hattori Hanzō saber and start hacking this Block.
https://marmelab.com/blog/2013/07/22/using-connected-objects-to-keep-your-job.html
# One Day in the Life of PVS-Studio Developer, or How I Debugged Diagnostic That Surpassed Three Programmers
Static analyzers' primary aim is to search for errors missed by developers. Recently, the PVS-Studio team again found an interesting example proving the power of static analysis.
You have to be very attentive while working with static analysis tools. Often the code that triggered the analyzer seems to be correct. So, you are tempted to mark the warning as false positive. The other day, we fell into such a trap. Here's how it turned out.
Recently, we've [enhanced the analyzer core](https://pvs-studio.com/en/blog/posts/cpp/0824/). When viewing new warnings, my colleague found one that looked like a false positive. He showed the warning to the team leader, who glanced through the code and created a task. I took the task. That's what brought together three programmers.
The analyzer warning: [V645](https://pvs-studio.com/en/w/v645/) The 'strncat' function call could lead to the 'a.consoleText' buffer overflow. The bounds should not contain the size of the buffer, but a number of characters it can hold.
The code fragment:
```
struct A
{
char consoleText[512];
};
void foo(A a)
{
char inputBuffer[1024];
....
  strncat(a.consoleText, inputBuffer, sizeof(a.consoleText) -
          strlen(a.consoleText) - 5);
....
}
```
Before we take a look at the example, let's recall what the *strncat* function does:
```
char *strncat(
char *strDest,
const char *strSource,
size_t count
);
```
where:
* 'strDest' — pointer to a string to append to;
* 'strSource' — pointer to a string to copy from;
* 'count' — maximum number of characters to copy.
At first glance, the code seems great. The code calculates the amount of free buffer space. And it seems that we have 4 extra bytes... We thought the code was written in the right way, so we noted it as an example of a false warning.
Let's see if this is really the case. In the expression:
```
sizeof(a.consoleText) - strlen(a.consoleText) - 5
```
the maximum value can be reached with the minimum value of the second operand:
```
strlen(a.consoleText) = 0
```
Then the result is 507, and no overflow happens. Why does PVS-Studio issue the warning? Let's delve into the analyzer's internal mechanics and try to figure it out.
Static analyzers use data-flow analysis to calculate such expressions. In most cases, if an expression consists of compile-time constants, data flow returns the exact value of the expression. In all other cases, as with the warning, data flow returns only a range of possible values of the expression.
In this case, the *strlen(a.consoleText)* operand value is unknown at compile time. Let's look at the range.
After a few minutes of debugging, we get the whole 2 ranges:
```
[0, 507] U [0xFFFFFFFFFFFFFFFC, 0xFFFFFFFFFFFFFFFF]
```
The second range seems redundant. However, that's not so. We forgot that the expression may receive a negative number. For example, this may happen if *strlen(a.consoleText) = 508*. In this case, an unsigned integer overflow happens. The expression results in the maximum value of the resulting type — *size\_t*.
It turns out that the analyzer is right! In this expression, the *consoleText* field may receive a much larger number of characters than it can store. This leads to [buffer overflow](https://pvs-studio.com/en/blog/terms/0067/) and to [undefined behavior](https://pvs-studio.com/en/blog/terms/0066/). So, we received an unexpected warning because there is no false positive here!
That's how we found new reasons to recall the key advantage of static analysis — the tool is much more attentive than a person. Thus, a thoughtful review of the analyzer's warnings saves developers time and effort while debugging. It also protects from errors and snap judgments.
https://habr.com/ru/post/566230/
,
unfortunately I had to find that (a) you guys did implemented a lot of
features, which means there is quite a bit of work in defining an XML
format (but of course that is good for me in the long run) and (b) I
didn't find as much time as I hoped to have (as usual). But since I
really want a proper way of embedding JFreeChart into ToscanaJ, I am
still working on defining the XML Schema and I have produced a draft
which gives at least some idea where I am heading. I haven't implemented
a parser yet (i.e. a factory creating charts from JDOM Elements) and it
does implement only CategoryPlots yet, but maybe you want to look at it
even without a proof of concept. The main input was the source code, I
tried to follow the structure of the classes and their members.
Here is an HTML version of the schema:
The schema itself can be found here:
Note that you might need to download this file via context menu or view
the source since at least Mozilla renders it as XML by default, which
leaves only some bits of annotation.
The next steps for me will be writing the factory class mentioned above
and extending the XML to other formats than the CategoryPlot. The two
mid-term aims as proofs of concept are:
- write an XMLized version of the demo app (not necessarily supporting
all charts in early versions)
- add some basic chart support into ToscanaJ (using some namespace and
the factory class)
Another thing I still haven't done (and maybe should have done by now)
is to check how all this fits with the XML marshalling stuff coming from
Sun (see references below).
If you guys have some time to spare: please have a quick look at the
HTML documentation and/or the schema and tell me what you think. Since I
haven't use the library myself I am guessing a bit here and there and
although I am quite confident that the approach will work, I am not sure
if it is the best for this library. Any comments are welcome.
Cheers,
Peter
Some references for Java's XML bindings Google found:
-
-
-
-
https://sourceforge.net/p/jfreechart/mailman/jfreechart-developers/?style=flat&viewmonth=200304
A proxy that applies AES encryption over requests to prevent scrapers from easily accessing our data.
The central idea behind the proxy is that it forwards the requests to the underlying API, encrypts the response, and handles the decryption through WASM. Why WASM? Because nobody knows how to decrypt binary to understand what the fuck we’re doing under the hood—and if they do, they deserve to access the data.
How do I use it?
- First you’ll need to build the decryption VM
$ make build-asma shared-key="15365230-aa22-4f5f-aa46-f86076a0b6b2"
The key
The key 15365230-aa22-4f5f-aa46-f86076a0b6b2 will be shared between the VM and the proxy. It will be used to encrypt all the data and should be kept secret.
- Configure the proxy. Open
config.tomland figure out what’s good for you. It’s documented;
- Run the proxy!
$ go run main.go
It will listen on :25259. You can make a request to it using httpie or cURL—whatever. But you can also try python3 -m http.server and open the index.html we’ve put together, which shows how to use the VM to decrypt the proxy responses.
Here’s everything you need:
import init, { proxy, build_info } from "./dist/asma/main.js";

(async () => {
    // Initialize the decryption VM
    await init();

    // Prints build information. This is useful to put in Sentry metadata and stuff like that...
    console.log(build_info());

    // This is normally the value of the `Authorization` header that you send to the server
    // to authorize the clients, if you don't want to pass it to the proxy and only keep
    // the `Shared Key`, it's fine. Otherwise, it adds another layer of security by encrypting
    // responses individually with everyone's token.
    const authorization = "";

    const response = await fetch("");

    // Grab everything that came back from the proxy response as a bytes array
    const bytes = await response.arrayBuffer();

    // ...and send it to the VM for decryption
    console.log(await proxy(new Uint8Array(bytes), authorization));
})();
How does it work?
The pitch
Why put a proxy if you can encrypt directly on the API?
That’s true. You can. But you should ask yourself the following questions:
- Are you willing to make the PR across your repositories and deploy that solution straight away?
- Are you willing to sacrifice the DX of using your regular API and deal with flags for whether or not you should encrypt the response?
- Do you want to carry over response encryption logic to your existing legacy/already-working-kind-of-thing stuff?
If so…then you’re good to go. Otherwise, feel free not to worry about a proxy in front of your existing APIs.
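For readers who want to see the core idea in code: the server side of such a proxy boils down to encrypting the upstream response with the shared key before sending it on. Below is a minimal Go sketch using AES-256-GCM. It is an illustration, not the actual asma implementation — the key derivation (SHA-256 of the shared key string) and the wire format (nonce prepended to the ciphertext) are assumptions made here for the example.

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"crypto/sha256"
	"fmt"
	"io"
)

// encrypt seals plaintext with AES-256-GCM, deriving a 32-byte key from the
// shared key string. The random nonce is prepended so the client can decrypt.
func encrypt(sharedKey string, plaintext []byte) ([]byte, error) {
	key := sha256.Sum256([]byte(sharedKey))
	block, err := aes.NewCipher(key[:])
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := io.ReadFull(rand.Reader, nonce); err != nil {
		return nil, err
	}
	// Seal appends the ciphertext to the nonce slice we pass as dst.
	return gcm.Seal(nonce, nonce, plaintext, nil), nil
}

// decrypt is what the client-side VM would do: split off the nonce, then open.
func decrypt(sharedKey string, data []byte) ([]byte, error) {
	key := sha256.Sum256([]byte(sharedKey))
	block, err := aes.NewCipher(key[:])
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce, ciphertext := data[:gcm.NonceSize()], data[gcm.NonceSize():]
	return gcm.Open(nil, nonce, ciphertext, nil)
}

func main() {
	out, _ := encrypt("15365230-aa22-4f5f-aa46-f86076a0b6b2", []byte("api response"))
	back, _ := decrypt("15365230-aa22-4f5f-aa46-f86076a0b6b2", out)
	fmt.Println(string(back))
}
```

In the real proxy this encrypt step would sit in the HTTP handler, between reading the upstream response body and writing it back to the client.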
|
https://golang.ch/a-golang-based-proxy/
|
CC-MAIN-2022-40
|
refinedweb
| 444
| 65.93
|
Handshake Distribution Quadrants
Understanding Handshake’s Asset Distribution
Handshake puts the world’s root zone on a blockchain, making it a public commons. And in order to bootstrap this commons into a valuable decentralized namespace, Handshake must seed ownership into the broadest and most intelligible stakeholder community it can. That’s why Handshake’s distribution was catalyzed by a series of gifts to various communities and stakeholders (more can be found in the Handshake whitepaper).
The best way to understand Handshake’s distribution is by asset and constituency. Consider a Cartesian coordinate system whose horizontal axis runs from new constituents to legacy constituents, and whose vertical axis runs from coins to names. The resulting four quadrants define four types of distribution: new constituent coins, legacy constituent coins, legacy constituent names, and new constituent names.
(1) New Constituent Coins: FOSS Community Airdrop
Two thirds of HNS were distributed via “airdrop allocations” (more info), the majority of which was airdropped to open source developers. Open source developers have created the fertile ground that has allowed for the Internet, crypto, and Handshake to exist. The airdrop is both a thank you to the open source community and a means to empower them to own a piece of the Handshake commons as it matures in price and usage. Putting HNS in the hands of open source developers is also an effective way to source and convert competent stewards of the protocol as it flourishes.
An airdrop of ~4,246 HNS was distributed to over 180k FOSS developers, or roughly ~70% of the genesis HNS supply (which you can verify at any time under Consensus.js in the HSD codebase). GitHub users with 15+ followers as of Jan. 2019 had their GitHub SSH & PGP keys included in the merkle tree.
Roughly 30,000 keys from the PGP WOT Strongset have also been included in the tree. Lastly, Hacker News accounts which were linked with Keybase accounts are included in the tree — provided they were ~1.5 years old before the crawl.
Further, certain open source projects, non-profits, and hackerspaces were identified and distributed a total of $10.2MM raised from project sponsors as an additional token of gratitude for FOSS contributions. Read more about the community grant here. Directions to claim your airdrop can be found here.
(2) Legacy Constituent Coins: Reserved HNS
Handshake does not replace existing DNS architecture, it extends it. The legacy internet infrastructure community is embraced not just in its architecture, but in its distribution as well. Putting HNS in the hands of existing DNS stakeholders is both a sign of good faith and an effort to build a diverse community of namespace advocates. Given an open mind to the innovations that Handshake offers, these constituents are uniquely positioned to understand the benefits of a decentralized namespace. In fact, several of these legacy registrars have begun to support HNS top-level domains.
Through the genesis “premine” (airdrops & name claims do not add to the total circulating supply until they are claimed on-chain, and their respective maturity period has ended), ten percent of HNS were distributed to these existing DNS stakeholders including Alexa Top 100,000 sites, legacy certificate authorities, and legacy name registrars. Directions to claim your coins can be found here.
(3) Legacy Constituent Names: Reserved TLDs
Removing the artificial constraints on top-level domains opens up all possible character combinations for use. But practically speaking, for Handshake to be an extension of the existing domain name system, it must consider the implications for legacy users here as well. Handshake reserves names for the Alexa Top 100k sites for their current owners: for example, google.com -> google. This means many major internet resource providers will have their own names. This scheme will also prevent names like “Google” from being used maliciously or squatted on. Note that this is an added bonus reserved for popular domains. All existing domains under .com, .net, .org and so on will continue to work normally on Handshake.
These entities can claim their legacy domain name on HNS in a process called a “reserved name claim” that requires a long DNSSEC proof of ownership formatted into a special transaction type and confirmed on the blockchain. This process is opt-in and many recipients may never claim their name. After four years, the names on the reserved list can be won in regular name auctions on the blockchain.
Directions to claim your names can be found here.
(4) New Constituent Names: Weekly Auctions
The set of all other available character combinations after existing TLDs and the 100k reserved names far outweighs what is now unavailable. The balance of names are released weekly over the first year after launch, managing the cadence at which desirable names become available for auction. In doing so, the supply of names can be more widely distributed beyond just the users aware of Handshake in its earliest days. This provides a best effort to allow many users to bid for names desirable to them, while releasing names into circulation in a reasonable amount of time.
Users may submit blinded bids anytime after a name is released for auction. Bidding is open to everyone for roughly 5 days, after which bidders disclose their bids during the reveal period.
For another breakdown of the Handshake’s protocol, see The Case for Handshake.
|
https://medium.com/blockchannel/handshake-distribution-quadrants-f6199ff3cdcd?source=collection_home---4------1-----------------------
|
CC-MAIN-2021-04
|
refinedweb
| 885
| 52.8
|
Hello and welcome to part 9 of the Python for Finance tutorial series. In the previous tutorials, we've covered how to pull in stock pricing data for a large number of companies, how to combine that data into one large dataset, and how to visually represent at least one relationship between all of the companies. Now, we're going to try to take this data and do some machine learning with it!
The idea is to see what might happen if we took data from all of the current companies and fed it through some sort of machine learning classifier. We know that, over time, various companies have different relationships with each other, so, if the machine can recognize and fit these relationships, it's possible we could predict from changes in prices today what will happen tomorrow with a specific company. Let's try!
To begin, all machine learning does is take "featuresets" and attempts to map them to "labels." Whether we're doing K Nearest Neighbors or deep learning with neural networks, this remains the same. Thus, we need to convert our existing data to featuresets and labels.
Our features can be other company's prices, but we're going to instead say the features are the pricing changes that day for all companies. Our label will be whether or not we actually want to buy a specific company. Let's say we're considering Exxon (XOM). What we'll do for featuresets is take into account all company percent changes that day, and those will be our features. Our label will be whether or not Exxon (XOM) rose more than
x% within the next
x days, where we can pick whatever we want for
x. To start, let's say a company is a buy if, within the next 7 days, its price goes up more than 2% and it is a sell if the price goes down more than 2% within those 7 days.
This is something we could also relatively easily make a strategy for. If the algorithm says buy, we can buy, and place a 2% drop stop-loss (basically something that tells the exchange: if the price falls below this number — or goes above it, if you're shorting the company — then exit my position). Otherwise, sell the company once it has risen 2%, or you could be conservative and sell at a 1% rise...etc. Regardless, you could relatively easily build a strategy from this classifier. In order to begin, we need the prices into the future for our training data.
I am going to keep coding in our same script. If this is a problem to you, feel free to create a new file and import the functions we use.
Full code up to this point:
import bs4 as bs
import datetime as dt
import matplotlib.pyplot as plt
from matplotlib import style
import numpy as np
import os
import pandas as pd
import pandas_datareader.data as web
import pickle
import requests

style.use('ggplot')

    with open("sp500tickers.pickle", "wb") as f:
        pickle.dump(tickers, f)
    return tickers

# save_sp500(2010, 1, 1)

end = dt.datetime.now()
for ticker in tickers:
    # just in case your connection breaks, we'd like to save our progress!
    if not os.path.exists('stock_dfs/{}.csv'.format(ticker)):
        df = web.DataReader(ticker, 'morningstar', start, end)
        df.reset_index(inplace=True)
        df.set_index("Date", inplace=True)
        df = df.drop("Symbol", axis=1)
        df.to_csv('stock_dfs/{}.csv'.format(ticker))
    else:
        print('Already have {}'.format(ticker))

def compile_data():
    with open("sp500tickers.pickle", "rb") as f:
        tickers = pickle.load(f)
    main_df = pd.DataFrame()
    for count, ticker in enumerate(tickers):
        df = pd.read_csv('stock_dfs/{}.csv'.format(ticker))
        df.set_index('Date', inplace=True)
        df.rename(columns={'Adj Close': ticker}, inplace=True)
        df.drop(['Open', 'High', 'Low', 'Close', 'Volume'], 1, inplace=True)
        if main_df.empty:
            main_df = df
        else:
            main_df = main_df.join(df, how='outer')
        if count % 10 == 0:
            print(count)
    print(main_df.head())
    main_df.to_csv('sp500_joined_closes.csv')

def visualize_data():
    df = pd.read_csv('sp500_joined_closes.csv')
    df_corr = df.corr()
    print(df_corr.head())
    df_corr.to_csv('sp500corr.csv')
    data1 = df_corr.values
    fig1 = plt.figure()
    ax1 = fig1.add_subplot(111)
    heatmap1 = ax1.pcolor(data1, cmap=plt.cm.RdYlGn)
    fig1.colorbar(heatmap1)
    ax1.set_xticks(np.arange(data1.shape[1]) + 0.5, minor=False)
    ax1.set_yticks(np.arange(data1.shape[0]) + 0.5, minor=False)
    ax1.invert_yaxis()
    ax1.xaxis.tick_top()
    column_labels = df_corr.columns
    row_labels = df_corr.index
    ax1.set_xticklabels(column_labels)
    ax1.set_yticklabels(row_labels)
    plt.xticks(rotation=90)
    heatmap1.set_clim(-1, 1)
    plt.tight_layout()
    plt.show()

visualize_data()
Continuing along, let's begin to process some data that will help us to create our labels:
def process_data_for_labels(ticker):
    hm_days = 7
    df = pd.read_csv('sp500_joined_closes.csv', index_col=0)
    tickers = df.columns.values.tolist()
    df.fillna(0, inplace=True)
This function will take one parameter: the
ticker in question. Each model will be trained on a single company. Next, we want to know how many days into the future we need prices for. We're choosing 7 here. Now, we'll read in the data for the close prices for all companies that we've saved in the past, grab a list of the existing tickers, and we'll fill any missing with 0 for now. This might be something you want to change in the future, but we'll go with 0 for now. Now, we want to grab the % change values for the next 7 days:
    for i in range(1, hm_days+1):
        df['{}_{}d'.format(ticker, i)] = (df[ticker].shift(-i) - df[ticker]) / df[ticker]
This creates new dataframe columns for our specific ticker in question, using string formatting to create the custom names. The way we're getting future values is with .shift, which basically will shift a column up or down. In this case, we shift by a negative amount, which will take that column and, if you could see it visually, shift it UP by i rows. This gives us the future values i days in advance, which we can calculate the percent change against.
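To make the .shift trick concrete, here is a tiny standalone example with made-up prices (the ticker name and numbers are purely illustrative, not real data):

```python
import pandas as pd

# Made-up close prices for an illustrative ticker
df = pd.DataFrame({'XOM': [100.0, 102.0, 99.0, 105.0]})

# shift(-2) moves the column UP two rows, so each row lines up with the
# price 2 days into the future; the last 2 rows have no future, so NaN.
df['XOM_2d'] = (df['XOM'].shift(-2) - df['XOM']) / df['XOM']
print(df)
# Row 0: (99 - 100) / 100 = -0.01, i.e. a 1% drop two days ahead
```

The same pattern, looped over i = 1..7, produces the seven look-ahead columns used in the tutorial.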
Finally:
    df.fillna(0, inplace=True)
    return tickers, df
We're all set here, we'll return the tickers and the dataframe, and we're well on our way to having some featuresets that our algorithms can use to try to fit and find relationships.
Our full processing function:
def process_data_for_labels(ticker):
    hm_days = 7
    df = pd.read_csv('sp500_joined_closes.csv', index_col=0)
    tickers = df.columns.values.tolist()
    df.fillna(0, inplace=True)
    for i in range(1, hm_days+1):
        df['{}_{}d'.format(ticker, i)] = (df[ticker].shift(-i) - df[ticker]) / df[ticker]
    df.fillna(0, inplace=True)
    return tickers, df
In the next tutorial, we're going to cover how we'll go about creating our "labels."
|
https://pythonprogramming.net/preprocessing-for-machine-learning-python-programming-for-finance/?completed=/stock-price-correlation-table-python-programming-for-finance/
|
CC-MAIN-2021-39
|
refinedweb
| 1,118
| 58.89
|
This article shows how to write ForEach extension methods for the IEnumerable<T> and non-generic IEnumerable interfaces, including overloads that perform an action only on elements of a given derived type.
In order to access the extension method include the following using directive in your program:
using Rankep.CollectionExtensions;
You will need a collection which implements the IEnumerable<T> interface (for example: List<T>) on which the method will act. Example:
IEnumerable<Vehicle> Vehicles = new List<Vehicle>();
Now the following code can be used (it looks like the ForEach method of the List<T> class, but it will use our extension method) to loop through the whole collection, and call the Print method (defined by the Vehicle class) on each element:
Vehicles.ForEach(v => v.Print());
Looping through the elements which have a given (derived) type (Car, in the example), can be done in two ways:
Vehicles.ForEach<Vehicle, Car>(c => c.Print());
or:
Vehicles.ForEach((Car c) => c.Print());
Both ways use the same ForEach<T, T2> extension method, but in the second example the types can be determined without explicitly defining them, by specifying the type of the lambda parameter.
A third extension method is available for collections which implement the non-generic IEnumerable interface. It can be used in the same way as the latter method for the generic interface.
An example for such collection would be the Children property of a WPF Grid element. It has a type called UIElementCollection, which implements the IEnumerable interface.
To change the size of all children elements of a Grid (called grid) that is an Ellipse, we can use the third extension method in the following way:
grid.Children.ForEach((Ellipse c) => { c.Width = 30; c.Height = 30; });
Imagine the following data structure:
And a Vehicles variable which is a collection of Vehicle elements. (It implements the IEnumerable<Vehicle> interface).
If you would like to loop through the elements and invoke the Print method on them, then you could use the foreach construct:
foreach (Vehicle item in Vehicles)
{
    item.Print();
}
And what if you would like to perform an action only on those elements which share a given derived type (e.g. Car)?
We would like to be able to write something like this:
The first idea (at least mine was) would be to specify the derived type for the element in the foreach statement and hope it will automagically work. Well, it doesn’t; that is not how foreach works. foreach tries to cast each element in the collection to the type we specified, which will result in an InvalidCastException if there is any item which doesn’t belong to that type.
Okay, in the foreach we should use the base type. What if we check in the loop body if the current element is of the desired type and only execute the action when it is? This is exactly what the extension method will do. Let’s see a sample for the previous Vehicle <- Car example.
foreach (Vehicle item in collection)
{
    if (item is Car)
    {
        action((Car)item);
    }
}
After generalizing it, here is the code of the extension method:
public static IEnumerable<T> ForEach<T, T2>(this IEnumerable<T> collection, Action<T2> action) where T2 : T
{
    //The action can not be null
    if (action == null)
        throw new ArgumentNullException("action");

    //Loop through the collection
    foreach (var item in collection)
    {
        //If the current element belongs to the type on which the action should be performed
        if (item is T2)
        {
            //Then perform the action after casting the item to the derived type
            action((T2)item);
        }
    }

    //Return the original collection
    return collection;
}
It’s an extension method to the IEnumerable<T> interface. The method takes two generic type parameters (T and T2), from which T is the type of the basic collection elements (e.g. Vehicle), and T2 is a derived type of T (e.g. Car) that will be used to filter on which elements the foreach action will be performed. The method also takes two parameters, the first will be the reference to the collection on which the method is invoked. The other will be an Action<T2> delegate, which is the action that will be performed on the collection elements of type T2.
The extension method can be invoked in the following way, using the previous Car and Vehicle classes:
If the two generic types can be inferred from the call, then the explicit listing of them (<Vehicle, Car>) can be omitted. One way to do this is to specify the type of the c lambda argument:
In this case the compiler will be able to determine the types, because T is given by the type of the Vehicles collection, and we explicitly specified T2 in the left side of the lambda expression.
If we would like to use this extension method to perform an action on the whole collection, then we would have to specify the type of T2, which in this case will be the same as T. To allow omitting this redundant information we can define a new extension method which has only one generic parameter and simply calls the other extension method:
public static IEnumerable<T> ForEach<T>(this IEnumerable<T> collection, Action<T> action)
{
    //Call the other extension method using the one generic type for both generic parameters
    return collection.ForEach<T, T>(action);
}
Both of the shown methods extend the generic IEnumerable<T> interface, but not every collection is generic. To make it possible to use these methods on non-generic ones, one more extension method will be provided, which extends the IEnumerable interface.
The method will only call the first extension method and then return the original collection.
In order to call that method we have to turn the non-generic collection into a generic one. To do this we will use the Cast<T>() method and the fact that everything inherits from the object class.
public static IEnumerable ForEach<T>(this IEnumerable collection, Action<T> action)
{
    //Cast the collection to an IEnumerable<T> and call the already defined extension method
    collection.Cast<object>().ForEach<object, T>(action);
    return collection;
}
Thank you for reading, I hope you enjoyed and feel free to comment.
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
Vehicles.OfType<Car>().ForEach(c => c.Print());
this.LayoutRoot.Children.OfType<System.Windows.Shapes.Ellipse>().ToList().ForEach(el => el.Width = 100);
foreach (Car c in Vehicles.OfType<Car>())
{
    c.Print();
}
Vehicles.ForEach((Car c) => c.Print());
grid.Children.Cast<UIElement>().ForEach((Ellipse c) => { c.Width = 30; c.Height = 30; });
|
http://www.codeproject.com/Articles/380153/ForEach-elements-with-a-derived-type-in-an-IEnumer
|
CC-MAIN-2015-35
|
refinedweb
| 1,135
| 50.57
|
table of contents
- stretch 4.10-2
- testing 4.16-2
- stretch-backports 4.16-1~bpo9+1
- unstable 4.16-2
NAME¶getpwent_r, fgetpwent_r - get passwd file entry reentrantly
SYNOPSIS¶
#include <pwd.h>

int getpwent_r(struct passwd *pwbuf, char *buf,
        size_t buflen, struct passwd **pwbufp);

int fgetpwent_r(FILE *stream, struct passwd *pwbuf, char *buf,
        size_t buflen, struct passwd **pwbufp);

DESCRIPTION¶The functions getpwent_r() and fgetpwent_r() are the reentrant versions of getpwent(3) and fgetpwent(3). The former reads the next passwd entry from the stream initialized by setpwent(3). The latter reads the next passwd entry from the stream supplied as its first argument.

RETURN VALUE¶On success, these functions return 0 and *pwbufp is a pointer to the struct passwd. On error, these functions return an error value and *pwbufp is NULL.
ERRORS¶
- ENOENT
- No more entries.
- ERANGE
- Insufficient buffer space supplied. Try again with larger buffer.
ATTRIBUTES¶For an explanation of the terms used in this section, see attributes(7).
CONFORMING TO¶These functions are GNU extensions, done in a style resembling the POSIX version of functions like getpwnam_r(3).
NOTES¶The function getpwent_r() is not really reentrant since it shares the reading position in the stream with all other threads.
EXAMPLE¶
#define _GNU_SOURCE
#include <pwd.h>
#include <stdio.h>
#include <stdlib.h>

#define BUFLEN 4096

int
main(void)
{
    struct passwd pw;
    struct passwd *pwp;
    char buf[BUFLEN];
    int i = 0;

    while (getpwent_r(&pw, buf, BUFLEN, &pwp) == 0) {
        printf("%d %s %ld\n", i, pwp->pw_name, (long) pwp->pw_uid);
        i++;
    }
    endpwent();
    exit(EXIT_SUCCESS);
}
|
https://manpages.debian.org/stretch/manpages-dev/fgetpwent_r.3.en.html
|
CC-MAIN-2019-51
|
refinedweb
| 119
| 53.78
|
Name | Synopsis | Description | Return Values | Usage | Attributes | See Also | Notes
#include <dlfcn.h> int dladdr(void *address, Dl_info_t *dlip);
int dladdr1(void *address, Dl_info_t *dlip, void **info, int flags);
The dladdr() and dladdr1() functions determine if the specified address is located within one of the mapped objects that make up the current applications address space. An address is deemed to fall within a mapped object when it is between the base address, and the _end address of that object. See NOTES. If a mapped object fits this criteria, the symbol table made available to the runtime linker is searched to locate the nearest symbol to the specified address. The nearest symbol is one that has a value less than or equal to the required address.
The Dl_info_t structure must be preallocated by the caller; on success its members are filled in by dladdr(). This structure, defined in <dlfcn.h>, contains at least the following members:

const char *dli_fname;    /* pathname of the object containing the address */
void       *dli_fbase;    /* base address of that object */
const char *dli_sname;    /* name of the nearest symbol */
void       *dli_saddr;    /* address of that symbol */

The dladdr() and dladdr1() functions are one of a family of functions that give the user direct access to the dynamic linking facilities. These facilities are available to dynamically-linked processes only. See Linker and Libraries Guide.
See attributes(5) for descriptions of the following attributes:
ld(1), dlclose(3C), dldump(3C), dlerror(3C), dlopen(3C), dlsym(3C), attributes(5)
Linker and Libraries Guide
The Dl_info_t pointer elements point to addresses within the mapped objects. These pointers can become invalid if objects are removed prior to these elements use. See dlclose(3C).
If no symbol is found to describe the specified address, both the dli_sname and dli_saddr members are set to 0.
If the address specified exists within a mapped object in the range between the base address and the address of the first global symbol in the object, the reserved local symbol _START_ is returned. This symbol acts as a label representing the start of the mapped object. As a label, this symbol has no size. The dli_saddr member is set to the base address of the associated object. The dli_sname member is set to the symbol name _START_. If the flag argument is set to RTLD_DL_SYMENT, symbol information for _START_ is returned.
|
http://docs.oracle.com/cd/E19082-01/819-2243/6n4i098ta/index.html
|
CC-MAIN-2014-23
|
refinedweb
| 331
| 53.51
|
(This article was first published on Thinking inside the box , and kindly contributed to R-bloggers)Two weeks after the Rcpp 0.7.0 release, Romain and I are happy to announce release 0.7.1 of Rcpp. It is currently in the incoming section of CRAN and has been accepted into Debian. Mirrors will catch up over the next few days, in the meantime the local page is available for download too.
A lot has changed under the hood since 0.7.0, and this is the first release that really reflects many of Romain's additions. Some of the changes are
- A new base class.
- New classes Rcpp::Evaluator and Rcpp::Environment for expression evaluation and R environment access, respectively.
- A new class Rcpp::XPtr for external pointer access and management.
- Enhanced exception handling: exceptions can be trapped at the R level even outside of try/catch blocks, see Romain's blog post for more.
- Namespace support with the addition of an Rcpp namespace; we will be incremental in phasing this in, keeping compatibility with the old interface.
- Unit tests for most all of the above via use of the RUnit package, and several new examples.
- Inline support has been removed and replaced with a Depends: on inline (>= 0.3.4), as our patch is now part of the current inline package as mentioned ...
|
http://www.r-bloggers.com/rcpp-0-7-1/
|
CC-MAIN-2013-48
|
refinedweb
| 221
| 65.12
|
Framework Test Loading
Registered by Charlie Poole
Earlier releases of NUnit load tests in a hierarchy based on the namespace and may optionally load them as a flat list of fixtures. The test hierarchy is built as the tests are loaded and reflected in the gui display.
With NUnit 3.0, the test loader will only load fixtures and will not create a hierarchy. It will be the responsibility of the Gui to construct whatever display hierarchy the user chooses as a view of the tests.
This will simplify the loading of tests and is compatible with NUnitLite, which already loads tests this way.
|
https://blueprints.launchpad.net/nunit-3.0/+spec/test-loading
|
CC-MAIN-2021-39
|
refinedweb
| 104
| 62.48
|
Camera and photos for Windows Phone 8
[ This article is for Windows Phone 8 developers. If you’re developing for Windows 10, see the latest documentation. ]
This section describes how your app can capture photos and video on Windows Phone. It also shows how your app can save photos to the media library and use extensibility to extend the camera and photos experience.
This topic contains the following sections.
If capturing photos is not the central feature of your app, you may want to consider using the built-in camera app to capture photos for your app. This functionality is exposed with the camera capture task. The camera capture task lets your users capture a photo using the built-in camera app. After the photo is captured, it’s passed back to your app in an event handler. For more info, see How to use the camera capture task for Windows Phone 8.
You can also use the photo chooser task to give your users the ability to select photos from the media library. For more info, see How to use the photo chooser task for Windows Phone 8.
For direct access to the camera, your app can use the camera APIs. A Windows Phone can have up to two cameras: one on the front of the phone and one on the back. Although both cameras are fairly common on today’s phones, technically they are optional, so your app should check that they exist before attempting to use them. For more info about capturing photos and video, see the following topics:
Windows Phone 8 introduces a new class of applications: Lenses. Similar to how you can switch to a different physical lens on an SLR camera, you can switch to a lens app on a Windows Phone. From the built-in camera app, use the lens button to switch to another camera app that provides a viewfinder experience. For more info, see Lenses for Windows Phone 8.
Your app can save photos to the phone’s media library or to the app’s local folder (previously known as isolated storage). To save photos to the media library, use the MediaLibrary class in the Microsoft.Xna.Framework.Media namespace. The MediaLibrary SavePictureToCameraRoll() and SavePicture() methods save photos to the Camera Roll and Saved Pictures folders. For an example, see How to create a base camera app for Windows Phone 8.
As with other application data, your app can save photos to the local folder of your app. For an example, see How to create a base camera app for Windows Phone 8. For more info about data storage, see Data for Windows Phone 8.
Your app can also save photos to a location off of the phone. Starting in Windows Phone 8, you can write an app that automatically uploads photos to a photo storage service. For more info, see Auto-upload apps for Windows Phone 8.
On Windows Phone, photo extensibility provides ways for your app to extend the photos experience. Your app can integrate with the following extension points:
Photos Hub
Share picker
Rich media app
Photo edit picker
Photo apps picker
For more information, see Photo extensibility for Windows Phone 8
|
https://msdn.microsoft.com/en-us/library/hh202973
|
CC-MAIN-2017-09
|
refinedweb
| 534
| 62.27
|
When something goes wrong with your code, instead of using standard debugging techniques such as print statements, use debugging tools. I found two great tools for debugging.
1. Using the code module: This is very useful if your code runs without errors but doesn't give the expected result. The code module has a function interact() which stops the program execution and opens an interactive Python console that inherits the local scope of the line where the interact() method is called. In that console you can print variable values (instead of placing print statements in your code), examine the state of your code, and fix the bug. Place the following line in your code where you want the console to start.
import code code.interact(local=locals())
To exit the interactive console and continue with execution use Ctrl+D or Ctrl+Z or exit(). Lets have a look at following example.
#file: inc.py
def testing():
    print 'before interact'
    a=10
    b=20
    c=a+b
    import code; code.interact(local=locals())
    print 'after interact'

testing()
The output is shown below:
ramya@ramya-ws:~/Desktop$ python inc.py
before interact
Python 2.7.4 (default, Sep 26 2013, 03:20:26)
[GCC 4.7.3] on linux2
Type 'help', 'copyright', 'credits' or 'license' for more information.
(InteractiveConsole)
>>> print c
30
>>> print a
10
>>> print b
20
>>>
after interact
ramya@ramya-ws:~/Desktop$...
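The console that interact() opens is a code.InteractiveConsole seeded with your locals, and you can drive that same machinery non-interactively to see how the scope sharing works. Here is a small sketch (Python 3 syntax, unlike the Python 2 example above; the variable names are just illustrative):

```python
import code

a, b = 10, 20
c = a + b

# Build the same kind of console that interact() uses, seeded with our locals
console = code.InteractiveConsole(locals={'a': a, 'b': b, 'c': c})

# Feed it a line of input, exactly as if it were typed at the >>> prompt
console.push('result = a + b + c')

# The console's namespace now holds the computed value
print(console.locals['result'])  # -> 60
```

Inside a real interact() session the input comes from your keyboard instead of push(), but the namespace behaviour is identical.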
|
https://micropyramid.com/blog/debugging-in-python/
|
CC-MAIN-2020-05
|
refinedweb
| 229
| 57.98
|
The iImage interface is used to work with image objects. More...
#include <igraphic/image.h>
Detailed Description
The iImage interface is used to work with image objects.
You cannot manipulate the pixel data of iImage objects directly. To do this, you need to instantiate your own copy of the image, e.g. by creating a csImageMemory instance (which allows access to the pixel data).
- Raw and cooked image data
- For images, both "raw" and "cooked" data are available. The raw data is the image data as read from the image file with little processing done; this means the raw format can be a "special" format requiring some special algorithm to translate it into color data. The "cooked" data is the image data already translated into color data, and is usually easier to deal with.
Main creators of instances implementing this interface:
Definition at line 104 of file image.h.
Member Function Documentation
Get alpha map for 8-bit paletted image.
RGBA images contain alpha within themselves. If the image has no alpha map, or the image is in RGBA format, this function will return 0.
Return the "cooked" image data (the image data into which an image of non-"special" format may be processed).
- See also:
- Texture format strings
Return the "cooked" format of the image data (a non-"special" format into which image data may be processed).
- See also:
- Texture format strings
Query image depth (only sensible when the image type is csimg3D).
Query image format (see CS_IMGFMT_XXX above).
Query image height.
Get the keycolour stored with the image.
Return a precomputed mipmap.
num specifies which mipmap to return; 0 returns the original image, and values up to the return value of HasMipmaps() return the corresponding precomputed mipmap.
Get image file name.
Get image palette (or 0 if no palette).
Get the raw data of the image (or 0 if raw data is not provided).
Query image width.
Check if image has a keycolour stored with it.
Returns the number of mipmaps contained in the image (in case there exist any precalculated mipmaps), in addition to the original image.
0 means there are no precomputed mipmaps.
Returns the number of sub images, in addition to this image.
Subimages are usually used for cube map faces.
Set image file name.
The documentation for this struct was generated from the following file:
Generated for Crystal Space 1.4.1 by doxygen 1.7.1
I'm looking for a way to get the parent thread's ID or name from a child thread.
For example, I have a main thread named MainThread. In this thread I create a few new threads. Then I use
threading.enumerate() to get references to all running threads, pick one of the child threads and somehow get the ID or name of MainThread. Is there any way to do that?
Make a Thread subclass that sets a
parent attribute on init:
import threading
from threading import current_thread

class MyThread(threading.Thread):
    def __init__(self, *args, **kwargs):
        self.parent = current_thread()
        threading.Thread.__init__(self, *args, **kwargs)
Then, while doing work inside a thread started with this class, we can access
current_thread().parent to get the spawning Thread object.
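A runnable sketch of that idea on Python 3 (the class and variable names here are mine; the main thread's default name really is "MainThread"):

```python
import threading

class ParentAwareThread(threading.Thread):
    def __init__(self, *args, **kwargs):
        # Remember the thread that constructed (i.e. spawned) this one.
        self.parent = threading.current_thread()
        super().__init__(*args, **kwargs)

result = {}

def work():
    me = threading.current_thread()  # this is the ParentAwareThread instance
    result["parent_name"] = me.parent.name

t = ParentAwareThread(target=work)
t.start()
t.join()
print(result["parent_name"])  # prints MainThread
```

Note that the parent is recorded at construction time, so it is the thread that created the object, not necessarily the one that later called start().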
You can keep a reference to the parent thread on the child thread and then get its ID.
You might want to pass the name of MainThread to the child threads on creation.
Another option, if you're working with a class, is to have the target of the child threads point to a method from the class of MainThread:
class MainThread:
    name = "MainThreadName"

    def child_thread_run(self):
        print self.name  # Will print "MainThreadName", even from a child thread

    def run(self):
        child = threading.Thread(target=self.child_thread_run)
        child.start()
Trav
November 5, 2007
Halloween Gets Even Stupider
in Lahaina, Maui, Hawaii
Lilo, from the 5th Element, and the Little Mermaid. New Disney
idea?
You do not know Jack.
Ponder the historical possibilities
if Marie Antoinette actually had a
cell phone. Serve the cake!
Chicks dig the mask. Tony
Orlando and Dawn?
Grim reaper at the Japanese
Steakhouse.
There is no explanation except that the camera
just went off. Buy Canon. It has cool camera
benefits. It knows when there are nearly naked
ladies around and just fires off a shot in their
direction.
November 13, 2007
Spot of Difficulty
I have been a patient in a hospital only twice in my life. Neither experience was fun.
Last week, my path veered into the medical arena for a day for hi-res photos of guts,
prodding and general annoyance with purpose. There was no emergency, labor or
delivery involved and they never showed me the photos. I took my camera with me.
At each station that I visited, someone in scrubs told me not to take photographs.
However, I was feeling rebellious and took some anyway. It was funny that some
of those officials in scrubs wanted to stop and talk photography after they told me
not to take pictures and all of them...all of them agreed to let me take their
photographs and enjoyed being in photographs. People are funny.
The nurse sticking me with these was the grumpiest about the
camera. I gave her no response at all to her order not to take photos.
So she repeated it. I assume she decided that I did not understand
English. There was an elderly lady sitting in a little waiting area
nearby who was dropping eaves a little. After the nurse was done
sticking me with an IV setup, I took the nurse's picture. The key was
that I waited until she was done sticking me with the needle before
clicking the shutter. The nurse was not amused. But the elderly lady
across the way snickered quietly when she heard the shutter and
saw the nurse have a little huff about it. I am not usually this defiant
about anything. But I was grumpier than the nurse that day.
This is the sweet lady who was also waiting to be stuck with
needles then scanned with machines the size of mid-size luxury
import automobiles. As we talked, she revealed that her
husband had been a photographer for the US Navy during WWII.
Now she lives alone in a retirement home. She was very
interested in digital photo developments and was surprised at
how few I actually print. She had just learned how to email digital
photos and was sure impressed with being able to do that. I
asked her permission to shoot her photograph. She gladly
granted that and then said she would not tell the nurses. When
she smiled at the thought of joining me in my rebellion, I shot the
photo. The nurse immediately came over to escort her into the
CT scan room. The lady smiled at me as she left for her tests.
She made it a better day for me.
Regardless of where they ushered me that day, I kept my eye on these
signs. It is always good to know the way out. With a ginormous deductible
on my health insurance coverage, the day's expenses were paid by me.
That means no new camera lenses for a while. My rebellion was probably
rooted in the knowledge that I was personally paying the bills. I will take a
photo whenever I want to take a photo. After all my tests, three of the
hospital workers came into the scanning room to discuss photography
with me...nice as pie. Then the grumpy one asked specifically if I had taken
any photos of people while there. Now, I am seldom uncomfortable with
silent moments and am aware that is a rare trait for a human. So I just sat
there and enjoyed one while they all waited for my answer.
The head of the day's rebellion smiled and replied, "I would never take a
people shot without the peoples' specific permission. Thanks for your work
on me today. I know the way out."
Maybe I take more than my fair share of comfort from being a little
obnoxious.
I hope one of them took my photo walking down the hall away from them.
November 21, 2007
Good Turkey To You
There is much to be thankful for.
Dependency Management8:44 with Chris Ramacciotti
One of the main purposes of a build tool is dependency management. This video discusses how to add external libraries to your projects with Maven.
- 0:00
A dependency is a Java library that this project depends on.
- 0:04
While developing Java applications, you'll almost always be using other developers'
- 0:08
libraries, in addition to your own.
- 0:10
So if you want to use code that isn't a part of the Java core library,
- 0:14
you'll need to add that library as a dependency.
- 0:17
Dependencies in Maven are declared inside this dependencies element.
- 0:20
With each dependency represented by its own dependency element.
- 0:25
Naming follows the GAV convention, which stands for Group, Artifact, and Version.
- 0:31
Remember, this application is going to spy on a directory in our file system.
- 0:35
And alert us when files of a certain type are added to the directory.
- 0:39
Detecting the file's type can be a cumbersome process.
- 0:41
And at the very least, it's a problem that other developers have already tackled, so
- 0:45
let's use their code.
- 0:47
Hey, remember that Apache Tika library we found on Maven central in this last video?
- 0:51
Guess what?
- 0:52
We're gonna use it.
- 0:53
So let me click on this version, I'll copy this dependency element here.
- 0:59
And I'm going to paste it into my pom, under the dependencies element, great.
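The pasted element ends up looking something like this in pom.xml (a sketch; the tika-core artifact and version shown here are assumptions — use whatever coordinates you copied from Maven Central):

```xml
<dependencies>
  <!-- GAV coordinates: Group, Artifact, Version -->
  <dependency>
    <groupId>org.apache.tika</groupId>
    <artifactId>tika-core</artifactId>
    <version>1.14</version>
  </dependency>
</dependencies>
```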
- 1:05
Now, when we write our code and compile with Maven,
- 1:07
we'll be able to reference the classes provided by Apache Tika.
- 1:11
Let's write our code now in app.java.
- 1:15
I'm gonna wipe out most of this class and start from scratch.
- 1:19
Okay, let's start with a couple import statements.
- 1:21
First, we're gonna import java.nio.file and
- 1:25
I'm gonna import all classes in that file package.
- 1:30
And this java.nio package stands for
- 1:32
Java's non-blocking input/output API, more on that in just a bit.
- 1:38
The next thing I wanna import is org.apache.tika.Tika; class.
- 1:42
Now, you'll notice that IntelliJ can't find this Apache package.
- 1:50
Though this won't be a problem for Maven when we compile,
- 1:53
my IDE will keep complaining.
- 1:55
To fix this and more fully utilize my IDE for coding,
- 1:58
I'll pop open the Maven Projects window here.
- 2:02
And then, I'll click that refresh icon.
- 2:06
After that, my IDE is happy as you can see.
- 2:10
Be sure to check out the options in your own IDE's Maven
- 2:12
tool window to see what's available.
- 2:14
But for this workshop, we'll stick with the command line for all Maven commands.
- 2:19
Okay, let's continue coding our class.
- 2:21
This class is called, public class App.
- 2:27
And I'm gonna drop two constants at the top of the class here for
- 2:30
the file type and the directory we want to watch.
- 2:33
So private static final String and I'll call it FILE_TYPE,
- 2:38
and I'll say text/csv files, cool.
- 2:42
And then, I'll do the same for the directory to watch
- 2:47
private static final String and I'll say DIR_TO_WATCH.
- 2:52
And also a /Users/Chris/Downloads and I made a tmp directory there.
- 2:59
You can change this to any empty folder on your own system, cool.
- 3:04
And then, I wanna public static void main method here, awesome.
- 3:08
Now ,the code we're gonna write here using the Java NIO, might look a little cryptic.
- 3:13
I'm certainly not asking you to have a complete understanding of how to use this
- 3:16
non-blocking input/output API provided by the Java core library.
- 3:22
Our work here is more about understanding how to package the project
- 3:25
into a distributable jar using Maven.
- 3:27
In any case, it's a nice opportunity for
- 3:29
us to look at Java features you may not have seen.
- 3:32
So we'll start by defining a path object that contains the directory to watch,
- 3:37
as well as a Tika object for detecting files.
- 3:39
So I say, path dir = Paths.get and then,
- 3:43
I'll say, (DIR_TO_WATCH), cool.
- 3:47
And I'll define a Tika object and call its default constructor.
- 3:53
Next, lets add our watch service which will allow us to spy on the directory, so
- 3:58
WatchService.
- 3:59
I'll just say watchService = FileSystems.getDefault.
- 4:07
It gets the current file system, newWatchService, cool.
- 4:11
And then, I'm going to register this watch service with a directory.
- 4:15
So dir.register(watchService,) and
- 4:19
I want to register it for events, for creating new files.
- 4:25
So I only wanna detect events for when new files are added to this directory.
- 4:30
So I'll say, ENTRY_CREATE.
- 4:34
Now, this is a constant that comes from a certain class.
- 4:39
So let's import that,
- 4:41
.StandardWatchEventKinds.ENTRY_CREATE;, cool.
- 4:46
Now, what you're probably going to see at this point are a bunch of warnings saying,
- 4:51
that you have uncaught exceptions.
- 4:54
So I'm gonna do something here which I would normally do on a distributed Java
- 4:58
application.
- 4:58
But I'll do it here for purposes of brevity.
- 5:00
I'm just going to say throws Exception and that will silence the compiler, cool.
- 5:06
Now, we can move on.
- 5:08
Okay, let's start our event loop which will continually run until we receive
- 5:12
an invalid watch key.
- 5:13
So I'll define a WatchKey called key, and then, I'll define as do while loop.
- 5:20
while(key.reset());, this will loop as long as the key is valid.
- 5:28
Now, you'll see this little red squiggly here, until we assign key a value.
- 5:32
But we're gonna do that inside the loop.
- 5:35
Let's go ahead and do that.
- 5:37
So now, this watch key is an object that
- 5:40
represents the registration of our path object with the watch service.
- 5:43
When an event occurs, a key will be available through the watch service,
- 5:47
through a call to its take method.
- 5:49
So that's what I will assign this key variable, watchService.take().
- 5:55
Now, at this point in our code, we need to loop through any events that come through.
- 6:00
And instead of using a for loop here which we could do.
- 6:02
We'll use streams to access the events, we'll call the poll of events method and
- 6:07
examine the stream there.
- 6:08
So I'll say, key.pollEvents().stream().
- 6:12
And since, we only care about the create events for a certain file type.
- 6:15
Let's filter our stream().filter and that will accept an event object.
- 6:21
And it will return true, if we want the item to be included in our stream or
- 6:25
false, if we don't.
- 6:28
So in general, what we need to do here is return true when
- 6:30
the file type equals our constant FILE_TYPE, up here.
- 6:35
Looks like I misspelled it, FILE, there we go.
- 6:40
So we wanna return true, when the file associated with
- 6:44
this event is of this file type and false, otherwise.
- 6:49
So let's start with that code and work backwards.
- 6:53
So I will say, return FILE_TYPE.equals, and I'll say, (type).
- 6:59
Well, this must mean, we need a variable declared as type before this line of code,
- 7:04
let's do that.
- 7:05
How are we going to get that, well, String type = tika.,
- 7:09
this is where that library comes in handy.
- 7:12
(filename.toString()) but I don't have a filename yet,
- 7:17
so how am I gonna get the file name of the file associated with this event e?
- 7:22
Well, here's how you can do that.
- 7:23
We'll say, Path filename =, I'm gonna cast to
- 7:28
a path object to e.context();, just like that.
- 7:35
All right, now, you might get a warning like I am,
- 7:38
that Lambda Expressions are not supported it's language level.
- 7:41
Well, let's change the language level to eight in our IDE.
- 7:47
Okay, now that error is gone.
- 7:49
That's not needed for Maven but just needed for our IDE.
- 7:53
So now, we have a filtered stream, cool.
- 7:55
Now, what do we wanna do with each one of these events?
- 7:57
Let's use the forEach method to perform an action on each one of these events.
- 8:02
So we'll say, forEach and then an event will be the parameter.
- 8:07
And what do I wanna do, I'll do a single line here.
- 8:11
That way, I don't have to enclose it in curly braces.
- 8:13
I'll do System.out.println, no, I'll say printf.
- 8:18
And I'll save file found, and I'll drop the name of the file.
- 8:22
And then, I'll drop a new line there, cool.
- 8:25
And e.context()), will give me that filename.
- 8:30
Oops, don't need a semicolon because I haven't used the curly braces here,
- 8:32
Excellent.
- 8:34
Now, with our code in place,
- 8:35
we're ready to start running Maven commands from the terminal.
- 8:38
So next, we'll learn about Maven build life cycles.
- 8:41
And we'll begin to build our app through Maven commands.
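Assembled, the class built up in this video looks roughly like the sketch below. To keep it self-contained and runnable without the Tika dependency, it checks the file type by extension instead of calling tika.detect(), watches a temporary directory instead of the hard-coded DIR_TO_WATCH, and creates a CSV file itself so there is an event to report; the real version uses the endless do/while loop and constants exactly as shown in the video.

```java
import java.nio.file.*;
import java.util.concurrent.TimeUnit;

public class App {
    private static final String FILE_SUFFIX = ".csv";

    // Returns the name of the first matching file created in a watched
    // temporary directory (a one-shot stand-in for the video's event loop).
    static String watchOnce() throws Exception {
        Path dir = Files.createTempDirectory("watch-demo");
        WatchService watchService = FileSystems.getDefault().newWatchService();
        dir.register(watchService, StandardWatchEventKinds.ENTRY_CREATE);

        // Simulate dropping a file into the watched directory.
        Files.createFile(dir.resolve("data.csv"));

        // The video blocks on watchService.take() inside a do/while loop;
        // a timed poll keeps this demo from hanging if nothing arrives.
        WatchKey key = watchService.poll(30, TimeUnit.SECONDS);
        if (key == null) return null;

        String found = key.pollEvents().stream()
                .filter(e -> e.context().toString().endsWith(FILE_SUFFIX))
                .map(e -> e.context().toString())
                .findFirst().orElse(null);
        key.reset();
        return found;
    }

    public static void main(String[] args) throws Exception {
        System.out.printf("File found: %s%n", watchOnce());
    }
}
```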
Web Dynpro is a standard SAP UI technology that allows you to develop web applications using graphical tools and development environment integrated with ABAP workbench. Using graphical tools reduces the implementation effort and you can better reuse and maintain components in ABAP workbench.
To access Web Dynpro runtime environment and graphical tools in ABAP workbench, you can use Transaction code − SE80
Following are the key benefits of using Web Dynpro for developers in ABAP environment −
Web Dynpro ABAP is the same as Web Dynpro Java and supports the same set of functions for application development.
Web Dynpro is an ABAP environment for web development and is based on the Model View Controller (MVC) concept of UI programming. It is available for both Java and ABAP as per the platform, and supports similar functions.
Web Dynpro has the following features −
Following are the key concepts as part of Web Dynpro architecture −
Web Dynpro provides you with an environment for the development of web-based applications and you can use graphical tools to define web Dynpro application in the form of metadata in application development. You can also define your own events; however, event handling should be defined in a separate code and that has to be executed when an event is triggered.
The user interface in Web Dynpro application consists of small elements defined by using Web Dynpro tools. You can also change or enhance the user interface by changing these elements at run time or integrate the elements again.
There is a wide range of graphical Web Dynpro tools that you can use to generate web-based applications. You don't need to create source code for this. Following are the key features of graphical tools in a Web Dynpro application −
For all these properties, you can use graphical tools without creating a source code.
Web Dynpro allows you to run your application on the front-end and the back-end system can be accessed using service locally or via a remote connection. Your user interface is maintained in Dynpro application and persistent logic runs in the back-end system.
You can connect Web Dynpro application to the back-end system using an adaptive RFC service or by calling a web service.
Web Dynpro applications are based on MVC model −
Model − This allows the access to back end data in a Web Dynpro application.
View − This is used to ensure the representation of data in a web browser.
Controller − This is used to control communication between Model and View, where it takes input from the users, gets the processed data from the model and displays the data in the browser.
Web Dynpro component is an entity used to create a Dynpro application. These are reusable entities, which are combined together to create application blocks.
Each Web Dynpro component contains a window, view, and controller pages. You can also embed a Web Dynpro component into another Web Dynpro component in an application, and communication takes place using the component interface.
Lifetime of a component starts when you call it first at runtime and ends with Web Dynpro application.
Each Web Dynpro application contains at least one view and it is used to define the layout of a user interface. Each view consists of multiple user elements and a controller and context.
The controller is used to process the user request and processing of data. Context contains data to which the elements of view are bound.
Each view also contains an inbound and outbound plug so you can connect views to each other. Plugs can be linked to each other using navigation links.
You can navigate between different views using inbound and outbound plugs. The inbound and outbound plugs are part of the view controller. The inbound plug defines the starting point of view while the outbound plug tells the subsequent view to be called.
A view set is defined as a predefined section where you can embed different views in a Web Dynpro application. View set allows you to display more than one view in a screen.
Following are a few advantages of view set in designing an application −
In Web Dynpro, a window is used for multiple views or view sets. A view can only be displayed when it is embedded in a window, and a window always contains one or more views connected by navigation links.
Each window contains an inbound and an outbound plug and they can be included in the user’s interaction.
In a Dynpro application, you can define mapping between two global controller contexts, or from a view context to the context of a global controller.
A context element can be defined to link a node to another node of a context.
In the above diagram, you can see mapping between Node 1 from the context of View 1 and the node of the same name in the context of the component controller. It also shows the mapping from Node 2 from the context of View 2, also to a node with the same name in the component controller context.
The context of the component controller is available to both the view controllers with readwrite access to all the attributes.
To display the context data in the browser, you can also bind UI elements properties in a view to the attributes of the view context. You can bind multiple properties to one context element.
In a view context, all data types are available to bind with different attributes of a view.
Internal mapping is defined as the mapping between contexts of a single component.
External mapping is defined as the mapping between multiple components using the interface controller.
You can create events to enable communication between the controllers. You can allow one controller to trigger events in a different controller. All events that you create in the component controller are available in the component.
An inbound plug can also act as an event; thus, when you call a view using the inbound plug, an event handler is called first.
You can also use some special events like Button to link with the user actions.
Button element like pushbutton can react to a user interaction by clicking on the corresponding pushbutton that can trigger a handling method to be called in the view controller. These UI elements contain one or several general events, which can be linked with a specific action that executes at design time.
When an action is created, an event handler is created automatically. You can associate a UI element with different actions.
You can also reuse actions within a view by linking an action to several UI elements.
Examples are an onAction event for a button click, or an onEnter event for an input field, triggered when the user presses the Enter key in the field.
Actions can be created for any UI elements in Web Dynpro framework. To set an action, go to Properties tab → Event section.
You can also create Actions from the Actions tab of the view controller. An event handler is automatically created with the naming convention onaction<actionname>.
If the action name is SET_ATTRIBUTES, the event handler for the action would be ONACTIONSET_ATTRIBUTES.
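A sketch of such a generated handler in ABAP (the handler name follows the onaction<actionname> convention; the context node name TEST_NODE matches the one created later in this tutorial, and the body shown here is only a placeholder of my own):

```abap
method ONACTIONSET_ATTRIBUTES .
* generated for action SET_ATTRIBUTES; react to the user action here
  data: Node_TEST type ref to IF_WD_CONTEXT_NODE.
* e.g. read the context node the view's UI elements are bound to
  Node_TEST = wd_Context->get_Child_Node( Name = `TEST_NODE` ).
endmethod.
```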
A Web Dynpro application can be accessed by the user using a URL with a window in the Dynpro component. A Web Dynpro application connects to an interface view using an inbound plug, which is further connected to the Dynpro component that contains Model View and Controller to process the data for the Web Dynpro application.
MVC model enables you to separate the user interface and application logic. Model is used to get the data from the back-end system as per application logic.
The following image depicts a high level diagram of a Web Dynpro application −
You can use different data sources for a Web Dynpro application −
To develop a Web Dynpro application, you can use Web Dynpro explorer, which is easily integrated to ABAP workbench.
In a Web Dynpro application, the URL is automatically generated. You can find the URL of an application in the Properties tab. The URL structure can be of two types −
SAP namespace −
<schema>://<host>.<domain>.<extension>:<port>/sap/bc/webdynpro/<namespace>/<application name>
Custom namespace −
<schema>://<host>.<domain>.<extension>:<port>/abc/klm/xyz/<namespace>/webdynpro/<application name>
<schema>://<host>.<domain>.<extension>:<port>/<namespace>/webdynpro/<application name>
where,
<schema> − Defines the protocol to access application http/https
<host> − Defines the name of the application server
<domain><extension> − Defines several hosts under a common name
<port> − It can be omitted if the standard port 80 (http) or 443 (https) is used
You should specify Fully Qualified Domain Name (FQDN) in Web Dynpro application URL.
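For example, with a hypothetical application server (host name and port here are assumptions), the ZZ_00_TEST application created later in this tutorial would be reachable in the SAP namespace under a URL of this form:

```
https://myhost.example.com:443/sap/bc/webdynpro/sap/zz_00_test
```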
To check fully qualified domain name, go to Web Dynpro explorer in the ABAP development environment use T-code − SE80 and select the Web Dynpro application from the navigation tree for your Web Dynpro component/interface and check the URL in the administration data. You also need to check the path details in the field URL. It should contain the full domain and host name.
Full Domain name should be used for the following reasons −
To create a Web Dynpro application, we will create a Web Dynpro component that consists of one view. We will create a view context → linked to a table element on the view layout and contains the data from the table.
The table will be shown in the browser at runtime. A Web Dynpro application for this simple Web Dynpro component, which can be run in the browser will be created.
Step 1 − Go to T-Code − SE80 and select Web Dynpro component/intf from the list.
Step 2 − Create a new component as the following.
Step 3 − Enter the name of the new component and click on display.
Step 4 − In the next window, enter the following details −
Step 5 − Assign this component to Package $TMP and click the Save button.
When you click Save, you can see this new component under the object tree and it contains −
When you expand the component interface, you can see the interface controller and interface views.
Step 1 − Click on the Web Dynpro component and go to the context menu (right click) → Create → View
Step 2 − Create a view MAINVIEW as the following and click on the tick mark.
This will open view editor in ABAP workbench under the name − MAINVIEW
Step 3 − If you want to open the layout tab and view designer, you may need to enter the application server user name and password.
Step 4 − Click the save icon at the top.
When you save, it comes under the object tree and you can check by expanding the view tab.
Step 5 − To assign the window to this view, select the window ZZ_00_TEST under the window tab and click on Change mode at the top of the screen.
Step 6 − You can right-click → Display → In Same Window.
Step 7 − Now open the view structure and move the view MAINVIEW inside the window structure on the right hand side by Drag and Drop.
Step 8 − Open the window structure on the right hand side and you will see the embedded MAINVIEW.
Step 9 − Save by clicking the Save icon on top of the screen.
Step 1 − Open the View Editor to view MAINVIEW and switch to tab Context. Create a context node in the View Controller by opening the corresponding context menu.
Step 2 − Select the View in the object tree and click Display.
Step 3 − Maintain the Properties in the next window. Select the cardinality and dictionary structure (table). Select Add Attribute from Structure and select the components of the structure.
Step 4 − To select all the components, click Select all option at the top and then click the tick mark at the bottom of the screen.
A context node TEST_NODE has been created, which refers to the data structure of the table and which can contain 0 → n entries at runtime. The context node has been created in the view context, since no data exchange with other views is planned hence component controller context usage is not necessary.
Step 5 − Save the changes to MAINVIEW by clicking the Save icon.
Step 6 − Go to the Layout tab of MAINVIEW. Insert a new UI element of the type table under ROOTUIELEMENT CONTAINER and assign the properties in the given table.
Step 7 − Enter the name of the element and type.
Step 8 − Create the binding of TEST_TABLE with context node TEST_NODE. Select Text View as Standard Cell Editors and activate bindings for all cells.
Step 9 − Click the Context button. Select the context node as TEST_NODE from the list.
Step 10 − You can see all the attributes by selecting it.
Step 11 − Activate all the checkboxes under Binding for all context attributes by selecting them. Confirm Entry by pressing the Enter key.
The result should look like this −
Step 12 − Save the changes.
Step 13 − To supply data to TEST table, go to Methods tab and double-click method WDDOINIT. Enter the following code −
method WDDOINIT .
* data declaration
  data: Node_TEST type REF TO IF_WD_CONTEXT_NODE,
        Itab_TEST type standard table of TEST.
* get data from table TEST
  select * from TEST into table Itab_TEST.
* navigate from <CONTEXT> to <TEST> via lead selection
  Node_TEST = wd_Context->get_Child_Node( Name = `TEST_NODE` ).
* bind internal table to context node <TEST>
  Node_TEST->Bind_Table( Itab_TEST ).
endmethod.
In Web Dynpro applications, you should not access database tables directly from Web Dynpro methods; instead, you should use supply functions or BAPI calls for data access.
Step 14 − Save the changes by clicking the save icon on top of the screen.
Step 1 − Select the ZZ_00_TEST component in the object tree → right-click and create a new application.
Step 2 − Enter the application name and click continue.
Step 3 − Save the changes. Save as a local object.
Next is activating objects in Web Dynpro component −
Step 4 − Double-click on the component ZZ_00_TEST and click Activate.
Step 5 − Select all the objects and click continue.
Step 6 − To run the application, select Web Dynpro application → Right-click and Test.
A browser will be started and the Web Dynpro application will run.
To create a new inbound plug, specify the plug as a startup plug; its data type should be a string. Activate the component.
Next is to specify the component to be called, parameters, window, and start-up plug.
Call the application and URL parameters overwrite application parameters.
When you create a Web Dynpro component, the creation procedure creates a component interface. Each component interface contains exactly one interface controller and one interface view. The interface view has no direct connection with the interface controller; both are created automatically.
Using the component interface, you can define the interface structure, which you can use in different application components.
The interface controller of a component interface definition and the interface controller of a component are different.
You can add any number of interface views to a component interface definition.
Consider the same screenshot as in the previous chapter.
Step 1 − Enter the name of the new component and click display.
Step 2 − In the next window, enter the following details −
Step 3 − Assign this component to Package $TMP and click the Save button.
When you click on save, you can see this new component under the object tree and it contains −
Faceless components in Web Dynpro do not contain any graphical components, no views and no windows. It only contains a component controller and you can add an additional custom controller.
Faceless components are specifically used for receiving and structuring the data. Faceless components can be embedded to other components using the component usage and you can supply the required data to these components.
Step 1 − Create a new Web Dynpro component.
Step 2 − Select the package and click save button.
Step 3 − To create a Faceless component, delete the two elements − View and Window.
In Web Dynpro component, you can create a uniquely assigned class inherited from the abstract class. Assistance class can store the coding that is required in a component but is not linked with the layout.
You can store dynamic text in assistance class, text combined at run time or contains variable that can be stored in the text pool.
In Assistance class, you can also save a code that is not directly linked with the layout of the application or with the controller.
Using the method _WD_COMPONENT_ASSISTANCE~GET_TEXT( ) allows you to access text symbols of the assistance class in the controller of your component. When you call the method, 3-digit id of the text symbol is used −
method MY_CONTROLLER_METHOD .
  data: my_text type string.
  my_text = WD_ASSIST->IF_WD_COMPONENT_ASSISTANCE~GET_TEXT( KEY = '001' ).
endmethod.
You can maintain the text symbols of the assistance class from each controller. Click Goto → Text Symbols in the menu.
Note − Any ABAP class can act as an assistance class, but the services integrated with the Web Dynpro application are only available if the assistance class is derived from the class CL_WD_COMPONENT_ASSISTANCE.
You can call an existing function module in a Web Dynpro component using a service call. To create a service call, you can use the easy-to-use wizard in the Web Dynpro tools.
You can launch the wizard in ABAP workbench to create a service call.
Run T-Code − SE80
Step 1 − Select Web Dynpro component → Right-click to open the context menu. Go to create → Service call.
It will open Web Dynpro wizard − Start screen.
Step 2 − You can select if you want service call to be embedded in an existing controller or you want to create a new controller.
Note − Service calls must be embedded in global controllers; they cannot be used with view controllers in Web Dynpro.
Step 3 − In the next window, select the service type. Click the Continue button.
Step 4 − In the next window, select a function module as a service. You can use the input help for this.
If you choose a remote capable function module, you can optionally specify an RFC destination that is to be used when calling the function module. If you do not specify a destination, the function module will be called locally.
Note − The function module must exist in the current system! The wizard does not support calling a remote-capable function module that does not exist in the current system.
Step 5 − Click Continue.
Step 6 − In the next window, you can choose which object type to use to represent the service function parameters in Web Dynpro controller −
To do this, select the required object type from the list box in the relevant lines.
Note − Only UI-relevant data should be stored in the context.
You can also individually name the controller attributes and the context nodes to be created.
The following proposal is generated −
The root node receives the name of the service.
The nodes for grouping the parameters according to their declaration types receive appropriate names such as IMPORTING, EXPORTING, ...
The node names and attribute names for the parameters themselves are identical to the parameter names.
As the length of the node and the attribute names is limited to 20 characters, they are abbreviated accordingly, if necessary.
The next window appears if the selected service uses types from type groups as parameter types and/or defines implicit table parameters.
For all the types listed below, define (table) types with the same equal structure in the Data Dictionary. These will then be used for typing of controller attributes or method parameters created by the wizard.
Step 7 − Enter Attribute Type − TEST and click Continue.
Step 8 − In the next window, specify the name of the method that should execute the service. The wizard generates coding for calling the service and for the context binding.
The method must not yet exist in the controller.
You have now entered all the necessary information for the creation of the model-oriented controller.
Step 9 − Click ‘Complete’ to create or enhance the controller and to generate the service call.
You can also cancel the wizard at this point. However, any data entered so far will be lost.
In ABAP Workbench, you can also create and show messages that contain information for end users of Dynpro application. These messages are displayed on the screen. These are user interactive messages that display important information about Web Dynpro application.
To provide users with information, warning or error details, you can program these methods in ABAP workbench using runtime service.
These messages are configured under Setting on Web Dynpro application. You can assign different settings for handling messages in Web Dynpro application −
Show message component − In this case, if the message exists, it will be displayed.
Always show message component − Even if there is no message, the message component is shown at the top.
The message is displayed without the component − In this setting, one message is displayed and no message log exists.
All these user messages are shown in the status bar. The user can navigate to the UI element to remove the error in the error message.
Messages in popup window − In this configuration, you can set the message to display in the popup window, irrespective of what is configured in Web Dynpro application. You can configure the following popup messages to display −
You can use the message manager to integrate messages into the message log. You can open the message manager using Web Dynpro code wizard.
You can open Web Dynpro code wizard from the tool bar. It is available when your ABAP workbench is in change mode or while editing a view or a controller.
To set ABAP workbench in the change mode, select the view and go to context to Change.
You can use the following methods for triggering messages −
IS_EMPTY − This is used to query if there are any messages.
CLEAR_MESSAGES − This is used to delete all messages.
REPORT_ATTRIBUTE_ERROR_MESSAGE − This is used to report a Web Dynpro exception to a context attribute.
REPORT_ATTRIBUTE_EXCEPTION − This is used to report a Web Dynpro exception to a context attribute.
REPORT_ERROR_MESSAGE − This is used to report a Web Dynpro message with optional parameters.
REPORT_EXCEPTION − This is used to report a Web Dynpro exception that may come back.
REPORT_FATAL_ERROR_MESSAGE − This is used to report a fatal Web Dynpro message with optional parameters.
REPORT_FATAL_EXCEPTION − This is used to report a fatal Web Dynpro exception.
REPORT_SUCCESS − This is used to report a success message.
REPORT_T100_MESSAGE − This is used to report a message using a T100 entry.
REPORT_WARNING − This is used to report a warning.
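As a short sketch of how these methods are typically called (the action name and message text are illustrative; the message-manager access follows the standard IF_WD_CONTROLLER API):

```abap
METHOD onactionsave .
  DATA lo_api     TYPE REF TO if_wd_view_controller.
  DATA lo_msg_mgr TYPE REF TO if_wd_message_manager.

  " Fetch the message manager via the controller API
  lo_api = wd_this->wd_get_api( ).
  lo_msg_mgr = lo_api->get_message_manager( ).

  " Shown in the status bar, like the REPORT_SUCCESS method above
  lo_msg_mgr->report_success(
    message_text = 'Record saved' ).
ENDMETHOD.
```

The same pattern applies to REPORT_ERROR_MESSAGE, REPORT_WARNING, and the other methods in the list.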
As per the business requirement, you can implement many standard applications and the UI of Web Dynpro application can vary as per the requirement.
To configure a Web Dynpro application, you first configure data records for individual Web Dynpro components.
The component configuration allows you to manage the behavior of each component.
Next, configure the application. The components that were created have to be used in a specific configuration: the configuration of a Web Dynpro application defines which component configuration is used in the application.
In ABAP object list, select a Web Dynpro component −
Right-click → Create/Change configuration.
This opens a browser with the dialog window of the configurator. The mode Component Configurator is active and you enter a name for your new component configuration.
You can also define implicit and explicit configuration. Save the configuration and close the window.
Note − You can save a new configuration only when it actually contains values. An empty configuration that has a name but contains no data is not stored.
As this configurator is not part of the ABAP Workbench and runs separately in the browser, you need to update the hierarchy of the object list in the workbench after completion of the creation or change procedure in a configuration.
This allows you to store different configurations for each object.
When you save the application configuration, you can’t check the changes made by an administrator and an end user. There is a need to store customization and personalization data that allows merged data to be managed.
The following points should be considered −
Application users and administrators should be able to reverse the changes.
Customization changes of an application should be visible to the user for all the pages.
Application administrator should have access to mark the report as final and this should be valid for all users. When an administrator flags a property final, any changes to the value as a personalization of a single user must no longer be permitted.
You can integrate an ABAP application into the enterprise portal. You can also manage portal functions from a Web Dynpro application.
You can call Web Dynpro code wizard to access portal manager methods. This can be used to perform the following functions −
Portal Events − To navigate between Web Dynpro application within the portal or portal content.
Following navigation types are supported −
Work Protect Mode − For portal integration, following Web Dynpro applications are available in package SWDP_TEST −
WDR_TEST_PORTAL_EVENT_FIRE
Trigger event
WDR_TEST_PORTAL_EVENT_FIRE2
Trigger free event
WDR_TEST_PORTAL_NAV_OBN
Object-based navigation
WDR_TEST_PORTAL_NAV_PAGE
Page navigation
WDR_TEST_PORTAL_WORKPROTECT
Security monitoring
WDR_TEST_PORTAL_EVENT_REC
Receive portal event
WDR_TEST_PORTAL_EVENT_REC2
Receive free portal event
Following are the steps to integrate Web Dynpro ABAP (WDA) in the portal.
Step 1 − Go to ABAP workbench using T-code − SE80 and create Web Dynpro component.
Step 2 − Save the component and activate it.
Step 3 − Define data binding and context mapping. Create a Web Dynpro application and save it.
Step 4 − Login to SAP NetWeaver portal.
Step 5 − Go to Portal Content → Content Administration tab.
Step 6 − Right-click on the portal content and create a new folder.
Step 7 − Enter the folder name and click Finish.
Step 8 − Right-click on the created folder and create a new iView.
Step 9 − Select iView template. Create an iView from an existing iView template and click Next.
Step 10 − Select SAP Web Dynpro iView as template and click Next.
Step 11 − Enter iView name, iView ID, iView prefix ID and click Next. Enter definition type as ABAP and click Next.
Step 12 − Enter the Web Dynpro details and the ECC system that was created.
Step 13 − Enter application parameter in the same screen and click Next. You will be prompted to see the summary screen. Click Finish.
You can create forms based on Adobe software and can use in context for Web Dynpro user interfaces. You can integrate Adobe lifecycle development tool with ABAP editor to ease the development of user interface. Interactive forms using Adobe software allows you to efficiently and easily develop UI elements.
Following scenarios can be used for creating interactive forms −
You can create forms independently using form editor. Go to T-code − SFP
When you click Create, you will be prompted to enter the form name, form description, and interface.
The example component for the interactive scenario in the system are available in the package SWDP_TEST → WDR_TEST_IA_FORMS.
In a Web Dynpro application, both scenarios − the print scenario and the interactive scenario − insert interactive forms in a similar way. A form that contains only static components can be used to display data in a Web Dynpro application using the print scenario.
Using interactive forms, you can reuse entries in Web Dynpro context for Web Dynpro application.
Step 1 − Create a view of your Web Dynpro component.
Step 2 − Right-click on View and create a node. This node will be bound to form.
Step 3 − Drag the interactive form from Adobe library to Designer window.
Step 4 − Design the form, enter the name, and bind the attributes.
Step 5 − Once you are done with the form design, go to edit mode in the workbench and define if the form is static content, PDF-based print form, or an interactive form.
SAP List Viewer is used to add an ALV component and provides a flexible environment to display lists and tabular structure. A standard output consists of header, toolbar, and an output table. The user can adjust the settings to add column display, aggregations, and sorting options using additional dialog boxes.
Following are the key features of ALV −
It supports many properties of the table element as it is based on Web Dynpro table UI element.
ALV output can be filtered, sorted, or you can apply calculations.
The user can perform application specific functions using UI elements in the toolbar.
Allows the user to save the setting in different views.
Allows to configure special areas above and below ALV output.
Allows to define the extent to which ALV output can be edited.
Following are the steps to create an ALV.
Step 1 − Use T-code: SE80. Select Web Dynpro comp/intf from the list and enter the name. Click on display. You will be prompted to create the component. Click on Yes.
Step 2 − Select type as Web Dynpro component. Enter the Window name and the View name.
Step 3 − Click the tick mark.
Step 4 − In the change window, enter the component use as ALV, component as SALV_WD_TABLE and description as ALV component.
Step 5 − Go to Component Controller and right-click the context. Then select Create Node MAKT with the dictionary structure MAKT.
Step 6 − Select the required attributes from MAKT by using Add Attribute from Structure.
Step 7 − Remove the dictionary structure MAKT from the node MAKT and set the properties as follows (Cardinality, Lead selection, etc.)
Step 8 − Right-click on Component usage in the Object tree → Create Controller Usage.
Step 9 − Go to View → Context tab and drag MAKT node to the view.
After mapping, it will appear as shown in the following screenshot.
Step 10 − Go to Layout and right-click Insert Element.
The layout will appear as shown in the following screenshot −
Step 11 − Go to Properties tab, click create controller usage to add the following to View.
Step 12 − Go to method, use WDDOINIT to write code.
Step 13 − Double-click on the method to enter the code. Enter the following code and initiate the used component ALV.
Use GET_MODEL method in the controller.
Step 14 − Bind the table to the context node using BIND_TABLE method as follows −
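Steps 12–14 can be sketched in one WDDOINIT method. The generated accessor names (WD_CPUSE_ALV, WD_CPIFC_ALV) assume the component usage is named ALV as above, and the SELECT is purely illustrative:

```abap
METHOD wddoinit .
  DATA lo_cmp_usage TYPE REF TO if_wd_component_usage.
  DATA lo_interface TYPE REF TO iwci_salv_wd_table.
  DATA lo_model     TYPE REF TO cl_salv_wd_config_table.
  DATA lt_makt      TYPE TABLE OF makt.

  " Instantiate the used ALV component if necessary
  lo_cmp_usage = wd_this->wd_cpuse_alv( ).
  IF lo_cmp_usage->has_active_component( ) IS INITIAL.
    lo_cmp_usage->create_component( ).
  ENDIF.

  " GET_MODEL returns the ALV configuration model for further settings
  lo_interface = wd_this->wd_cpifc_alv( ).
  lo_model = lo_interface->get_model( ).

  " Fill the context node and bind the table data (BIND_TABLE)
  SELECT * FROM makt INTO TABLE lt_makt UP TO 20 ROWS.
  wd_context->get_child_node( name = 'MAKT' )->bind_table( lt_makt ).
ENDMETHOD.
```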
Step 15 − Go to Window in the Object tree and right-click C1 to embed ALV table to the view.
Once you embed the ALV table, it will appear like this −
Step 16 − The last step is to create a Web Dynpro application under the object tree. Enter the name of the application.
Step 17 − To execute application, double-click and you will see the output.
Using filters, you can limit the data in the ALV output. You can create multiple filter conditions for each field. To create or delete a filter condition, you can use the methods of the interface class IF_SALV_WD_FILTER.
You can use the following methods for creating, getting, and deleting filter conditions −
In Web Dynpro ABAP administration, you can perform various administration tasks using different tools −
Web Dynpro trace tool can be used for checking the errors and problems in Dynpro application. You can activate Web Dynpro trace tool for a specific user.
Step 1 − To activate trace tool in SAP GUI client, use T-code − WD_TRACE_TOOL
Step 2 − Click on Activate for this user. This activates the trace for the user.
Step 3 − Select Trace features in the new window and click OK.
Step 4 − Start Web Dynpro application that you want to trace. You can see a new area Web Dynpro trace tool in Web application.
Step 5 − Execute the application. Enter the details of problem → Choose Continue.
Step 6 − Using Insert, you can also attach a screenshot or a file with additional information. Go to Browse → Select File and click Add File.
Step 7 − You can download the trace file in Zip format and end tracing by clicking on Save Trace as Zip file and Stop Trace.
This file can be uploaded to SAP portal and can be sent to SAP for debugging.
To analyze the problem, you can also trace the data stream in SAP Web Application server.
Step 1 − Use T-Code − SMICM. In the next window, click on GOTO → Trace File → Display file or start.
You will see ICM trace result as shown in the following screenshot −
Step 2 − You can also increase the trace level from default level 1. To increase the trace level, GOTO → Trace Level → Increase.
This is used to analyze the dynamic behavior of your code. This can be used as an alternative to ICM tracing.
To use browser tracing, you need to install proxy tool on your local system.
You can monitor Web Dynpro application using ABAP monitor. Information is stored about Web Dynpro application. You can view this information using T-code − RZ20.
You can check the following information in Web Dynpro ABAP monitor −
To view the report, use T-code − RZ20
Step 1 − Go to SAP CCMS Monitor template.
Step 2 − Click the sub node Entire System.
Step 3 − Enter the system ID of the current SAP system where the application you want to monitor is installed.
Step 4 − Select Application Server.
Step 5 − Select the name of the relevant application server. For instance, select Web Dynpro ABAP as shown in the following screenshot −
The result will be displayed with the following information when a Web Dynpro application will be called −
05-11-2014 04:46 PM
I have a few lines of code that are common to cases such as onCreationCompleted, onSubmitted and onClicked. Is there a way I can put this common code in a function, a file, or a section within QML and call that function/code whenever needed?
05-11-2014 06:13 PM
commonly used functions I'm placing in my root
per ex
root is TabbedPane wih some Tabs and each tab is a Page or a NavigationPane with a stack of Pages
all these Pages placed anywhere on top "know" their root object (their parents)
my TabbedPane is always named as
id:rootPane
then from anywhere you can use
rootPane.myCommonFunction()
the only drawback: while coding QML doesn't know about the hierarchy at runtime,
so you get no help from editor
and there's an exception of this rule:
if using a Sheet the path is broken. pages on top of a Sheet cannot "see" what's the root
05-11-2014 09:36 PM
Thank you. I though have two questions:
05-12-2014 02:45 AM
to your second question: yes there is
while it is possible to write all common functions into your root navigation pane, it will spam your root, making it very unreadable, and less reusable if you want to use the same code in different projects
what you can do instead of writing all common functions/constants/properties into your root, is to create a custom QtObject, which you add as attachedObject to your root.
like this
//in your root
import QtQuick 1.0
//...root definition...
attachedObjects: [
    Utility {   // custom qtobject
        id: utility
    },
    Strings {   // another custom qtobject
        id: strings
    }
]
//definition of your custom qtobject
import bb.cascades 1.2

QtObject {
    // store to not call create() all the time
    property variant orange: Color.create("#ffF87A17")

    function getColor(type) {
        return orange; // example for testing
    }
}
and if you want to call it, just call utility.getColor() / utility.orange from any object under root
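Another pattern that fits the question (file and function names here are purely illustrative): QML can import a plain JavaScript file, so shared helpers can live in one place without growing the root pane.

```javascript
// utils.js — shared helpers for QML pages.
// In QML, load it with:   import "utils.js" as Utils
// then call Utils.clamp(...), Utils.formatLabel(...) from any page.

function clamp(value, lo, hi) {
    // keep value inside the closed range [lo, hi]
    return Math.max(lo, Math.min(hi, value));
}

function formatLabel(name, count) {
    return name + " (" + count + ")";
}
```

Unlike the root-pane approach, this also works from pages pushed on top of a Sheet, since the import does not depend on the object hierarchy.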
05-12-2014 02:50 AM
pm_jd wrote:
Thank you. I though have two questions:
- ...
- Is there any alternative approach to QML for Blackberry 10 that is as versatile as say Javascript/Java/PHP where code can be reused more extensively and MVC model can be better implemented
to implement MVC from my POV it's the best to use a combination of C++/Qt and QML
search the Cascades Documentation to learn how to exchange data between C++ and QML
I never would place business logic in QML
05-12-2014 03:18 AM
Thank you all! This helps.
Examples of overlapping quota policies
With the ability to define a quota policy on namespaces and tables, you have to define how the policies are applied. A table quota should take precedence over a namespace quota.
Scenario 1
For example, consider Scenario 1, which is outlined in the following table. Namespace n1 has the following collection of tables: n1.t1, n1.t2, and n1.t3. The namespace quota is 100 GB. Because the total storage required for all tables is less than 100 GB, each table can accept new WRITEs.
Scenario 2
In Scenario 2, as shown in the following table, WRITEs to table n1.t1 are denied because the table quota is violated, but WRITEs to table n1.t2 and table n1.t3 are still allowed because they are within the namespace quota. The violation policy for the table quota on table n1.t1 is enacted.
Scenario 3
In the Scenario 3 table below, WRITEs to all tables are not allowed because the storage utilization of all tables exceeds the namespace quota limit. The namespace quota violation policy is applied to all tables in the namespace.
Scenario 4
In the Scenario 4 table below, table n1.t1 violates the quota set at the table level. The table quota violation policy is enforced. In addition, the disk utilization of table n1.t1 plus the sum of the disk utilization of table n1.t2 and table n1.t3 exceeds the 100 GB namespace quota. Therefore, the namespace quota violation policy is also applied.
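The quota layout described in these scenarios can be set up with HBase shell space-quota commands (available in HBase 2.0 and later); the limits and policies shown here are illustrative:

```shell
# Illustrative only: a 100 GB space quota on namespace n1, plus a
# tighter per-table quota on n1:t1 — the table policy takes precedence
# for t1, as in Scenario 2 above.
hbase shell <<'EOF'
set_quota TYPE => SPACE, NAMESPACE => 'n1', LIMIT => '100G', POLICY => NO_WRITES
set_quota TYPE => SPACE, TABLE => 'n1:t1', LIMIT => '10G', POLICY => NO_WRITES
list_quotas
EOF
```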
|
Attribute::GlobalEnable - Enable Attributes and flags globally across all code.
package Attribute::GlobalEnable::MyPackage;

use Attribute::GlobalEnable(
    ENABLE_CHK  => \%ENV,
    ENABLE_ATTR => { Debug => 'DEBUG_PERL' },
);

## see Attribute::Handlers for more info on these variables.  Note
## that this_package is not included in the list (because we're
## calling it as a package method)
sub attrDebug_1 {
    my $this_package   = shift();
    my $caller_package = shift();
    my $code_symbol    = shift();
    my $code_ref       = shift();
    my $attribute      = shift();   ## will be Debug
    my $attribute_data = shift();
    my $phase          = shift();

    ## lets see what comes in and out ##
    return sub {
        warn "IN TO " . scalar( *$code_symbol ) . join "\n", @_;
        my @data = $code_ref->(@_);
        warn "OUT FROM " . scalar( *$code_symbol ) . join "\n", @data;
        return @data;
    };
}

sub ourDebug_1 {
    my $message = shift();
}

1;

...
...

## now, in your code: test_me.pl
sub my_funky_function : Debug {
    my $self = shift();
    my $var1 = shift();
    my $var2 = shift();

    ## do some stuff ##
    Debug( "VAR1: $var1" );
    Debug( "VAR2: $var2" );
}

## since you've tied any debugging checks in to your env
## you can turn MyPackage functionality on or off by setting
## env vars with the special tag: DEBUG_PERL

## set it to level 1 for everything
%> ALL_DEBUG_PERL=1 ./test_me.pl
## or
%> DEBUG_PERL=1 ./test_me.pl
## just for package 'main'
%> DEBUG_PERL_main=1 ./test_me.pl
## just for a single function
%> DEBUG_PERL_main__my_funky_function=1 ./test_me.pl
## force it off for everyone
%> NO_DEBUG_PERL=1 ./test_me.pl
Attribute::GlobalEnable provides switchable attribute hooks for all packages in your namespace. It's primarily been developed with the idea of providing debugging hooks that are very unobtrusive to the code. Since attributes trigger their functionality at compile time (or at the least very early on, before execution time), not enabling (or having your flags all off) does nothing to the code. All the special functionality will be skipped, and your code should operate like it wasn't there at all. It is, however, not specific to debugging, so you can do what you wish with your attributes.
Since all of the functionality of what your attributes do is defined by the user (you), you MUST subpackage Attribute::GlobalEnable. It handles all of the exporting for you, but you must format your hooks as explained below.
Along with the special attribute functionality, the package also builds special functions named the same as your attributes, and exports them to which ever package 'use's your sub-package. Along with this, you can define special flags that will turn this function on or off, and the flags play with the rest of the system as one would expect.
This package does not inherit from the Attribute class.
There are no functions to use directly with this package. There are, however, some special function names that YOU will define when subpackaging this, and a package constructor where you do just that.
This package is NOT an object. It is functional only. However, you must initialize the package for use. The package is (more or less) a singleton, so you can only initialize it once. DO NOT try to have multiple packages set values, as it will just skip subsequent attempts to import past the first one.
There are 2 required keys, and 1 optional:
This key is really the meat of it all, and the data you supply initializes the attributes, and what functions it expects to see in your sub-package. The structure of the hash is laid out as:
{'Attribute_name' => 'SPECIAL_KEY', 'Attribute_name_2'... }
The attribute name must be capitalized (see Attribute::Handlers), the SPECIAL_KEY can be any string. You can have as many key => value pairs as you deem necessary.
Setting this value has multiple effects. First, it assigns the attribute 'Attribute_name' to a subroutine in the callers namespace, named:
attr'Attribute_name'_# ## ex: attrDebug_1
The # should be an integer, and represents the number the SPECIAL_KEY has been set to. More on that in a second tho. The attribute name is set in the UNIVERSAL namespace, so now it can be utilized by everything under your particular perl sun.
What ever packages 'use' your sub-package, have another special subroutine named 'Attribute_name' exported to their namespace. This subroutine points to your sub-package subroutine named (similarly to above):
our'Attribute_name'_# ## ex: ourDebug_1
The # should be an integer (see below for proper values) This function can be turned on by the regular SPECIAL KEY, but also by any ENABLE_FLAGS that you've defined as well... but more on that later.
The 'SPECIAL_KEY' is the distinct identifier to trigger this attribute's functionality. It is not really meant to be used on its own (but it can be). It is mostly an identifier string that allows you to add stuff to it to easily customize what you want to see (or do or whatever). There are 2 special pre-strings that you can slap on to the beginning of the key:
This turns the attributes functionality on for ALL of those subroutines that have the attribute. This trumps all other settings, except for the NO_ pre-string.
This is essentially the default behaviour, turning the attribute stuff off. This trumps everything... Other 'SPECIAL_KEY's, and any ENABLE_FLAGS.
You can append package names, or even subroutines to the end of the 'SPECIAL_KEY', in order to turn the attribute functionality on for a specific package or subroutine. Just separate the 'SPECIAL_KEY' and your specific string with an underscore. Neato eh? There is one caveat to this. The regular perl package (namespace) separator is replaced with two underscores, so if you wanted to turn on attribute behaviour for MyPackage::ThisPackage, your key would look like so:
'SPECIAL_KEY'_MyPackage__ThisPackage
I did this so that you can just pass in the %ENV hash, and set your attribute 'SPECIAL_KEY's on the command line or whathave you.
Finally, the '#'s that you must name each of your special subs with represent a level for a particular functionality. This level is checked each time, and the appropriate subroutine will be called, or it will try the next level down. So, for example: if you just have attr'Attribute_name'_1, but you set your 'SPECIAL_KEY' to 3, then attr'Attribute_name'_1 will be executed. If you had an attr'Attribute_name'_2, then that subroutine would be executed instead of 1. This will not call each subroutine as it goes; it simply executes the first one it finds.
This must be set to a hash ref whose structure is laid out as:
SOME_FLAG => $integer,
$integer should be positive, and represents the attribute level you wish to do attribute stuff at (see ENABLE_ATTR above for more info on that). The actual hash can be empty, but the reference must exist.
This represents the actual user set triggers for the attributes. Telling GlobalEnable which to... well... enable, and which to skip.
See the previous section for a description on special characters etc...
The $hash_ref structure must be:
{ Attribute_name => [ list of flags ], Attribute_name_2 ... }
The ENABLE_FLAG is optional, and describes flags that can be set for the exported 'Attribute_name' subroutines. These are exported as global constants, so it looks nice and neat in your code. This essentially links that sub call to that flag. The flag is still set like it would normally be set in the ENABLE_CHK hash, however, you still must use the 'SPECIAL_KEY' (see above) in the assignment, so your assignment will look like:
'SPECIAL_KEY'_'FLAG'
See ENABLE_ATTR above for a description on the layout naming scheme for this particular subroutine name.
This is your attribute hook for a particular level. This must return a subroutine. The subroutine that it returns replaces the one the attribute is currently assigned to. You can do anything you wish at this point, as you'll have access to everything that's being passed in, everything that's being passed out, and whatever else you want.
It will always get these variables when it's called:
See perldoc Attribute::Handlers for more description on what these values are, or how to utilize them.
This is the sub that's pointed to from our exported 'Attribute_name' subroutine. If you pass in a valid flag, it'll clear that out before it sends the rest of the arguments your way. There is no need to return a sub, as this is the actual subroutine that's executed when you trigger this special sub.
For right now, see the tests for some examples. There's a test module in the test dir as well. I'll fill in some examples a little later.
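Pending those examples, here is a hedged sketch of a minimal sub-package following the SYNOPSIS above; the Trace attribute and its TRACE_PERL flag are invented for illustration:

```perl
package Attribute::GlobalEnable::Demo;

use Attribute::GlobalEnable(
    ENABLE_CHK  => \%ENV,
    ENABLE_ATTR => { Trace => 'TRACE_PERL' },
);

## Level-1 hook: wrap the attributed sub and log entry/exit.
sub attrTrace_1 {
    my ( $this_package, $caller_package, $symbol, $code_ref ) = @_;
    return sub {
        warn "enter: " . scalar( *$symbol ) . "\n";
        my @out = $code_ref->(@_);
        warn "leave: " . scalar( *$symbol ) . "\n";
        return @out;
    };
}

## Level-1 body for the exported Trace() helper.
sub ourTrace_1 {
    my $message = shift();
    warn "TRACE: $message\n";
}

1;
```

Code that 'use's this sub-package can then mark subs with `: Trace` and call `Trace(...)`, switching both on with `TRACE_PERL=1` in the environment.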
perldoc perlsub, Attribute::Handlers
Craig Monson (cmonson [at the following]malachiarts com)
You tried to init the package with nuttin. Gotta pass in some args.
This isn't meant to be run on its own.
your ENABLE_CHK wasn't a hash ref. Please read this doc ;)
your ENABLE_ATTR was in the wrong format.
Your key or value for ENABLE_ATTR wasn't in the right format.
If you're gonna set ENABLE_FLAG, the values for the keys must be array refs.
If you get this, then it's prolly a bug in the package. Please report it to me.
<none as of yet>
I suppose I (Craig Monson) own it. All Rights Reserved. This module is 100% free software, and may be used, reused, redistributed, messed with, subclassed, deleted, printed, compiled, and pooped on, just so long as you follow the terms described by Perl itself.
|
Ok, so I am actually going through the K&R C book (I know it is old and has a lot of outdated stuff, especially on the security side, but I am just trying to do the exercises). I've been playing with exercise 5-2, where I need to implement my own strcat with pointers. My code is the following:
#include <stdio.h>
#include <stdlib.h>
char *Strcat(char *string1, const char *string2);
int main(void){
char string1[100]="hello";
char string2[100]="1234";
printf("%s",Strcat(string1,string2));
return 0;
}
char *Strcat (char *string1, const char *string2){
int i=0;
char *temp=string1;
while(*string1){// move the pointer to find the end of the string
++string1;
}
while(*string1++=*string2++)//copy string 2 at the end of string 1
;
puts(string1);//print string 1 concatenated with string 2
return temp;//send back temp pointing to string1 for printing
}
++string1 has the effect on the variable equivalent to
string1 = string1 + 1. So by the time you try to print
string1 it no longer points to the start of the original string.
|
The Webpack 4 docs state that:
Webpack is a module bundler. Its main purpose is to bundle JavaScript files for usage in a browser, yet it is also capable of transforming, bundling, or packaging just about any resource or asset.
Webpack has become one of the most important tools for modern web development. It’s primarily a module bundler for your JavaScript, but it can be taught to transform all of your front-end assets like HTML, CSS, even images. It can give you more control over the number of HTTP requests your app is making and allows you to use other flavors of those assets (Pug, Sass, newer flavors of ECMAScript, and so on).
To follow along at home, you’ll need to have Node.js installed. You can also download the demo app from our GitHub repo.
Setup
Let’s initialize a new project with npm and install
webpack and
webpack-cli:
mkdir webpack-demo && cd webpack-demo
npm init -y
npm install --save-dev webpack webpack-cli
Next we’ll create the following directory structure and contents:
webpack-demo
|- package.json
+ |- webpack.config.js
+ |- /src
+   |- index.js
+ |- /dist
+   |- index.html
dist/index.html
<!doctype html>
<html>
  <head>
    <title>Hello Webpack</title>
  </head>
  <body>
    <script src="bundle.js"></script>
  </body>
</html>
src/index.js
const root = document.createElement("div") root.innerHTML = `<p>Hello Webpack.</p>` document.body.appendChild(root)
webpack.config.js
const path = require('path')

module.exports = {
  entry: './src/index.js',
  output: {
    filename: 'bundle.js',
    path: path.resolve(__dirname, 'dist')
  }
}
This tells Webpack to compile the code in our entry point
src/index.js and output a bundle in
/dist/bundle.js. Let’s add an npm script for running Webpack.
package.json
{
  ...
  "scripts": {
-   "test": "echo \"Error: no test specified\" && exit 1"
+   "develop": "webpack --mode development --watch",
+   "build": "webpack --mode production"
  },
  ...
}
Using the
npm run develop command, we can create our first bundle!
Asset Size Chunks Chunk Names bundle.js 2.92 KiB main [emitted] main
You should now be able to load
dist/index.html in your browser and be greeted with “Hello Webpack”. using ES Modules, and Webpack will be able to produce a bundle for production that will work in all browsers.
Restart the build with Ctrl + C and run
npm run build to compile our bundle in production mode.
Asset Size Chunks Chunk Names bundle.js 647 bytes main [emitted] main
Notice that the bundle size has come down from 2.92 KiB to 647 bytes.
Take another look at
dist/bundle.js and you’ll see an ugly mess of code. Our bundle has been minified with UglifyJS: the code will run exactly the same, but it’s done with the smallest file size possible.
- --mode development optimizes for build speed and debugging
- --mode production optimizes for execution speed at runtime and output file size.
Modules
Using ES Modules, you can split up your large programs into many small, self-contained programs.
Out of the box, Webpack knows how to consume ES Modules using
import and
export statements. As an example, let’s try this out now by installing lodash-es and adding a second module:
npm install --save-dev lodash-es
src/index.js
import { groupBy } from "lodash-es" import people from "./people" const managerGroups = groupBy(people, "manager") const root = document.createElement("div") root.innerHTML = `<pre>${JSON.stringify(managerGroups, null, 2)}</pre>` document.body.appendChild(root)
src/people.js
const people = [ { manager: "Jen", name: "Bob" }, { manager: "Jen", name: "Sue" }, { manager: "Bob", name: "Shirley" } ] export default people
Run
npm run develop to start Webpack and refresh
index.html. You should see an array of people grouped by manager printed to the screen.
Note: Imports without a relative path, like 'lodash-es', are modules from npm installed to /node_modules. Your own modules will always need a relative path, like './people', as this is how you can tell them apart.
Notice in the console that our bundle size has increased to 1.41 MiB! This is worth keeping an eye on, though in this case there’s no cause for concern. Using npm run build to compile in production mode, all of the unused lodash modules from lodash-es are removed from the bundle. This process of removing unused imports is known as tree-shaking, and is something you get for free with Webpack.
> npm run develop Asset Size Chunks Chunk Names bundle.js 1.41 MiB main [emitted] [big] main
> npm run build Asset Size Chunks Chunk Names bundle.js 16.7 KiB 0 [emitted] main
Loaders
Loaders let you run preprocessors on files as they’re imported. This allows you to bundle static resources beyond JavaScript, but let’s look at what can be done when loading
.js modules first.
Let’s keep our code modern by running all
.js files through the next-generation JavaScript transpiler Babel:
npm install --save-dev "babel-loader@^8.0.0-beta" @babel/core @babel/preset-env
webpack.config.js
const path = require('path')

module.exports = {
  entry: './src/index.js',
  output: {
    filename: 'bundle.js',
    path: path.resolve(__dirname, 'dist')
  },
+ module: {
+   rules: [
+     {
+       test: /\.js$/,
+       exclude: /(node_modules|bower_components)/,
+       use: {
+         loader: 'babel-loader',
+       }
+     }
+   ]
+ }
}
.babelrc
{ "presets": [ ["@babel/env", { "modules": false }] ], "plugins": ["syntax-dynamic-import"] }
This config prevents Babel from transpiling import and export statements into CommonJS (leaving module handling to Webpack), and enables dynamic imports, which we’ll look at later in the section on Code Splitting.
We’re now free to use modern language features, and they’ll be compiled down to ES5 that runs in all browsers.
Sass
Loaders can be chained together into a series of transforms. A good way to demonstrate how this works is by importing Sass from our JavaScript:
npm install --save-dev style-loader css-loader sass-loader node-sass
webpack.config.js
module.exports = { ... module: { rules: [ ... + { + test: /\.scss$/, + use: [{ + loader: 'style-loader' + }, { + loader: 'css-loader' + }, { + loader: 'sass-loader' + }] + } ] } }
These loaders are processed in reverse order:
- sass-loader transforms Sass into CSS.
- css-loader parses the CSS into JavaScript and resolves any dependencies.
- style-loader outputs our CSS into a <style> tag in the document.
You can think of these as function calls. The output of one loader feeds as input into the next:
styleLoader(cssLoader(sassLoader("source")))
Let’s add a Sass source file and import it as a module.
src/style.scss
$bluegrey: #2b3a42; pre { padding: 8px 16px; background: $bluegrey; color: #e1e6e9; font-family: Menlo, Courier, monospace; font-size: 13px; line-height: 1.5; text-shadow: 0 1px 0 rgba(23, 31, 35, 0.5); border-radius: 3px; }
src/index.js
import { groupBy } from 'lodash-es' import people from './people' + import './style.scss' ...
Restart the build with Ctrl + C and
npm run develop. Refresh
index.html in the browser and you should see some styling.
CSS in JS
We just imported a Sass file from our JavaScript, as a module.
Open up
dist/bundle.js and search for “pre {”. Indeed, our Sass has been compiled to a string of CSS and saved as a module within our bundle. When we import this module into our JavaScript,
style-loader outputs that string into an embedded
<style> tag.
Why would you do such a thing?
I won’t delve too far into this topic here, but here are a few reasons to consider:
- A JavaScript component you may want to include in your project may depend on other assets to function properly (HTML, CSS, Images, SVG). If these can all be bundled together, it’s far easier to import and use.
- Dead code elimination: When a JS component is no longer imported by your code, the CSS will no longer be imported either. The bundle produced will only ever contain code that does something.
- CSS Modules: The global namespace of CSS makes it very difficult to be confident that a change to your CSS will not have any side effects. CSS modules change this by making CSS local by default and exposing unique class names that you can reference in your JavaScript.
- Bring down the number of HTTP requests by bundling/splitting code in clever ways.
Images
The last example of loaders we’ll look at is the handling of images with
file-loader.
In a standard HTML document, images are fetched when the browser encounters an
img tag or an element with a
background-image property. With Webpack, you can optimize this in the case of small images by storing the source of the images as strings inside your JavaScript. By doing this, you preload them and the browser won’t have to fetch them with separate requests later:
npm install --save-dev file-loader
webpack.config.js
module.exports = { ... module: { rules: [ ... + { + test: /\.(png|svg|jpg|gif)$/, + use: [ + { + loader: 'file-loader' + } + ] + } ] } }
Download a test image with this command:
curl --output src/code.png
Restart the build with Ctrl + C and
npm run develop and you’ll now be able to import images as modules!
src/index.js
import { groupBy } from 'lodash-es' import people from './people' import './style.scss' + import './image-example' ...
src/image-example.js
import codeURL from "./code.png" const img = document.createElement("img") img.src = codeURL img.style = "background: #2B3A42; padding: 20px" img.width = 32 document.body.appendChild(img)
This will include an image where the
src attribute contains a data URI of the image itself:
<img src=" style="background: #2B3A42; padding: 20px" width="32">
Background images in our CSS are also processed by
file-loader.
src/style.scss
$bluegrey: #2b3a42; pre { padding: 8px 16px; - background: $bluegrey; + background: $bluegrey url("code.png") no-repeat center center / 32px 32px; color: #e1e6e9; font-family: Menlo, Courier, monospace; font-size: 13px; line-height: 1.5; text-shadow: 0 1px 0 rgba(23, 31, 35, 0.5); border-radius: 3px; }
See more examples of Loaders in the docs:
Dependency Graph
You should now be able to see how loaders help to build up a tree of dependencies amongst your assets. This is what the image on the Webpack home page is demonstrating.
Though JavaScript is the entry point, Webpack appreciates that your other asset types — like HTML, CSS, and SVG — each have dependencies of their own, which should be considered as part of the build process.
Code Splitting
The Webpack docs describe code splitting as the ability to split your code into various bundles, which can then be loaded on demand or in parallel.
So far, we’ve only seen a single entry point —
src/index.js — and a single output bundle —
dist/bundle.js. When your app grows, you’ll need to split this up so that the entire codebase isn’t downloaded at the start. A good approach is to use Code Splitting and Lazy Loading to fetch things on demand as the code paths require them.
Let’s demonstrate this by adding a “chat” module, which is fetched and initialized when someone interacts with it. We’ll make a new entry point and give it a name, and we’ll also make the output’s filename dynamic so it’s different for each chunk.
webpack.config.js
const path = require('path')

module.exports = {
- entry: './src/index.js',
+ entry: {
+   app: './src/app.js'
+ },
  output: {
-   filename: 'bundle.js',
+   filename: '[name].bundle.js',
    path: path.resolve(__dirname, 'dist')
  },
  ...
}
src/app.js
import './app.scss' const button = document.createElement("button") button.textContent = 'Open chat' document.body.appendChild(button) button.onclick = () => { import(/* webpackChunkName: "chat" */ "./chat").then(chat => { chat.init() }) }
src/chat.js
import people from "./people" export function init() { const root = document.createElement("div") root.innerHTML = `<p>There are ${people.length} people in the room.</p>` document.body.appendChild(root) }
src/app.scss
button { padding: 10px; background: #24b47e; border: 1px solid rgba(#000, .1); border-width: 1px 1px 3px; border-radius: 3px; font: inherit; color: #fff; cursor: pointer; text-shadow: 0 1px 0 rgba(#000, .3), 0 1px 1px rgba(#000, .2); }
Note: Despite the
/* webpackChunkName */ comment for giving the bundle a name, this syntax is not Webpack specific. It’s the proposed syntax for dynamic imports intended to be supported directly in the browser.
Let’s run
npm run build and see what this generates:
Asset Size Chunks Chunk Names chat.bundle.js 377 bytes 0 [emitted] chat app.bundle.js 7.65 KiB 1 [emitted] app
As our entry bundle has changed, we’ll need to update our path to it as well.
dist/index.html
<!doctype html> <html> <head> <title>Hello Webpack</title> </head> <body> - <script src="bundle.js"></script> + <script src="app.bundle.js"></script> </body> </html>
Let’s start up a server from the dist directory to see this in action:
cd dist npx serve
Open the served page in your browser and see what happens. Only app.bundle.js is fetched initially. When the button is clicked, the chat module is imported and initialized.
With very little effort, we’ve added dynamic code splitting and lazy loading of modules to our app. This is a great starting point for building a highly performant web app.
Plugins
While loaders apply transforms to single files, plugins operate across larger chunks of code.
Now that we’re bundling our code, external modules and static assets, our bundle will grow — quickly. Plugins are here to help us split our code in clever ways and optimize things for production.
Without knowing it, we’ve actually already used many default Webpack plugins via the “mode” setting:

development
- Provides process.env.NODE_ENV with the value “development”
- NamedModulesPlugin

production
- Provides process.env.NODE_ENV with the value “production”
- UglifyJsPlugin
- ModuleConcatenationPlugin
- NoEmitOnErrorsPlugin
Production
Before adding additional plugins, we’ll first split our config up so that we can apply plugins specific to each environment.
Rename
webpack.config.js to
webpack.common.js and add a config file for development and production.
- |- webpack.config.js
+ |- webpack.common.js
+ |- webpack.dev.js
+ |- webpack.prod.js
We’ll use
webpack-merge to combine our common config with the environment-specific config:
npm install --save-dev webpack-merge
webpack.dev.js
const merge = require('webpack-merge') const common = require('./webpack.common.js') module.exports = merge(common, { mode: 'development' })
webpack.prod.js
const merge = require('webpack-merge') const common = require('./webpack.common.js') module.exports = merge(common, { mode: 'production' })
package.json
"scripts": { - "develop": "webpack --watch --mode development", - "build": "webpack --mode production" + "develop": "webpack --watch --config webpack.dev.js", + "build": "webpack --config webpack.prod.js" },
Now we can add plugins specific to development into
webpack.dev.js and plugins specific to production in
webpack.prod.js.
Split CSS
It’s considered best practice to split your CSS from your JavaScript when bundling for production using ExtractTextWebpackPlugin.
The current
.scss loaders are perfect for development, so we’ll move those from
webpack.common.js into
webpack.dev.js and add
ExtractTextWebpackPlugin to
webpack.prod.js only.
npm install --save-dev extract-text-webpack-plugin@4.0.0-beta.0
webpack.common.js
... module.exports = { ... module: { rules: [ ... - { - test: /\.scss$/, - use: [ - { - loader: 'style-loader' - }, { - loader: 'css-loader' - }, { - loader: 'sass-loader' - } - ] - }, ... ] } }
webpack.dev.js
const merge = require('webpack-merge') const common = require('./webpack.common.js') module.exports = merge(common, { mode: 'development', + module: { + rules: [ + { + test: /\.scss$/, + use: [ + { + loader: 'style-loader' + }, { + loader: 'css-loader' + }, { + loader: 'sass-loader' + } + ] + } + ] + } })
webpack.prod.js
const merge = require('webpack-merge') + const ExtractTextPlugin = require('extract-text-webpack-plugin') const common = require('./webpack.common.js') module.exports = merge(common, { mode: 'production', + module: { + rules: [ + { + test: /\.scss$/, + use: ExtractTextPlugin.extract({ + fallback: 'style-loader', + use: ['css-loader', 'sass-loader'] + }) + } + ] + }, + plugins: [ + new ExtractTextPlugin('style.css') + ] })
Let’s compare the output of our two build scripts:
> npm run develop Asset Size Chunks Chunk Names app.bundle.js 28.5 KiB app [emitted] app chat.bundle.js 1.4 KiB chat [emitted] chat
> npm run build Asset Size Chunks Chunk Names chat.bundle.js 375 bytes 0 [emitted] chat app.bundle.js 1.82 KiB 1 [emitted] app style.css 424 bytes 1 [emitted] app
Now that our CSS is extracted from our JavaScript bundle for production, we need to
<link> to it from our HTML.
dist/index.html
<!DOCTYPE html> <html> <head> <meta charset="UTF-8"> <title>Code Splitting</title> + <link href="style.css" rel="stylesheet"> </head> <body> <script type="text/javascript" src="app.bundle.js"></script> </body> </html>
This allows for parallel download of the CSS and JavaScript in the browser, so will be faster-loading than a single bundle. It also allows the styles to be displayed before the JavaScript finishes downloading.
Generating HTML
Whenever our outputs have changed, we’ve had to keep updating
index.html to reference the new file paths. This is precisely what
html-webpack-plugin was created to do for us automatically.
We may as well add
clean-webpack-plugin at the same time to clear out our
/dist directory before each build.
npm install --save-dev html-webpack-plugin clean-webpack-plugin
webpack.common.js
const path = require('path') + const CleanWebpackPlugin = require('clean-webpack-plugin'); + const HtmlWebpackPlugin = require('html-webpack-plugin'); module.exports = { ... + plugins: [ + new CleanWebpackPlugin(['dist']), + new HtmlWebpackPlugin({ + title: 'My killer app' + }) + ] }
Now every time we build, dist will be cleared out. We’ll now see
index.html output too, with the correct paths to our entry bundles.
Running
npm run develop produces this:
<!DOCTYPE html> <html> <head> <meta charset="UTF-8"> <title>My killer app</title> </head> <body> <script type="text/javascript" src="app.bundle.js"></script> </body> </html>
And
npm run build produces this:
<!DOCTYPE html> <html> <head> <meta charset="UTF-8"> <title>My killer app</title> <link href="style.css" rel="stylesheet"> </head> <body> <script type="text/javascript" src="app.bundle.js"></script> </body> </html>
Development
The webpack-dev-server provides you with a simple web server and gives you live reloading, so you don’t need to manually refresh the page to see changes.
npm install --save-dev webpack-dev-server
package.json
{ ... "scripts": { - "develop": "webpack --watch --config webpack.dev.js", + "develop": "webpack-dev-server --config webpack.dev.js", } ... }
> npm run develop ℹ 「wds」: Project is running at ℹ 「wds」: webpack output is served from /
Open the dev-server address in your browser and make a change to one of the JavaScript or CSS files. You should see it build and refresh automatically.
HotModuleReplacement
The
HotModuleReplacement plugin goes one step further than Live Reloading and swaps out modules at runtime without the refresh. When configured correctly, this saves a huge amount of time during development of single page apps. Where you have a lot of state in the page, you can make incremental changes to components, and only the changed modules are replaced and updated.
webpack.dev.js
+ const webpack = require('webpack') const merge = require('webpack-merge') const common = require('./webpack.common.js') module.exports = merge(common, { mode: 'development', + devServer: { + hot: true + }, + plugins: [ + new webpack.HotModuleReplacementPlugin() + ], ... }
Now we need to accept changed modules from our code to re-initialize things.
src/app.js
+ if (module.hot) { + module.hot.accept() + } ...
Note: When Hot Module Replacement is enabled,
module.hot is set to
true for development and
false for production, so these are stripped out of the bundle.
Restart the build and see what happens when we do the following:
- Click Open chat
- Add a new person to the people.js module
- Click Open chat again.
Here’s what’s happening:
- When Open chat is clicked, the chat.js module is fetched and initialized
- HMR detects when people.js is modified
- module.hot.accept() in app.js causes all modules loaded by this entry chunk to be replaced
- When Open chat is clicked again, chat.init() is run with the code from the updated module.
CSS Replacement
Let’s change the button color to red and see what happens:
src/app.scss
button { ... - background: #24b47e; + background: red; ... }
Now we get to see instant updates to our styles without losing any state. This is a much-improved developer experience! And it feels like the future.
HTTP/2
One of the primary benefits of using a module bundler like Webpack is that it can help you improve performance by giving you control over how the assets are built and then fetched on the client. It has been considered best practice for years to concatenate files to reduce the number of requests that need to be made on the client. This is still valid, but HTTP/2 now allows multiple files to be delivered in a single request, so concatenation isn’t a silver bullet anymore. Your app may actually benefit from having many small files individually cached. The client could then fetch a single changed module rather than having to fetch an entire bundle again with mostly the same contents.
The creator of Webpack, Tobias Koppers, has written an informative post explaining why bundling is still important, even in the HTTP/2 era.
Read more about this over at Webpack & HTTP/2.
Over to You
I hope you’ve found this introduction to Webpack helpful and are able to start using it to great effect. It can take a little time to wrap your head around Webpack’s configuration, loaders and plugins, but learning how this tool works will pay off.
The documentation for Webpack 4 is currently being worked on, but is really well put together. I highly recommend reading through the Concepts and Guides for more information. Here’s a few other topics you may be interested in:
- Source Maps for development
- Source Maps for production
- Cache busting with hashed filenames
- Splitting a vendor bundle
Is Webpack 4 your module bundler of choice? Let me know in the comments below.
https://www.sitepoint.com/beginners-guide-webpack-module-bundling/
Last Friday I posed the question of being able to create "more than SharePoint but less than InfoPath" input forms without having to resort to writing an entire facade on top of a SharePoint List. Didn't get any response to things but I did some digging into the WebPartPages namespace. Microsoft creates a raft of WebParts that are used to create the SharePoint UI itself. The ListFormWebPart and ListViewWebPart are particularly interesting as you just point them at a list and they're supposed to handle all the heavy lifting of generating forms and views. And they do. All of the .aspx pages for lists use them so that's how all the extra columns you add to lists and document libraries appear when you add/edit an item or document. The ListFormWebPart just automagically creates the form on the fly by interrogating the list, getting the fields and their datatypes, and creating the various controls (TextBox, CheckBox, Dropdown, Calendar, etc.) for you.
Anyways, I thought I would be clever (I always think I'm clever when I do these things) to create one of these things (they're sealed so you can't inherit your own Web Part from them) and render it in my own Web Part. Before the render (in the pre-render event) I could just grab the contents of the Web Part and make some "on-the-fly" modifications (like joining up dropdowns, etc.). That idea is a bust because as hard as I try, I can't do anything with the ListFormWebPart once it's been created. I can create it fine, set the list name and all that jazz but it always seems to render an empty Web Part. Poking around inside Reflector didn't show me much either (is SharePoint really that proprietary that Microsoft has to obfuscate the code?) so I'm at a loss how to tap these classes. Even though they're sealed, I don't recall anywhere Microsoft saying you can't use them (some classes are marked for internal use only, but not these ones). Just use them for good, not evil. Anyways, the quest goes on and I'll continue to post my findings here as I try to discover a value for them outside of what SharePoint does with them.
Tzunami SharePoint Designer. I spent most of my Memorial/Victoria day looking at this new tool. Tzunami used to be the old K-Wise guys (there were some tools for SharePoint 2001 from them like K-Doc if I recall). The Designer is meant to be a tool with which you can basically build up your entire portal structure offline, saving it in various iterations to a local file (after sucking down the initial site from a SharePoint server), then committing those changes back to SharePoint to create everything. The tool is just coming out and has some growing to do but the initial reaction is pretty good. There are a couple of things that I noticed with this version:
The biggest thing that is mulling in my noodle is the fact that you can alter creation and modification dates on things like lists and list items. I need to do some digging to see how they're doing this because the properties through the Object Model are read-only so methinks they're doing something down at the database layer because I can't see how else they accomplish this. That bothers me for two reasons. First, never ever ever ever ever (did I say ever) touch the database directly. Period. Do not pass Go. Do not collect your versioned documents. There's just so much going on behind the scenes with transactions and logging and synchronization and other SQL stuff oh my. Even if you go through the documented stored procs Microsoft just won't support you and there are so many things that can go very, very bad doing this.
Second, while there are a lot of people that jump up and scream when they try to migrate their data into SharePoint and generally rant about how it can't retain the original dates and such, I personally believe there's good reason why those things are read-only for developers in the Object Model. Basically it's a CYA thing. Yes, your developers do need access to be able to make changes to a site but do you really want them with the ability to alter that information once it's set? With all the SOX stuff going on, it's a good thing that when you write an item to a list you'll always know who created it and when it was last modified. Hold on, now with a single tool I can alter history! I can say "No, you didn't create that document on Monday May 13, 2001. Bob created it on Friday June 3, 2002." No, that's not a feature I want to enable. As well, there's a question of data integrity. SharePoint doesn't have a lot of it. I can delete a lookup value in a list or a user from Active Directory and depending on the phase of the moon, the information may or may not be there later. However I can always be rest assured that something silly will never happen like a modification date being set earlier than a documents creation date. Well, that's out the window now with this tool and if I did have any reports I created on aging those are probably going to take some explaining now.
Anyways, if you into cutting edge tools and don't want to wait you can contact them to get a trial version. Like I said, there are some features missing and some features you might not want to have. The support guys are excellent and responsive so if you do have any questions they'll be happy to answer. The trial runs for something like 7 days and has a limited number of commits you can make to the server.
http://weblogs.asp.net/bsimser/archive/2005/05/24/408727.aspx
#831 – Embedding a Cursor in Your Project as a Resource
If you have a custom cursor that your application uses, you can load it at run-time from an external file. But this requires that you distribute the cursor file with your application. To avoid distributing the .cur file, you can embed this file into your project as a resource.
First, add the .cur file to your project and set its Build Action to Resource.
To load the cursor at run-time, you use the Application.GetResourceStream method, which returns a StreamResourceInfo object. You can then use the Stream property of this object to create the cursor. (You’ll need a using statement for the System.Windows.Resources namespace).
private void Button_Click_1(object sender, RoutedEventArgs e) { StreamResourceInfo sriCurs = Application.GetResourceStream( new Uri("captain-america-arrow.cur", UriKind.Relative)); this.Cursor = new Cursor(sriCurs.Stream); }
https://wpf.2000things.com/tag/cursor/
Egg cooking
Since Trac 0.9 it has been possible to write plugins for Trac to extend Trac functionality. Even better, you can deploy plugins as Python eggs that really makes plugin development fun.
This tutorial shows how to make an egg, successfully load an egg in Trac and in advanced topics how to serve templates and static content from an egg.
You should be familiar with the Trac component architecture and plugin development. This plugin is based on the example in that plugin-development article; we just extend it a bit further.
Required items
First you need setuptools. For instructions and files see EasyInstall page.
Then, of course, you need Trac 0.9. Currently, that means a source checkout from the Subversion repository. Instructions for getting that done are located at the TracDownload page.
Directories
To develop a plugin you need to create few directories to keep things together.
So let's create following directories:
./helloworld-plugin/ ./helloworld-plugin/helloworld/ ./helloworld-plugin/TracHelloworld.egg-info/
Main plugin
The first step is to generate the main module for this plugin. We will construct a simple plugin that displays "Hello world!" on screen when accessed through the /helloworld URL. The plugin also provides a "Hello" button that is by default rendered on the far right.
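The main module itself (./helloworld-plugin/helloworld/helloworld.py) follows the plugin-development article mentioned above. A minimal sketch of the request-handling part might look like the following; the stub classes in the except branch are purely illustrative, so the sketch can be read without a Trac installation, and a real plugin would rely on trac.core directly:

```python
# Sketch of ./helloworld-plugin/helloworld/helloworld.py (Trac 0.9-era API).
try:
    from trac.core import Component, implements
    from trac.web.main import IRequestHandler
except ImportError:
    # Illustrative stand-ins so the sketch is readable without Trac installed.
    class Component(object):
        pass

    def implements(*interfaces):
        pass

    class IRequestHandler(object):
        pass


class HelloWorldPlugin(Component):
    implements(IRequestHandler)

    def match_request(self, req):
        # Claim the /helloworld URL for this plugin.
        return req.path_info == '/helloworld'

    def process_request(self, req):
        # Send a plain-text "Hello world!" response.
        req.send_response(200)
        req.send_header('Content-Type', 'text/plain')
        req.end_headers()
        req.write('Hello world!')
```

The "Hello" navigation button would be added via a second interface (INavigationContributor), which is covered in the plugin-development article.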
Since this is not enough, we need to turn our simple plugin into a proper Python package. To do so, you simply create the magic __init__.py in ./helloworld-plugin/helloworld/:
# Helloworld module from helloworld import *
Make it as an egg
Now it's time to make it into an egg. For that we need a chicken called setup.py, created in ./helloworld-plugin/:
from setuptools import setup PACKAGE = 'TracHelloworld' VERSION = '0.1' setup(name=PACKAGE, version=VERSION, packages=['helloworld'])
To make the egg loadable in Trac we need to create one more file. In ./helloworld-plugin/TracHelloworld.egg-info/, create the file trac_plugin.txt:
helloworld
First deployment
Now you can try to build your first plugin. Run the command python setup.py bdist_egg in the directory where you created it. If everything went OK, you should have a small egg file in the ./dist/ directory. Continue by reading EggCookingTutorial/AdvancedEggCooking to really integrate the plugin into the Trac layout.
Attachments (2)
- helloworld-plugin-1.tar.gz (775 bytes) - added by rede 10 years ago.
First version of tutorial plugin
- helloworld-plugin-1-trac-0.9.5.tar.gz (774 bytes) - added by maxb 9 years ago.
Tutorial files, updated for Trac 0.9.5
http://trac-hacks.org/wiki/EggCookingTutorial/BasicEggCooking?version=3
Install boto
My machine is running Ubuntu 10.04 with Python 2.6. I ran 'easy_install boto', which installed boto-2.0rc1. This also installs several utilities in /usr/local/bin, of interest to this article being /usr/local/bin/route53 which provides an easy command-line-oriented way of interacting with Route 53.
Create boto configuration file
I created ~/.boto containing the Credentials section with the AWS access key and secret key:
# cat ~/.boto
[Credentials]
aws_access_key_id = "YOUR_ACCESS_KEY"
aws_secret_access_key = "YOUR_SECRET_KEY"
Interact with Route 53 via the route53 utility
If you just run 'route53', the command will print the help text for its usage. For our purpose, we'll make sure there are no errors when we run:
# route53 ls
If you don't have any DNS zones already created, this will return nothing.
Create a new DNS zone with route53
We'll create a zone called 'mytestzone.com':
# route53 create mytestzone.com
Pending, please add the following Name Servers:
ns-674.awsdns-20.net
ns-1285.awsdns-32.org
ns-1986.awsdns-56.co.uk
ns-3.awsdns-00.com
Note that you will have to properly register 'mytestzone.com' with a registrar, then point the name server information at that registrar to the name servers returned when the Route 53 zone was created (in our case the 4 name servers above).
At this point, if you run 'route53 ls' again, you should see your newly created zone. You need to make note of the zone ID:
root@m2:~# route53 ls
================================================================================
| ID:   MYZONEID
| Name: mytestzone.com.
| Ref:  my-ref-number
================================================================================
{}
You can also get the existing records from a given zone by running the 'route53 get' command which also takes the zone ID as an argument:
# route53 get MYZONEID
Name             Type  TTL     Value(s)
mytestzone.com.  NS    172800  ns-674.awsdns-20.net.,ns-1285.awsdns-32.org.,ns-1986.awsdns-56.co.uk.,ns-3.awsdns-00.com.
mytestzone.com.  SOA   900     ns-674.awsdns-20.net. awsdns-hostmaster.amazon.com. 1 7200 900 1209600 86400
Adding and deleting DNS records using route53
Let's add an A record to the zone we just created. The route53 utility provides an 'add_record' command which takes the zone ID as an argument, followed by the name, type, value and TTL of the new record, and an optional comment. The TTL is also optional, and defaults to 600 seconds if not specified. Here's how to add an A record with a TTL of 3600 seconds:
# route53 add_record MYZONEID test.mytestzone.com A SOME_IP_ADDRESS 3600
{u'ChangeResourceRecordSetsResponse': {u'ChangeInfo': {u'Status': u'PENDING', u'SubmittedAt': u'2011-06-20T23:01:23.851Z', u'Id': u'/change/CJ2GH5O38HYKP0'}}}
Now if you run 'route53 get MYZONEID' you should see your newly added record.
To delete a record, use the 'route53 del_record' command, which takes the same arguments as add_record. Here's how to delete the record we just added:
# route53 del_record Z247A81E3SXPCR test.mytestzone.com. A SOME_IP_ADDRESS
{u'ChangeResourceRecordSetsResponse': {u'ChangeInfo': {u'Status': u'PENDING', u'SubmittedAt': u'2011-06-21T01:14:35.343Z', u'Id': u'/change/C2B0EHROD8HEG8'}}}
Managing Route 53 programmatically with boto
As useful as the route53 command-line utility is, sometimes you need to interact with the Route 53 service from within your program. Since this post is about boto, I'll show some Python code that uses the Route 53 functionality.
Here's how you open a connection to the Route 53 service:
from boto.route53.connection import Route53Connection
conn = Route53Connection()
(this assumes you have the AWS credentials in the ~/.boto configuration file)
Here's how you retrieve and walk through all your Route 53 DNS zones, selecting a zone by name:
ROUTE53_ZONE_NAME = "mytestzone.com."
zones = {}
conn = Route53Connection()
results = conn.get_all_hosted_zones()
zones = results['ListHostedZonesResponse']['HostedZones']
found = 0
for zone in zones:
    print zone
    if zone['Name'] == ROUTE53_ZONE_NAME:
        found = 1
        break
if not found:
    print "No Route53 zone found for %s" % ROUTE53_ZONE_NAME
(note that you need the ending period in the zone name that you're looking for, as in "mytestzone.com.")
Here's how you add a CNAME record with a TTL of 60 seconds to an existing zone (assuming the 'zone' variable contains the zone you're looking for). You need to operate on the zone ID, which is the identifier following the text '/hostedzone/' in the 'Id' field of the variable 'zone'.
from boto.route53.record import ResourceRecordSets
zone_id = zone['Id'].replace('/hostedzone/', '')
changes = ResourceRecordSets(conn, zone_id)
change = changes.add_change("CREATE", 'test2.%s' % ROUTE53_ZONE_NAME, "CNAME", 60)
change.add_value("some_other_name")
changes.commit()
To delete a record, you use the exact same code as above, but with "DELETE" instead of "CREATE".
I leave other uses of the 'route53' utility and of the boto Route 53 API as an exercise to the reader.
5 comments:
Great post. Too bad it's all Python! Need to finish up work on an LWRP for Route 53. Surprised something like that doesn't already exist.
Don't put double quotes (") in the .boto config file, or it will break
Hello, I would like to introduce new application,DNS30 Professional Edition.
Route 53 is designed to automatically scale to handle very large query volumes without any intervention from user.We have developed a UI tool for this interface - DNS30 Professional Edition.We also have online interface for this application.
Hi Grig,
Your article helped me out when I was trying to learn route53 with boto.
I did some work to try and simplify the boto interface, and I think it turned out pretty well. Check out the code on github.
I also wrote a little blog intro, here.
Let me know what you think!
Hi Brad -- thanks for the comments. Slick53 seems like a very useful layer on top of boto and route53 -- congrats!
|
Firstly, my setup is an Ubuntu laptop and an Ubuntu server.
I have a program on my local laptop which needs to access a certain web service (let's call it). Now this service has a firewall which only allows access from my server's IP.
Is there some type of SSH tunnel I could use between my laptop and server so that when a Python script on my laptop sends a request, that service sees the request coming from my server's IP?
I know it would look something like:
ssh -N -R 80:localhost:80 user@myserver
but I'm not sure exactly.
What you want is not a reverse tunnel but a regular tunnel.
ssh -L 80:someserver.com:80 user@myserver
This will create a listening socket on port 80 of your laptop (localhost) that will go to someserver.com through the SSH server on myserver.
I usually combine tunnels with the options -CfN, -C will enable compression (speeds things up a bit), -f sends the SSH to the background once the authentication is complete (so you still have a chance to enter the password if needed), -N will make sure no command is executed on the SSH server (it's not really safe to have an SSH running in the background that could hypothetically be used send commands to the server, it's a bit of healthy paranoia/a good practice).
If you don't care about having a very secure connection between your laptop and myserver, you can also change the cipher to something fast, like blowfish using -c blowfish, or arcfour128 (which is faster still).
So what I would use is this:
ssh -CfNc arcfour128 -L 80:someserver.com:80 user@myserver
Which will give you a nice, fast tunnel that goes straight into the background instead of leaving an open command prompt on your server.
Keep in mind that if you send it to the background, when you want to break the tunnel, you'll have to first find the process id by doing something like ps -ef | grep ssh and kill the correct process id.
ps -ef | grep ssh
You can verify that the tunnel is listening locally with:

netstat -an | grep 127.0.0.1:80.*LISTEN
tcp 0 0 127.0.0.1:80 0.0.0.0:* LISTEN
You could use the SOCKS proxy that ssh provides. Connect via
ssh -D 9999 user@myserver
and then you could use this SOCKS proxy in your python script as described in How can I use a SOCKS 4/5 proxy with urllib2:
import socks
import socket
socks.setdefaultproxy(socks.PROXY_TYPE_SOCKS5, "127.0.0.1", 9999)
socket.socket = socks.socksocket
(Put this code at the top of your script)
My suggestion would be to use:
ssh -L 8080:localhost:80 user@server
This way, you don't run afoul of port 80 being a privileged port; you connect your program to port 8080 instead.
Also, I would use options -C for compression, -f for backgrounding, and -N for not opening a terminal.
So ssh -CfN -L 8080:localhost:80 user@server should do the trick.
Following up on your comment, please allow me to quote from tutorialspoint.com:
A simple client:
import socket               # Import socket module
s = socket.socket()         # Create a socket object
host = socket.gethostname() # Get local machine name
port = 8080                 # Reserve a port for your service.
s.connect((host, port))
s.close()
As you are using Python's means of connecting to networks, I am assuming there is a connect() statement in your code.
connect()
(Follow the link for more in-depth info.) ;-)
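To make the connect() idea concrete, here is a self-contained sketch (the port choice and echo behaviour are invented for illustration): it stands in for the SSH tunnel's local end with a plain listener on a free port, then connects to it exactly as your script would connect to the tunnel at 127.0.0.1:8080.

```python
import socket
import threading

def run_echo_server(srv):
    # Accept one connection, upper-case whatever arrives, and reply.
    conn, _ = srv.accept()
    data = conn.recv(16)
    conn.sendall(data.upper())
    conn.close()

# Stand-in for the tunnel's local listening socket.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))      # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]
t = threading.Thread(target=run_echo_server, args=(srv,))
t.start()

# The client side: in real use this would be ("127.0.0.1", 8080),
# the local end of the SSH tunnel.
c = socket.socket()
c.connect(("127.0.0.1", port))
c.sendall(b"hello")
reply = c.recv(16)
c.close()
t.join()
srv.close()
print(reply)
```

Anything the client writes to that local port would, with a real tunnel, emerge from the server's IP at the far end.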
|
Table of Contents.
See also the MySQL 5.0 FAQ — “MySQL, DRBD, and Heartbeat”.
Because DRBD is a Linux Kernel module it is currently not supported on platforms other than Linux.
Although you can rely on the DNS or NIS system for host resolving, in the event of a major network failure these services may not be available. If possible, add the IP address and host name of each DRBD node into the /etc/hosts file for each machine.

If the DRBD option is not already enabled in your kernel, you need to rebuild the kernel with this option. The best way to do this is to use genkernel with the --menuconfig option to select the option and then rebuild the kernel. For example, at the command line as root:

root-shell> genkernel --menuconfig all

Then through the menu options, select the option, and finally press 'y' or 'space' to select it. Then exit the menu configuration. The kernel will be rebuilt and installed. If this is a new kernel, make sure you update your bootloader accordingly. Now reboot to enable the new kernel.
To install DRBD you can choose either the pre-built binary installation packages or you can use the source packages and build from source. If you want to build from source you must have installed the source and development packages.
If you are installing using a binary distribution then you must ensure that the kernel version number of the binary package matches your currently active kernel. You can use uname to find out this information:
shell> uname -r
2.6.20-gentoo-r6
Once DRBD has been built and installed, you need to edit the
/etc/drbd.conf file and then run a number
of commands to build the block device and set up the
replication.
Although the steps below are split into those for the primary node and the secondary node, it should be noted that the drbd.conf configuration is the same on both nodes.
You must enable the universe component for your preferred Ubuntu mirror in /etc/apt/sources.list. Once DRBD has been downloaded and installed, you need to decompress and copy the default configuration file from /usr/share/doc/drbd-8.0.7/drbd.conf.bz2 into /etc/drbd.conf.
To set up a DRBD primary node you need to configure the drbd.conf file, create the metadata on the block device, and then create the file system. Remember that you must have the node information for the primary and secondary nodes in the drbd.conf file on each host. You need to configure the following information for each node:

device — the path of the logical block device that will be created by DRBD.

meta-disk — where the DRBD metadata is stored. You can set this to internal and DRBD will use the physical block device to store the information, by recording the metadata within the last sections of the disk. The exact size will depend on the size of the logical block device.

Once the primary node is configured, you should now configure your secondary node or nodes.
The configuration process for setting up a secondary node is the same as for the primary node, except that you do not have to create the file system on the secondary node device, as this information will automatically be synchronized from the primary node. Once the nodes are connected, DRBD will start copying the data from the primary node to the secondary node. Even with an empty file system this will take some time.
For administration, the main command is drbdadm. There are a number of commands supported by this tool that control the connectivity and status of the DRBD devices.
For convenience, a bash completion script is available. This will provide tab completion of drbdadm commands when you want it.
If you do not want to use the entire partition space with your DRBD block device, you can specify the size of the device explicitly. Once the configuration is complete, you now need to populate the lower-level device with the metadata information, and then start the DRBD service.
For a set-up where there is a high throughput of information being written, you may want to use bonded network connections; over a single connection, replication will be at a lower speed.
To enable bonded connections you must enable bonding within the kernel. You then need to configure the module to specify the bonded devices and then configure each new bonded device just as you would a standard network device:
To configure the bonded devices, you need to edit the /etc/modprobe.conf file (RedHat) or add a file to the /etc/modprobe.d directory. In each case you will define the parameters for the kernel module, including how the bonded device will automatically fail over. There are two parts to this: you need to set up the bonded device configuration, and then configure the original network interfaces as 'slaves' of the new bonded interface.
DRBD devices can be synchronized while they are actively being used by the primary node. Any I/O that updates on the primary node will automatically trigger replication of the modified block. In the event of a failure within an HA environment, it is highly likely that synchronization and replication will take place at the same time.
Unfortunately, if the synchronization rate is set too high, then the synchronization process will use up all the available network bandwidth between the primary and secondary nodes. In turn, the bandwidth available for replication of changed blocks is zero, which means replication will stall and I/O will block, and ultimately the application will fail or degrade.
To avoid allowing the syncer rate to consume the available network bandwidth and thereby stall the replication of changed blocks, you should set the syncer rate to less than the maximum network bandwidth.
You should use about 30% of the available bandwidth; for example, a link capable of 110MB/s would be configured as 33M (33MB/s). If your disk system works at a rate lower than your network interface, use 30% of your disk interface speed.
Depending on the application, you may wish to limit the synchronization rate. For example, on a busy server you may wish to configure a significantly slower synchronization rate to ensure the replication rate is not affected.
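As a quick sanity check of the 30% rule of thumb above (the throughput figures here are illustrative assumptions, not measurements of any real system):

```python
# Rule of thumb from the text: set the DRBD syncer rate to roughly 30%
# of the slower of the network and disk throughput.
network_mb_s = 110   # assumed network link throughput, in MB/s
disk_mb_s = 150      # assumed disk throughput, in MB/s

rate = int(0.3 * min(network_mb_s, disk_mb_s))
print("syncer rate %dM" % rate)  # matches the 33M example in the text
```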
The al-extents parameter controls the number of 4MB blocks of the underlying disk that can be written to at the same time. Increasing this parameter lowers the frequency of metadata updates, but increases the amount of data that (in the event of a failure of the primary node) will need to be completely resynchronized before replication can continue.
Heartbeat

The Heartbeat configuration requires a node entry for each server.
An optional additional set of information provides the configuration for a ping test that will check the connectivity to another host, and specifies the commands that should be executed (with access granted using the apiauth directive) in the event of a failure.
Each EC2 instance works like a virtual server to which you have full access, allowing you to configure and install your AMI in any way you choose.
To use EC2, you create an AMI based on the configuration and applications that you want. For details, see Section 14.3.2.1, “Setting Up MySQL on an EC2 AMI”.
For tips and advice on how to create a scalable EC2 environment using MySQL, including guides on setting up replication, see Section 14.3.2.3, “Deploying a MySQL Database Using EC2”.
There are some limitations of the EC2 instances that you should be aware of. Data stored on an instance will only remain in place while the machine is running. The data will survive a reboot, but if you shut down the instance, any data it contained will be lost, and the instance will be reclaimed.
Because you cannot guarantee the uptime and availability of your EC2 instances, when deploying MySQL within the EC2 environment you should treat the EC2 instances as temporary, cache-based solutions, rather than as a long-term, high availability solution. In addition to using multiple machines, you should also take advantage of other services, such as memcached, to provide additional caching for your application to help reduce the load on the MySQL server so that it can concentrate on writes. On the large and extra large instances within EC2, the RAM available can be used to provide a large memory cache for data.
Most types of scale-out topology that you would use with your own hardware can be used and applied within the EC2 environment. However, you should keep in mind the limitations and advice already given to ensure that any potential failures do not lose you any data. Also, because the relative power of each EC2 instance is so low, you should be prepared to use a larger number of instances. Each instance has both public and internal IP addresses and names; you should always use the internal addresses and names when communicating between instances. Only use public IP addresses when communicating with the outside world - for example, when publicizing your application.
To ensure reliability of your database, you should be careful that the potential for failure of an instance does not affect your application. If the EC2 instance that provides the MySQL server for a particular shard fails, then all of the data on that shard will be unavailable.
For more information on virtualization, see the following links:
When using ZFS replication to provide a constant copy of your data, you should ensure that you can recover your tables, either manually or automatically, in the event of a failure of the original system.
In the event of a failure, with MyISAM or Maria tables you may need to run REPAIR TABLE, and you might even have lost some information.
You should use a recovery-capable storage engine and a regular
synchronization schedule to reduce the risk for significant data
loss.
You can build and install memcached from the source code directly, or you can use an existing operating system package or installation.
Installing memcached from a Binary Distribution
To install memcached on a RedHat, Fedora or CentOS host, use yum:
root-shell> yum install memcached
-h
Print the help message and exit.
-i
Print the memcached and
libevent license.
-b
Run a managed instance.
-P pidfile

Save the process ID of the memcached instance into pidfile.
-f
Set the chunk size growth factor. When allocating new memory chunks, the allocated size of new chunks will be determined by multiplying the default slab size by this factor.
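A small sketch of how such a growth factor produces successive chunk-size classes (the base size and cap are made-up values for illustration, not memcached's actual defaults):

```python
def chunk_sizes(base=80, factor=1.25, cap=1024 * 1024):
    # Each new chunk class is the previous size multiplied by the factor.
    sizes, size = [], float(base)
    while size <= cap:
        sizes.append(int(size))
        size *= factor
    return sizes

print(chunk_sizes()[:5])  # -> [80, 100, 125, 156, 195]
```

A larger factor wastes more memory per item but needs fewer size classes; a smaller factor packs items more tightly.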
When using memcached you can use a number of different potential deployment strategies and topologies. The exact strategy you use will depend on your application and environment. When developing a system for deploying memcached within your system, keep the following points in mind.
Using a single memcached instance, especially for multiple clients, is generally a bad idea as it introduces a single point of failure. Instead provide at least two memcached instances so that a failure can be handled appropriately. If possible, you should create as many memcached nodes as possible. When adding and removing memcached instances from a pool, the hashing and distribution of key/value pairs may be affected. For information on how to avoid problems, see Section 14.5.2.4, “memcached Distribution Types”.
If multiple applications share the same memcached servers, you may run into issues because the same ID will probably be used by more than one application.
If the cache is too small, you will see a lot of expired items being removed from the cache even though they are in active use. You can use the statistics mechanism to get a better idea of the level of evictions (expired objects). For more information, see Section 14.
The selected server will be the same during both set and get operations.
For example, if you have three servers, A, B, and C, and you set
the value
myid, then the
memcached client will create a hash based on
the ID and select server B. When the same key is requested, the
same hash is generated, and the same server, B, will be selected
to request the value.
Because the hashing mechanism is part of the client interface, not the server interface, the hashing process and selection is very fast. Performing the hashing on the client also means that if you want to access the same data by the same ID from the same list of servers but from different client interfaces, you must use the same or compatible hashing mechanisms. If two clients are using the same client library interface, they will always generate the same hash code from the ID.
One issue with the client-side hashing mechanism is that when
using multiple servers and extending or shrinking the list of
servers that you have configured for use with
memcached, the resulting hash may change. For
example, if you have servers A, B, and C; the computed hash for
key
myid may equate to server B. If you add
another server, D, into this list, then computing the hash for
the same ID again may result in the selection of server D for
that key.
This means that servers B and D both contain the information for
key
myid, but there may be differences
between the data held by the two instances. A more significant
problem is that you will get a much higher number of
cache-misses when retrieving data as the addition of a new
server will change the distribution of keys, and this will in
turn require rebuilding the cached data on the
memcached instances and require an increase
in database reads.
For this reason, there are two common types of hashing algorithm, consistent and modula.
With consistent hashing algorithms, the same key when applied to the same list of servers will always select the same server, and when the server list changes only a small proportion of keys move.

There are some limitations with any consistent hashing algorithm. When adding servers to an existing list of configured servers, keys will be distributed to the new servers as part of the normal distribution. When removing servers from the list, the keys will be re-allocated to another server within the list, which will mean that part of the cache will need to be rebuilt. A modula hashing algorithm selects a server by first computing the hash and then choosing a server from the list of configured servers. As the list of servers changes, so the server selected when using a modula hashing algorithm will also change. The result is the behavior described above: changes to the list of servers mean keys are mapped to different servers, increasing cache misses.
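The redistribution effect of modula hashing can be seen in a few lines of Python (the choice of MD5 and the key names are arbitrary, for demonstration only):

```python
import hashlib

def pick_server(key, servers):
    # Modula hashing: hash the key, then index the server list
    # modulo its length.
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return servers[h % len(servers)]

keys = ["myid%d" % i for i in range(1000)]
before = ["A", "B", "C"]
after = ["A", "B", "C", "D"]  # one server added

moved = sum(1 for k in keys
            if pick_server(k, before) != pick_server(k, after))
print("%d of %d keys now map to a different server" % (moved, len(keys)))
```

Adding a fourth server remaps roughly three quarters of the keys, which is exactly the cache-miss storm that consistent hashing algorithms are designed to avoid.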
If you change your servers regularly, or you use a common set of servers that are shared among a large number of clients, then using a consistent hashing algorithm should help to ensure that your cache data is not duplicated and the data is evenly distributed.

DTrace probes are supported on Solaris, Mac OS X 10.5 and FreeBSD. To enable the DTrace probes in memcached, you should build from source and use the --enable-dtrace option. For more information, see Section 14.5.

conn-create(ptr)
Fired when a new connection object is created.
Arguments:
ptr — pointer to the connection object

conn-destroy(ptr)
Fired when a connection object is being destroyed.
Arguments:
ptr — pointer to the connection object

slabs-allocate(size, slabclass, slabsize, ptr)
Fired when memory is allocated from the slab allocator.
Arguments:
size — the requested size
slabclass — the request will be fulfilled in this class
slabsize — the size of each item in this class
ptr — pointer to allocated memory
slabs-allocate-failed(size, slabclass)
Failed to allocate memory (out of memory)
Arguments:
size — the requested size
slabclass — the class that
failed to fulfill the request
slabs-slabclass-allocate(slabclass)
Fired when a slab class needs more space
Arguments:
slabclass — class that needs
more memory
slabs-slabclass-allocate-failed(slabclass)
Failed to allocate memory (out of memory)
Arguments:
slabclass — the class that
failed grab more memory
slabs-free(size, slabclass, ptr)
Release memory
Arguments:
size — the size of the memory
slabclass — the class the
memory belongs to
ptr — pointer to the memory to
release
assoc-find(key, depth)
Fired when we have searched the hash table for a named key. These two elements provide an insight into how well the hash function operates. Traversals are a sign of a less optimal function, wasting CPU capacity.
Arguments:
key — the key searched for
depth — the depth in the list
of hash table
assoc-insert(key, nokeys)
Fired when a new item has been inserted.
Arguments:
key — the key just inserted
nokeys — the total number of
keys currently being stored, including the key for which
insert was called.
assoc-delete(key, nokeys)
Fired when a new item has been removed.
Arguments:
key — the key just deleted
nokeys — the total number of
keys currently being stored, excluding the key for which
delete was called.
item-link(key, size)
Fired when an item is being linked in the cache
Arguments:
key — the items key
size — the size of the data
item-unlink(key, size)
Fired when an item is being deleted
Arguments:
key — the items key
size — the size of the data
item-remove(key, size)
Fired when the refcount for an item is reduced
Arguments:
key — the items key
size — the size of the data
item-update(key, size)
Fired when the "last referenced" time is updated
Arguments:
key — the items key
size — the size of the data
item-replace(oldkey, oldsize, newkey,
newsize)
Fired when an item is being replaced with another item
Arguments:
oldkey — the key of the item to
replace
oldsize — the size of the old
item
newkey — the key of the new
item
newsize — the size of the new
item
process-command-start(connid, request,
size)
Fired when the processing of a command starts
Arguments:
connid — the connection id
request — the incoming request
size — the size of the request
process-command-end(connid, response,
size)
Fired when the processing of a command is done
Arguments:
connid — the connection id
response — the response to send
back to the client
size — the size of the response
command-get(connid, key, size)
Fired for a get-command
Arguments:
connid — connection id
key — requested key
size — size of the key's data
(or -1 if not found)
command-gets(connid, key, size, casid)
Fired for a gets command
Arguments:
connid — connection id
key — requested key
size — size of the key's data
(or -1 if not found)
casid — the casid for the item
command-add(connid, key, size)
Fired for an add-command
Arguments:
connid — connection id
key — requested key
size — the new size of the
key's data (or -1 if not found)
command-set(connid, key, size)
Fired for a set-command
Arguments:
connid — connection id
key — requested key
size — the new size of the
key's data (or -1 if not found)
command-replace(connid, key, size)
Fired for a replace-command
Arguments:
connid — connection id
key — requested key
size — the new size of the
key's data (or -1 if not found)
command-prepend(connid, key, size)
Fired for a prepend-command
Arguments:
connid — connection id
key — requested key
size — the new size of the
key's data (or -1 if not found)
command-append(connid, key, size)
Fired for an append-command
Arguments:
connid — connection id
key — requested key
size — the new size of the
key's data (or -1 if not found)
command-cas(connid, key, size, casid)
Fired for a cas-command
Arguments:
connid — connection id
key — requested key
size — size of the key's data
(or -1 if not found)
casid — the cas id requested
command-incr(connid, key, val)
Fired for incr command
Arguments:
connid — connection id
key — the requested key
val — the new value
command-decr(connid, key, val)
Fired for decr command
Arguments:
connid — connection id
key — the requested key
val — the new value
command-delete(connid, key, exptime)
Fired for a delete command
Arguments:
connid — connection id
key — the requested key
exptime — the expiry time
If you enable the thread implementation when building memcached from source, then memcached will use multiple threads to handle the network connections; an available thread will read the request and send the response. This implementation can lead to increased CPU load as threads will wake from sleep to potentially process the request.
Using threads can increase the performance on servers that have multiple CPU cores available, as the requests to update the hash table can be spread between the individual threads. However, because of the locking mechanism employed you may want to experiment with different thread values to achieve the best performance based on the number and type of requests within your given workload.
Data written through libmemcached will be available to the next client that requests it from the cache.
For a flow diagram of this sequence, see Figure 14.6.
libmemcached Base Functions
libmemcached Server Functions
libmemcached Set Functions
libmemcached Get Functions
libmemcached Behaviors
The
libmemcached library provides both C and
C++ interfaces to memcached and is also the
basis for a number of different additional API implementations,
including Perl, Python and Ruby. Understanding the core
libmemcached functions can help when using
these other interfaces.
The C library is the most comprehensive interface library for memcached, providing both the core functions and extended functionality, such as appending and prepending data.
To build and install
libmemcached, download
the
libmemcached package, run configure, and
then build and install:
shell> tar xjf libmemcached-0.21.tar.gz
shell> cd libmemcached-0.21
shell> ./configure
shell> make
shell> make install
On many Linux operating systems, you can install the corresponding libmemcached package through the usual yum, apt-get or similar commands.

To use memcached from C, you first build a list of servers and then apply this list to the memcached_st structure. The latter method is used in the following example. Once the server list has been set, you can call the functions to store or retrieve data. A simple application for setting a preset value to localhost is provided here:
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <libmemcached/memcached.h>

int main(int argc, char *argv[])
{
  memcached_server_st *servers = NULL;
  memcached_st *memc;
  memcached_return rc;
  char *key = "keystring";
  char *value = "keyvalue";

  memc = memcached_create(NULL);
  servers = memcached_servers_parse("localhost");
  memcached_server_push(memc, servers);

  rc = memcached_set(memc, key, strlen(key),
                     value, strlen(value),
                     (time_t)0, (uint32_t)0);

  if (rc == MEMCACHED_SUCCESS)
    fprintf(stderr, "Key stored successfully\n");
  else
    fprintf(stderr, "Couldn't store key: %s\n", memcached_strerror(memc, rc));

  return 0;
}
The base
libmemcached functions allow you
to create, destroy and clone the main
memcached_st structure that is used to
interface to the
memcached servers. The
main functions are defined below:
memcached_st *memcached_create (memcached_st *ptr);
Creates a new
memcached_st structure for
use with the other
libmemcached API
functions. You can supply an existing, static,
memcached_st structure, or
NULL to have a new structure allocated.
Returns a pointer to the created structure, or
NULL on failure.
void memcached_free (memcached_st *ptr);
Free the structure and memory allocated to a previously
created
memcached_st structure.
memcached_st *memcached_clone(memcached_st *clone, memcached_st *source);
Clone an existing
memcached structure from
the specified
source, copying the defaults
and list of servers defined in the structure.
The
libmemcached API uses a list of
servers, stored within the
memcached_server_st structure, to act as
the list of servers used by the rest of the functions. To use
memcached, you first create the server
list, and then apply the list of servers to a valid
libmemcached object.
Because the list of servers, and the list of servers within an
active
libmemcached object can be
manipulated separately, you can update and manage server lists
while an active
libmemcached interface is
running.
The functions for manipulating the list of servers within a
memcached_st structure are given below:
memcached_return memcached_server_add (memcached_st *ptr, char *hostname, unsigned int port);
Add a server, using the given
hostname and
port into the
memcached_st structure given in
ptr.
memcached_return memcached_server_add_unix_socket (memcached_st *ptr, char *socket);
Add a Unix socket to the list of servers configured in the
memcached_st structure.
unsigned int memcached_server_count (memcached_st *ptr);
Return a count of the number of configured servers within the
memcached_st structure.
memcached_server_st * memcached_server_list (memcached_st *ptr);
Returns an array of all the defined hosts within a
memcached_st structure.
memcached_return memcached_server_push (memcached_st *ptr, memcached_server_st *list);
Pushes an existing list of servers onto list of servers
configured for a current
memcached_st
structure. This adds servers to the end of the existing list,
and duplicates are not checked.
The
memcached_server_st structure can be
used to create a list of
memcached servers
which can then be applied individually to
memcached_st structures.
memcached_server_st * memcached_server_list_append (memcached_server_st *ptr, char *hostname, unsigned int port, memcached_return *error);
Add a server, with
hostname and
port, to the server list in
ptr. The result code is handled by the
error argument, which should point to an
existing
memcached_return variable. The
function returns a pointer to the returned list.
unsigned int memcached_server_list_count (memcached_server_st *ptr);
Return the number of servers in the server list.
void memcached_server_list_free (memcached_server_st *ptr);
Free up the memory associated with a server list.
memcached_server_st *memcached_servers_parse (char *server_strings);
Parses a string containing a list of servers, where individual
servers are separated by a comma and/or space, and where
individual servers are of the form
server[:port]. The return value is a server
list structure.
The set related functions within
libmemcached provide the same functionality
as the core functions supported by the
memcached protocol. The full definition for
the different functions is the same for all the base functions
(add, replace, prepend, append). For example, the function
definition for
memcached_set() is:
memcached_return memcached_set (memcached_st *ptr, const char *key, size_t key_length, const char *value, size_t value_length, time_t expiration, uint32_t flags);
The
ptr is the
memcached_st structure. The
key and
key_length
define the key name and length, and
value
and
value_length the corresponding value
and length. You can also set the expiration and optional
flags. For more information, see
Section 14.5.3.1.5, “
libmemcached Behaviors”.
The following table outlines the remainder of the set-related functions.
The
by_key methods add two further
arguments: the master key and its length, which are used during the
hashing stage for selecting the servers. You can see this in
the following definition:
memcached_return memcached_set_by_key(memcached_st *ptr, const char *master_key, size_t master_key_length, const char *key, size_t key_length, const char *value, size_t value_length, time_t expiration, uint32_t flags);
All the functions return a value of type
memcached_return, which you can compare
against the
MEMCACHED_SUCCESS constant.
The
libmemcached functions provide both
direct access to a single item, and a multiple-key request
mechanism that provides much faster responses when fetching a
large number of keys simultaneously.
The main get-style function, which is equivalent to the
generic
get(), is
memcached_get(). The function returns a string
pointer to the value for the corresponding key.
char *memcached_get (memcached_st *ptr, const char *key, size_t key_length, size_t *value_length, uint32_t *flags, memcached_return *error);
A multi-key get,
memcached_mget(), is also
available. Retrieving multiple key values in one block is much
quicker than fetching them with individual calls to
memcached_get(). To
start the multi-key get, call
memcached_mget():
memcached_return memcached_mget (memcached_st *ptr, char **keys, size_t *key_length, unsigned int number_of_keys);
The return value is the success of the operation. The
keys parameter should be an array of
strings containing the keys, and
key_length
an array containing the length of each corresponding key.
number_of_keys is the number of keys
supplied in the array.
To fetch the individual values, you need to use
memcached_fetch() to get each corresponding
value.
char *memcached_fetch (memcached_st *ptr, const char *key, size_t *key_length, size_t *value_length, uint32_t *flags, memcached_return *error);
The function returns the key value, with the
key,
key_length and
value_length parameters being populated
with the corresponding key and length information. The
function returns
NULL when there are no
more values to be returned. A full example, including the
populating of the key data and the return of the information,
is provided here.
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <libmemcached/memcached.h>

int main(int argc, char *argv[])
{
  memcached_server_st *servers = NULL;
  memcached_st *memc;
  memcached_return rc;
  char *keys[]= {"huey", "dewey", "louie"};
  size_t key_length[3];
  char *values[]= {"red", "blue", "green"};
  size_t value_length[3];
  unsigned int x;
  uint32_t flags;

  char return_key[MEMCACHED_MAX_KEY];
  size_t return_key_length;
  char *return_value;
  size_t return_value_length;

  memc= memcached_create(NULL);

  servers= memcached_server_list_append(servers, "localhost", 11211, &rc);
  rc= memcached_server_push(memc, servers);

  if (rc == MEMCACHED_SUCCESS)
    fprintf(stderr, "Added server successfully\n");
  else
    fprintf(stderr, "Couldn't add server: %s\n", memcached_strerror(memc, rc));

  for(x= 0; x < 3; x++)
    {
      key_length[x] = strlen(keys[x]);
      value_length[x] = strlen(values[x]);

      rc= memcached_set(memc, keys[x], key_length[x], values[x],
                        value_length[x], (time_t)0, (uint32_t)0);
      if (rc == MEMCACHED_SUCCESS)
        fprintf(stderr, "Key %s stored successfully\n", keys[x]);
      else
        fprintf(stderr, "Couldn't store key: %s\n", memcached_strerror(memc, rc));
    }

  rc= memcached_mget(memc, keys, key_length, 3);

  if (rc == MEMCACHED_SUCCESS)
    {
      while ((return_value= memcached_fetch(memc, return_key, &return_key_length,
                                            &return_value_length, &flags, &rc)) != NULL)
        {
          if (rc == MEMCACHED_SUCCESS)
            {
              fprintf(stderr, "Key %s returned %s\n", return_key, return_value);
            }
        }
    }

  return 0;
}
Running the above application:
shell> memc_multi_fetch
Added server successfully
Key huey stored successfully
Key dewey stored successfully
Key louie stored successfully
Key huey returned red
Key dewey returned blue
Key louie returned green
In addition to the main C library interface,
libmemcached also includes a number of
command line utilities that can be useful when working with
and debugging memcached applications.
All of the command line tools accept a number of arguments,
the most critical of which is
--servers,
which specifies the list of servers to connect to when
returning information.
The main tools are:
memcat — display the value for each ID given on the command line:
shell> memcat --servers=localhost hwkey
Hello world
memcp — copy the contents of a file into the cache, using the file names as the key:
shell> echo "Hello World" > hwkey
shell> memcp --servers=localhost hwkey
shell> memcat --servers=localhost hwkey
Hello world
memrm — remove an item from the cache:
shell> memcat --servers=localhost hwkey
Hello world
shell> memrm --servers=localhost hwkey
shell> memcat --servers=localhost hwkey
memslap — test the load on one or more memcached servers, simulating get/set and multiple client operations. For example, you can simulate the load of 100 clients performing get operations:
shell> memslap --servers=localhost --concurrency=100 --flush --test=get
memslap --servers=localhost --concurrency=100 --flush --test=get Threads connecting to servers 100
Took 13.571 seconds to read data
memflush — flush (empty) the contents of the memcached cache.
shell> memflush --servers=localhost
The
Cache::Memcached module provides a native
interface to the Memcache protocol, and provides support for the
core functions offered by memcached. You
should install the module using your host's native package
management system. Alternatively, you can install the module
using
CPAN:
root-shell> perl -MCPAN -e 'install Cache::Memcached'
To use memcached from Perl through the
Cache::Memcached module, you first need to
create a new Cache::Memcached object that defines the list of
servers and other parameters for the connection. You can also
specify the server with a weight (indicating how much more
frequently the server should be used during hashing) by
specifying:
When calling the script for a film that does not already exist in the cache, you should get this result:
shell> memcached-sakila.pl "ROCK INSTINCT"
Film data loaded from database and cached
When accessing a film that has already been added to the cache:
shell> memcached-sakila.pl "ROCK INSTINCT"
Film data loaded from Memcached
The Python memcache module provides a native interface to the memcached protocol from within Python.
The data is automatically serialized using
cPickle/
pickle. This means you can store native Python objects directly, but be aware that serialization of Python data may be incompatible with other interfaces and languages. You can change the serialization module used during initialization, for example to use JSON, which can be exchanged more easily with other languages.
PHP provides support for the Memcache functions through a PECL
extension. To enable the PHP
memcache
extensions, you must build PHP using the
--enable-memcache option to
configure when building from source.
If you are installing on a RedHat based server, you can install
the
php-pecl-memcache RPM:
root-shell> yum install php-pecl-memcache
On Debian based distributions, use the
php-memcache package.
You can set global runtime configuration options by specifying
the values in the following table within your
php.ini file.
To create a connection to a memcached server,
you need to create a new
Memcache object and
then specify the connection options. For example:

<?php
$cache = new Memcache;
$cache->connect('localhost',11211);
?>

You can enable persistent connections to
memcached instances by setting the
$persistent argument to true. This is the
default setting, and will cause the connections to remain open.
To help control the distribution of keys to different instances,
you should use the global
memcache.hash_strategy setting. This sets the
hashing mechanism used to select a server for a given key. If you add or remove servers from the list in a running instance (for example, when starting another script that mentions additional servers), the connections will be shared, but the script will only select among the instances explicitly configured within the script.
To ensure that changes to the server list within a script do not cause problems, make sure to use the consistent hashing mechanism.
The serialized form of the data will not be compatible with
other interfaces and languages. If
this is a problem, use JSON or a similar nonbinary serialization
format.
On most systems you can download the package and use the
jar directly. On OpenSolaris, use
pkg to install the
SUNWmemcached-java package. To set the hashing algorithm used when selecting servers, use
pool.setHashingAlg():
pool.setHashingAlg( SockIOPool.NEW_COMPAT_HASH );
Valid values are
NEW_COMPAT_HASH,
OLD_COMPAT_HASH and
NATIVE_HASH, which are basic modulo hashing
algorithms. For a consistent hashing algorithm, use
CONSISTENT_HASH. These constants are
equivalent to the corresponding hash settings within
libmemcached.
The memcached MySQL User Defined Functions (UDFs) enable you to set and retrieve objects from within MySQL 5.0 or greater.
To install the MySQL memcached UDFs, download
the UDF package from.
You will need to unpack the package and run
configure to configure the build process.
When running configure, use the
--with-mysql option and specify the location
of the mysql_config command:
shell> tar zxf memcached_functions_mysql-0.5.tar.gz
shell> cd memcached_functions_mysql-0.5
shell> ./configure --with-mysql-config=/usr/local/mysql/bin/mysql_config
Now build and install the functions:
shell> make
shell> make install
You may want to copy the MySQL memcached UDFs into your MySQL plugins directory:
shell> cp /usr/local/lib/libmemcached_functions_mysql* /usr/local/mysql/lib/mysql/plugins/
Once installed, you must initialize the function within MySQL
using
CREATE and specifying the return value
and library. For example, to add the
memc_get() function:
mysql> CREATE FUNCTION memc_get RETURNS STRING SONAME "libmemcached_functions_mysql.so";
You must repeat this process for each function that you want to
provide access to within MySQL. Once you have created the
association, the information will be retained, even over
restarts of the MySQL server. You can simplify the process by
using the SQL script provided in the
memcached UDFs package:
shell> mysql <sql/install_functions.sql
Alternatively, if you have Perl installed, then you can use the supplied Perl script, which will check for the existence of each function and create the function/library association if it has not already been defined:
shell> utils/install.pl --silent
The
--silent option installs everything
automatically. Without this option, the script will ask whether
you want to install each of the available functions.
The interface remains consistent with the other APIs and
interfaces. To set up a list of servers, use the
memc_servers_set() function, which accepts a
single string containing a comma-separated list of servers:
mysql> SELECT memc_servers_set('192.168.0.1:11211,192.168.0.2:11211');
The list of servers used by the memcached UDFs is not persistent over restarts of the MySQL server. If the MySQL server fails, then you must re-set the list of memcached servers.
To set a value, use
memc_set:
mysql> SELECT memc_set('myid', 'myvalue');
To retrieve a stored value:
mysql> SELECT memc_get('myid');
The list of functions supported by the UDFs, in relation to the standard protocol functions, is shown in the following table.
The respective
*_by_key() functions are
useful when you want to store a specific value into a specific
memcached server, possibly based on a
differently calculated or constructed key.
The
memcached UDFs include some additional
functions:
memc_server_count()
Returns a count of the number of servers in the list of registered servers.
memc_servers_set_behavior(behavior_type,
value),
memc_set_behavior(behavior_type,
value)
Set behaviors for the list of servers. These behaviors are
identical to those provided by the
libmemcached library. For more
information on
libmemcached behaviors,
see Section 14.5.3.1, “Using
libmemcached”.
You can use the behavior name as the
behavior_type:
mysql> SELECT memc_servers_behavior_set("MEMCACHED_BEHAVIOR_KETAMA",1);
memc_servers_behavior_get(behavior_type),
memc_get_behavior(behavior_type, value)
Returns the value for a given behavior.
memc_list_behaviors()
Returns a list of the known behaviors.
memc_list_hash_types()
Returns a list of the supported key-hashing algorithms.
memc_list_distribution_types()
Returns a list of the supported distribution types to be used when selecting a server to use when storing a particular key.
memc_libmemcached_version()
Returns the version of the
libmemcached
library.
memc_stats()
Returns the general statistics information from the server.
The casunique argument is a unique
64-bit value identifying an existing entry, which is used
to compare against the existing value. The response from the server
will be one line, specifying the status or error information. For more information, see Table 14.2, “memcached Protocol Responses”.
Retrieval commands:
get,
gets
Retrieval commands take the form:
get key1 [key2 .... keyn] gets key1 [key2 ... keyn]
You can supply multiple keys to the commands, with each requested key separated by whitespace.
The server responds with an information line for each key found, which is immediately followed by the value data block. For example:
get xyzkey\r\n
VALUE xyzkey 0 6\r\n
abcdef\r\n
If you have requested multiple keys, an information line and data block
are returned for each key found. For the
delete command, an optional
time argument specifies the period during
which add and
replace commands on the deleted key will fail;
set operations will succeed. After
this period, the key will be deleted permanently and
all commands will be accepted.
If not supplied, the value is assumed to be zero (delete immediately).
noreply — tells the server
not to reply to the command.
Responses to the command will be one of:
NOT_FOUND — the specified key
could not be located.
value — the new value of the
specified key.
Values are assumed to be unsigned. For
decr operations the value will never be
decremented below 0. For
incr
operations, the value will wrap around the 64-bit
maximum.
Statistics commands:
stats
The
stats command provides detailed
statistical information about the current status of the
memcached instance and the data it is
storing.
Statistics commands take the form:
STAT [name] [value]

Where:

name — the name of the statistic.

value — the corresponding value.

For more information on the statistics returned, see Section 14.5.4, “Getting memcached Statistics”.
For reference, a list of the different commands supported and their formats is provided below.
When sending a command to the server, the response from the
server will be one of the settings in the following table. All
response values from the server are terminated by
\r\n.

The statistics are divided into a number of groups:

General statistics, see Section 14.5.4.1, “memcached General Statistics”.
Slab statistics (
slabs), see
Section 14.5.4.2, “memcached Slabs Statistics”.
Item statistics (
items), see
Section 14.5.4.3, “memcached Item Statistics”.
Size statistics (
sizes), see
Section 14.5.4.4, “memcached Size Statistics”.
To get the
slabs statistics, use the
stats slabs command, or the API equivalent.
The slab statistics provide you with information about the slabs that have been created and allocated for storing information within the cache. You get information both on each individual slab-class and total statistics for the whole slab.
STAT 1:chunk_size 104
STAT 1:chunks_per_page 10082
STAT 1:total_pages 1
STAT 1:total_chunks 10082
STAT 1:used_chunks 10081
STAT 1:free_chunks 1
STAT 1:free_chunks_end 10079
STAT 9:chunk_size 696
STAT 9:chunks_per_page 1506
STAT 9:total_pages 63
STAT 9:total_chunks 94878
STAT 9:used_chunks 94878
STAT 9:free_chunks 0
STAT 9:free_chunks_end 0
STAT active_slabs 2
STAT total_malloced 67083616
END
Individual stats for each slab class are prefixed with the slab ID. A unique ID is given to each allocated slab from the smallest size up to the largest. The prefix number indicates the slab class number in relation to the calculated chunk from the specified growth factor. Hence in the example, 1 is the first chunk size and 9 is the 9th chunk allocated size.
The different parameters returned for each chunk size and the totals are shown in the following table.
The key values in the slab statistics are the
chunk_size, and the corresponding
total_chunks and
used_chunks parameters. These give an
indication of the size usage of the chunks within the system.
Remember that one key/value pair will be placed into a chunk of
a suitable size.
From these stats you can get an idea of your size and chunk allocation and distribution. If you are storing many items with a number of largely different sizes, then you may want to adjust the chunk size growth factor to increase in larger steps to prevent chunk and memory wastage. A good indication of a bad growth factor is a high number of different slab classes, but with relatively few chunks actually in use within each slab. Increasing the growth factor will create fewer slab classes and therefore make better use of the allocated pages.
To get the
items statistics, use the
stats items command, or the API equivalent.
The
items statistics give information about
the individual items allocated within a given slab class.
STAT items:2:number 1
STAT items:2:age 452
STAT items:2:evicted 0
STAT items:2:outofmemory 0
STAT items:27:number 1
STAT items:27:age 452
STAT items:27:evicted 0
STAT items:27:outofmemory 0
The prefix number against each statistic relates to the
corresponding chunk size, as returned by the
stats
slabs statistics. The result is a display of the
number of items stored within each chunk within each slab size,
and specific statistics about their age, eviction counts, and
out of memory counts. A summary of the statistics is given in
the following table.
Item level statistics can be used to determine how many items are stored within a given slab and their freshness and recycle rate. You can use this to help identify whether there are certain slab classes that are triggering a much larger number of evictions than others.
To get size statistics, use the
stats sizes
command, or the API equivalent.
The size statistics provide information about the sizes and number of items of each size within the cache. The information is returned as two columns, the first column is the size of the item (rounded up to the nearest 32 byte boundary), and the second column is the count of the number of items of that size within the cache:
96 35
128 38
160 807
192 804
224 410
256 222
288 83
320 39
352 53
384 33
416 64
448 51
480 30
512 54
544 39
576 10065
Running this statistic will lock up your cache as each item is read from the cache and its size calculated. On a large cache, this may take some time and prevent any set or get operations until the process completes.
The item size statistics are useful only to determine the sizes of the objects you are storing. Since the actual memory allocation is relevant only in terms of the chunk size and page size, the information will only be useful during a careful debugging or diagnostic session.
Questions
14.5.5.1: How is an event such as a crash of one of the memcached servers handled by the memcached client?
14.5.5.2: What's a recommended hardware config for a memcached server? Linux or Windows?
14.5.5.3: memcached is fast - is there any overhead in not using persistent connections? If persistent is always recommended, what are the downsides (for example, locking up)?
14.5.5.4: How expensive is it to establish a memcache connection? Should those connections be pooled?
14.5.5.5: How will the data be handled when the memcached server is down?
14.5.5.6: Can memcached be run on a Windows environment?
14.5.5.7: What is the max size of an object you can store in memcache and is that configurable?
14.5.5.8: What are best practices for testing an implementation, to ensure that it is an improvement over the MySQL query cache, and to measure the impact of memcached configuration changes? And would you recommend keeping the configuration very simple to start?
14.5.5.9: Can MySQL actually trigger/store the changed data to memcached?
14.5.5.10: So the responsibility lies with the application to populate and get records from the database as opposed to being a transparent cache layer for the db?
14.5.5.11: Is compression available?
14.5.5.12: Is there file socket support for memcached, to restrict use from the localhost to the local memcached server?
14.5.5.13: Are there any, or are there any plans to introduce, a framework to hide the interaction of memcached from the application; that is, within hibernate?
14.5.5.14: What are the advantages of using UDFs when the get/sets are manageable from within the client code rather than the db?
14.5.5.15: Is memcached typically a better solution for improving speed than MySQL Cluster and/or MySQL Proxy?
14.5.5.17:
Does the
-L flag automatically sense how much
memory is being used by other memcached instances?
14.5.5.18:
Is the data inside of
memcached secure?
14.5.5.19: Can we implement different types of memcached as different nodes in the same server - so can there be deterministic and non deterministic in the same server?
14.5.5.20:
How easy is it to introduce
memcached to an
existing enterprise application instead of inclusion at project
design?
14.5.5.21:
Can memcached work with
ASPX?
14.5.5.22: If I have an object larger than a MB, do I have to manually split it or can I configure memcached to handle larger objects?
14.5.5.23: How does memcached compare to nCache?
14.5.5.24: Doing a direct telnet to the memcached port, is that just for that one machine, or does it magically apply across all nodes?
14.5.5.25: Is memcached more effective for video and audio as opposed to textual read/writes?
14.5.5.27: Do the memcache UDFs work under 5.1?
14.5.5.28:
Is it true
memcached will be much more
effective with db-read-intensive applications than with
db-write-intensive applications?
14.5.5.29: How are auto-increment columns in the MySQL database coordinated across multiple instances of memcached?
14.5.5.30: If you log a complex class (with methods that do calculation etc) will the get from Memcache re-create the class on the way out?
Questions and Answers
14.5.5.1: How is an event such as a crash of one of the memcached servers handled by the memcached client?
There is no automatic handling of this. If your client fails to get a response from a server then it should fall back to loading the data from the MySQL database.
The client APIs all provide the ability to add and remove memcached instances on the fly. If within your application you notice that a memcached server is no longer responding, you can remove the server from the list of servers, and keys will automatically be redistributed to another memcached server in the list. If retaining the cache content on all your servers is important, make sure you use an API that supports a consistent hashing algorithm. For more information, see Section 14.5.2.4, “memcached Distribution Types”.
14.5.5.2: What's a recommended hardware config for a memcached server? Linux or Windows?
memcached is only available on Unix/Linux, so using a Windows machine is not an option. Outside of this, memcached has a very low processing overhead. All that is required is spare physical RAM capacity. The point is not that you should necessarily deploy a dedicated memcached server. If you have web, application, or database servers that have spare RAM capacity, then use them with memcached.
If you want to build and deploy dedicated memcached servers, use a relatively low-power CPU, lots of RAM, and one or more Gigabit Ethernet interfaces.
14.5.5.3: memcached is fast - is there any overhead in not using persistent connections? If persistent is always recommended, what are the downsides (for example, locking up)?
If you don't use persistent connections when communicating with memcached, there will be a small increase in latency from opening the connection each time. The effect is comparable to using nonpersistent connections with MySQL.
In general, the chance of locking or other issues with persistent connections is minimal, because there is very little locking within memcached. If there is a problem, eventually your request will time out and return no result, so your application will need to load from MySQL again.
14.5.5.4: How expensive is it to establish a memcache connection? Should those connections be pooled?
Opening the connection is relatively inexpensive, because there is no security, authentication or other handshake taking place before you can start sending requests and getting results. Most APIs support a persistent connection to a memcached instance to reduce the latency. Connection pooling would depend on the API you are using, but if you are communicating directly over TCP/IP, then connection pooling would provide some small performance benefit.
14.5.5.5: How will the data be handled when the memcached server is down?
The behavior is entirely application dependent. Most applications will fall back to loading the data from the database (just as if they were updating the memcached information). If you are using multiple memcached servers, you may also want to remove a downed server from the list to prevent it affecting performance. This is because the client will still attempt to communicate with the memcached server that corresponds to the key you are trying to load.
14.5.5.6: Can memcached be run on a Windows environment?
No. Currently memcached is available only on the Unix/Linux platform. There is an unofficial port available, see.
14.5.5.7: What is the max size of an object you can store in memcache and is that configurable?
The default maximum object size is 1MB. If you want to increase
this size, you have to re-compile memcached.
You can modify the value of the
POWER_BLOCK
within the
slabs.c file within the source.
14.5.5.8: What are best practices for testing an implementation, to ensure that it is an improvement over the MySQL query cache, and to measure the impact of memcached configuration changes? And would you recommend keeping the configuration very simple to start?
The best way to test the performance is to start up a memcached instance. First, modify your application so that it stores the data just before the data is about to be used or displayed into memcached. Since the APIs handle the serialization of the data, it should just be a one-line modification to your code. Then, modify the start of the process that would normally load that information from MySQL with the code that requests the data from memcached. If the data cannot be loaded from memcached, default to the MySQL process.
All of the changes required will probably amount to just a few lines of code. To get the best benefit, make sure you cache entire objects (for example, all the components of a web page, blog post, discussion thread, and so on), rather than using memcached as a simple cache of individual rows of MySQL tables. You should see performance benefits almost immediately.
Keeping the configuration very simple at the start, or even over the long term, is very easy with memcached. Once you have the basic structure up and running, the only addition you may want to make is to add more servers into the list of servers used by your clients. You don't need to manage the memcached servers, and there is no complex configuration, just add more servers to the list and let the client API and the memcached servers make the decisions.
14.5.5.9: Can MySQL actually trigger/store the changed data to memcached?
Yes. You can use the MySQL UDFs for memcached and either write statements that directly set the values in the memcached server, or use triggers or stored procedures to do it for you. For more information, see Section 14.5.3.7, “Using the MySQL memcached UDFs”
14.5.5.10: So the responsibility lies with the application to populate and get records from the database as opposed to being a transparent cache layer for the db?
Yes. You load the data from the database and write it into the cache provided by memcached. Using memcached as a simple database row cache, however, is probably inefficient. The best way to use memcached is to load all of the information from the database relating to a particular object, and then cache the entire object. For example, in a blogging environment, you might load the blog, associated comments, categories and so on, and then cache all of the information relating to that blog post. The reading of the data from the database will require multiple SQL statements and probably multiple rows of data to complete, which is time consuming. Loading the entire blog post and the associated information from memcached is just one operation and doesn't involve using the disk or parsing the SQL statement.
14.5.5.11: Is compression available?
Yes. Most of the client APIs support some sort of compression, and some even allow you to specify the threshold at which a value is deemed appropriate for compression during storage.
14.5.5.12: Is there file socket support for memcached, to restrict use from the localhost to the local memcached server?
You can use the
-s option to
memcached to specify the location of a file
socket. This automatically disables network support.
14.5.5.13: Are there any, or are there any plans to introduce, a framework to hide the interaction of memcached from the application; that is, within hibernate?
There are lots of projects working with memcached. There is a Google Code implementation of Hibernate and memcached working together. See.
14.5.5.14: What are the advantages of using UDFs when the get/sets are manageable from within the client code rather than the db?
Sometimes you want to be able to update the information within memcached based on a generic database activity, rather than relying on your client code. For example, you may want to update status or counter information in memcached through the use of a trigger or stored procedure. For some situations and applications the existing use of a stored procedure for some operations means that updating the value in memcached from the database is easier than separately loading and communicating that data to the client just so the client can talk to memcached.
In other situations, when you are using a number of different clients and different APIs, you don't want to have to write (and maintain) the code required to update memcached in all the environments. Instead, you do this from within the database and the client never gets involved.
14.5.5.15: Is memcached typically a better solution for improving speed than MySQL Cluster and/or MySQL Proxy?
Both MySQL Cluster and MySQL Proxy still require access to the underlying database to retrieve the information. This implies both a parsing overhead for the statement and, often, disk based access to retrieve the data you have selected.
The advantage of memcached is that you can store entire objects or groups of information that may require multiple SQL statements to obtain. Retrieving the result of 20 SQL statements formatted into a structure that your application can use directly, without requiring any additional processing, is always going to be faster than building that structure by loading the rows from a database.
In general, the time difference between getting data from the MySQL Query Cache and getting the exact same data from memcached is very small.
However, the benefit of memcached is that you can store any information, including the formatted and processed results of many queries into a single memcached key. Even if all the queries that you executed could be retrieved from the Query Cache without having to go to disk, you would still be running multiple queries (with network and other overhead) compared to just one for the memcached equivalent. If your application uses objects, or does any kind of processing on the information, with memcached you can store the post-processed version, so the data you load is immediately available to be used. With data loaded from the Query Cache, you would still have to do that processing.
In addition to these considerations, keep in mind that keeping data in the MySQL Query Cache is difficult as you have no control over the queries that are stored. This means that a slightly unusual query can temporarily clear a frequently used (and normally cached) query, reducing the effectiveness of your Query Cache. With memcached you can specify which objects are stored, when they are stored, and when they should be deleted giving you much more control over the information stored in the cache.
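The pattern described in this answer — cache the post-processed result of several queries under a single memcached key, and rebuild it only on a miss — can be sketched as follows. This is an illustrative sketch only: the FakeMemcached class stands in for a real memcached client library, and load_dashboard is a hypothetical loader representing the multiple SQL statements and post-processing.

```python
import json

class FakeMemcached:
    """Stand-in for a real memcached client, exposing the same
    get/set calls a typical client library provides."""
    def __init__(self):
        self._store = {}
    def get(self, key):
        return self._store.get(key)
    def set(self, key, value, time=0):
        self._store[key] = value

def load_dashboard(user_id, db_queries_run):
    # Hypothetical loader: in a real application this would run
    # several SQL queries and combine the rows into one structure.
    db_queries_run.append(user_id)
    return {"user": user_id, "unread": 3, "friends": [1, 2]}

def get_dashboard(mc, user_id, db_queries_run):
    key = "dashboard:%d" % user_id
    cached = mc.get(key)
    if cached is not None:
        return json.loads(cached)            # post-processed, ready to use
    data = load_dashboard(user_id, db_queries_run)
    mc.set(key, json.dumps(data), time=300)  # cache for 5 minutes
    return data
```

On a hit the structure comes back ready to use; the database (and any per-row processing) is touched only when the key is missing or has expired.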
14.5.5.17: Does the -L flag automatically sense how much memory is being used by other memcached instances?
No. There is no communication or sharing of information between memcached instances.
14.5.5.18: Is the data inside of memcached secure?
No, there is no security required to access or update the information within a memcached instance, which means that anybody with access to the machine can read, view and potentially update the information. If you want to keep the data secure, you can encrypt and decrypt the information before storing it. If you want to restrict the users who can connect to the server, your only choice is to disable network access, or to use iptables (or similar) to restrict access to the memcached ports to a select set of hosts.
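The encrypt-before-store suggestion above can be sketched like this. The base64 "cipher" used here is a placeholder only — base64 is an encoding, not encryption — so in practice you would swap in a real cipher (AES, for example) from a cryptography library; a plain dictionary stands in for the memcached client.

```python
import base64

def encrypt(plaintext):
    # Placeholder only: base64 is an encoding, NOT encryption.
    # Replace with a real cipher before relying on this for security.
    return base64.b64encode(plaintext.encode("utf-8"))

def decrypt(ciphertext):
    return base64.b64decode(ciphertext).decode("utf-8")

def secure_set(mc, key, value):
    # mc stands in for a memcached client; only the encrypted
    # form ever reaches the cache server.
    mc[key] = encrypt(value)

def secure_get(mc, key):
    stored = mc.get(key)
    return decrypt(stored) if stored is not None else None
```

The point of the sketch is where the hooks go: the application sees plaintext, while the cache only ever holds the transformed value.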
14.5.5.19: Can we implement different types of memcached as different nodes in the same server - so can there be deterministic and non deterministic in the same server?
Yes. You can run multiple instances of memcached on a single server, and in your client configuration you choose the list of servers you want to use.
14.5.5.20: How easy is it to introduce memcached to an existing enterprise application, rather than including it at project design?
In general, it is very easy. In many languages and environments the changes to the application will be just a few lines: first attempt to read from the cache when loading data, falling back to the old method on a miss, and then update the cache once the data has been read.
memcached is designed to be deployed very easily, and you shouldn't require significant architectural changes to your application to use memcached.
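One way to keep those changes to "just a few lines" in an existing code base is a wrapper that tries the cache first and falls back to the original function. Everything here is illustrative: the module-level cache dictionary stands in for a memcached connection, and load_price represents an existing loader that would normally run a SELECT.

```python
import functools
import pickle

cache = {}  # stand-in for a memcached client with get/set semantics

def memoize_in_cache(key_prefix):
    """Wrap an existing loader so it reads from the cache first and
    falls back to the original code path on a miss."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(arg):
            key = "%s:%s" % (key_prefix, arg)
            hit = cache.get(key)
            if hit is not None:
                return pickle.loads(hit)      # served from the cache
            result = func(arg)                # old code path, unchanged
            cache[key] = pickle.dumps(result)
            return result
        return wrapper
    return decorator

calls = []

@memoize_in_cache("price")
def load_price(item_id):
    calls.append(item_id)  # imagine a SELECT running here
    return item_id * 10
```

Call sites keep calling load_price() exactly as before; only the decorator line is new.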
14.5.5.21: Can memcached work with ASPX?
There are ports and interfaces for many languages and environments. ASPX relies on an underlying language such as C# or Visual Basic, and if you are using ASP.NET then there is a C# memcached library you can use.
14.5.5.22: If I have an object larger than 1MB, do I have to manually split it, or can I configure memcached to handle larger objects?
You would have to manually split it. memcached is very simple, you give it a key and some data, it tries to cache it in RAM. If you try to store more than the default maximum size, the value is just truncated for speed reasons.
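A manual split usually means storing the value as fixed-size chunks under derived keys, plus a small index entry holding the chunk count. The sketch below uses a tiny chunk size (and a plain dictionary in place of memcached) purely for illustration; a real implementation would use a chunk size just under 1MB and must treat a missing chunk as a full cache miss, since chunks can be evicted independently.

```python
CHUNK = 4  # in practice just under 1MB; tiny here for illustration

def set_large(mc, key, data):
    # Split the value into fixed-size chunks under derived keys.
    chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]
    for n, part in enumerate(chunks):
        mc["%s:%d" % (key, n)] = part
    mc[key] = len(chunks)  # index entry: number of chunks

def get_large(mc, key):
    count = mc.get(key)
    if count is None:
        return None
    parts = []
    for n in range(count):
        part = mc.get("%s:%d" % (key, n))
        if part is None:       # a chunk was evicted: treat as a miss
            return None
        parts.append(part)
    return "".join(parts)
```

Treating any missing chunk as a miss is the important design choice: a partially reassembled value is worse than a clean fallback to the database.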
14.5.5.23: How does memcached compare to nCache?
The main benefit of memcached is that it is very easy to deploy and works with a wide range of languages and environments, including .NET, Java, Perl, Python, PHP, even MySQL. memcached is also very lightweight in terms of system requirements, and you can easily add as many or as few memcached servers as you need without changing the individual configuration. memcached does, however, require additional modifications to the application to take advantage of functionality such as multiple memcached servers.
14.5.5.24: Doing a direct telnet to the memcached port, is that just for that one machine, or does it magically apply across all nodes?
Just one. There is no communication between different instances of memcached, even if each instance is running on the same machine.
14.5.5.25: Is memcached more effective for video and audio as opposed to textual read/writes?
memcached doesn't care what information you are storing. To memcached, any value you store is just a stream of data. Remember, though, that the maximum size of an object you can store in memcached without modifying the source code is 1MB, so its usability with audio and video content is probably significantly reduced. Also remember that memcached is a solution for caching information for reading. It shouldn't be used for writes, except when updating the information in the cache.
You would need to test your application using the different methods to determine this information. You may find that the default serialization within PHP may allow you to store DOM objects directly into the cache.
14.5.5.27: Do the memcache UDFs work under 5.1?
Yes.
14.5.5.28: Is it true memcached will be much more effective with db-read-intensive applications than with db-write-intensive applications?
Yes. memcached plays no role in database writes; it is a method of caching data already read from the database in RAM.
14.5.5.29: How are auto-increment columns in the MySQL database coordinated across multiple instances of memcached?
They aren't. There is no relationship between MySQL and memcached unless your application (or, if you are using the MySQL UDFs for memcached, your database definition) creates one.
If you are storing information based on an auto-increment key into multiple instances of memcached, the information will only be stored on one of the memcached instances anyway. The client uses the key value to determine on which memcached instance to store the information; it doesn't store the same information across all the instances, as that would be a waste of cache memory.
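The server-selection step mentioned above can be illustrated with a simple modulo hash: the client hashes the key and maps the result onto its server list, so each key lives on exactly one instance. The server list is made up, and real clients typically use a better distribution scheme (such as consistent hashing), so treat this as a sketch of the idea only.

```python
import zlib

servers = ["10.0.0.1:11211", "10.0.0.2:11211", "10.0.0.3:11211"]

def pick_server(key):
    # Hash the key and map it onto the server list. Every client
    # using the same list and hash picks the same server, so each
    # value is stored on exactly one instance.
    return servers[zlib.crc32(key.encode("utf-8")) % len(servers)]
```

A plain modulo scheme remaps most keys when the server list changes, which is why production clients prefer consistent hashing.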
14.5.5.30: If you store a complex class (with methods that do calculation, and so on), will the get from memcached re-create the class on the way out?
In general, yes. If the serialization method within the API/language that you are using supports it, then methods and other information will be stored and retrieved.
MySQL Proxy is an application that communicates over the network using the MySQL network protocol and provides communication between one or more MySQL servers and one or more MySQL clients. Because it sits between the client and the server, the proxy can monitor, analyze, and transform their communication.
MySQL Proxy is currently an Alpha release and should not be used within production environments.
MySQL Proxy is compatible with MySQL 5.0.x or later. Testing has not been performed with Version 4.1. Please provide feedback on your experiences via the MySQL Proxy Forum.
MySQL Proxy is currently available as a pre-compiled binary for the following platforms:
Linux (including RedHat, Fedora, Debian, SuSE) and derivatives.
Mac OS X
FreeBSD
IBM AIX
Sun Solaris
Microsoft Windows (including Microsoft Windows XP, and Microsoft Windows Server 2003).
You can also obtain the source code from the public Subversion repository.
If you download the binary packages then you need only extract the package and copy the mysql-proxy file to your desired location. For example:

shell> tar zxf mysql-proxy-0.5.0.tar.gz
shell> cp ./mysql-proxy-0.5.0/sbin/mysql-proxy /usr/local/sbin

If you have downloaded the source package, you must compile the proxy first:

shell> tar zxf mysql-proxy-0.5.0.tar.gz
shell> cd mysql-proxy-0.5.0
shell> ./configure
shell> make

This builds the mysql-proxy binary within the current directory.
To start mysql-proxy you can just run the command directly. However, for most situations you will want to specify at the very least the address/host name and port number of the backend MySQL server to which the MySQL Proxy should pass on queries.
You can get a list of the supported command-line options using the
--help-all command-line option. The majority of
these options set up the environment, either in terms of the
address/port number that mysql-proxy should
listen on for connections, or the onward connection to a MySQL
server. A full description of the options is shown below:
--help-all — show all help options.
--help-admin — show options for the
admin-module.
--help-proxy — Show options for the
proxy-module.
--admin-address=host:port — specify
the host name (or IP address) and port for the administration
port. The default is
localhost:4041.
--proxy-address=host:port — the
listening host name (or IP address) and port of the proxy
server. The default is
localhost:4040.
--proxy-backend-addresses=host:port — the host name (or IP address) and port of the backend MySQL server to which queries are forwarded.

--proxy-read-only-backend-addresses=host:port — the host name (or IP address) and port of a backend MySQL server that should be used only for read queries.

--proxy-skip-profiling — disables
profiling of queries (tracking time statistics). The default
is for tracking to be enabled.
--proxy-fix-bug-25371 — gets round
an issue when connecting to a MySQL server later than 5.1.12
when using a MySQL client library of any earlier version.
--proxy-lua-script=file — specify the name of the Lua script file to be loaded. Note that the script is read when a client connects, not when the proxy starts.
--daemon — starts the proxy in daemon
mode.
--pid-file=file — sets the name of
the file to be used to store the process ID.
--version — show the version number.
read_query_result() is called when a result set is returned from the server, but only if you injected queries within the read_query() function; if you did not inject queries into the queue, then this function is not triggered. You can use this to edit the result set, or to remove or filter the result sets generated from additional queries you injected into the queue when using read_query().
The table below describes the direction of flow of information at the point when the function is triggered.
By default, each function returns a value indicating that the packet, query, or result set should be forwarded to the client or server unchanged. A function can instead return proxy.PROXY_SEND_RESULT to send a resultset directly to the client without ever sending the original query to the server.
In addition to these functions, a number of built-in structures provide control over how MySQL Proxy forwards on queries and returns the results by providing a simplified interface to elements such as the list of queries and the groups of result sets that are returned.
The figure below gives an example of how the proxy might be used when injecting queries into the query queue. Because the proxy sits between the client and MySQL server, what the proxy sends to the server, and the information that the proxy ultimately returns to the client do not have to match or correlate. Once the client has connected to the proxy, the following sequence occurs for each individual query sent by the client.
The client submits one query to the proxy, the
read_query() function within the proxy
is triggered. The function adds the query to the query
queue.
Once manipulation by
read_query() has
completed, the queries are submitted, sequentially, to the
MySQL server.
The MySQL server returns the results from each query, one
result set for each query submitted. The
read_query_result() function is
triggered for each result set, and each invocation can
decide which result set to return to the client.
For example, you can queue additional queries into the global query queue to be processed by the server. This can be used to add statistical information by adding queries before and after the original query, turning the original query:
SELECT * FROM City;
Into a sequence of queries:
SELECT NOW(); SELECT * FROM City; SELECT NOW();
You can also modify the original statement, for example to add
EXPLAIN to each statement
executed to get information on how the statement was processed,
again altering our original SQL statement into a number of
statements:
SELECT * FROM City; EXPLAIN SELECT * FROM City;
In both of these examples, the client would have received more result sets than expected. Regardless of how you manipulate the incoming query and the returned result, the number of queries returned by the proxy must match the number of original queries sent by the client.
You could adjust the client to handle the multiple result sets
sent by the proxy, but in most cases you will want the existence
of the proxy to remain transparent. To ensure that the number of
queries and result sets match, you can use the MySQL Proxy
read_query_result() to extract the
additional result set information and return only the result set
the client originally requested back to the client. You can
achieve this by giving each query that you add to the query
queue a unique ID, and then filter out queries that do not match
the original query ID when processing them with
read_query_result().
There are a number of internal structures that you can access and manipulate within your Lua scripts. The proxy.connection object is read only, and provides information about the current connection.
The attributes of the proxy.backends table are shown in this table.
The
proxy.queries object is a queue
representing the list of queries to be sent to the server. The
queue is not populated automatically, but if you do not
explicitly populate the queue then queries are passed on to the
backend server verbatim. Also, if you do not populate the query
queue by hand, then the
read_query_result()
function is not triggered.
The following methods are supported for populating the
proxy.queries object.
For example, you could append a query packet to the proxy.queries queue by using the append() method:
proxy.queries:append(1,packet)
The
proxy.response structure is used when you
want to return your own MySQL response, instead of forwarding a
packet that you have received from a backend server. The structure
holds the response type information, an optional error message,
and the result set (rows/columns) that you want to return.
When using
proxy.response you either set
proxy.response.type to
proxy.MYSQLD_PACKET_OK and then build
resultset to contain the results that you
want to return, or set
proxy.response.type to
proxy.MYSQLD_PACKET_ERR and set the
proxy.response.errmsg to a string with the
error message. To send the completed response to the client, return proxy.PROXY_SEND_RESULT from the function. The resultset structure contains the information about the entire
result set, with the individual elements of the data shown in
the table below.
For an example of the population of this table, see Section 14.6.4.2, “Internal Structures”.

A number of additional constants specify the state of the backend server (the MySQL server to which the proxy is connected) or the type of backend server. These items are entries within the main proxy table. The following values are defined.

When a client connects you can, for example, report the backend server that the connection will use:

function connect_server()
    print("using backend: " .. proxy.backends[proxy.connection.backend_ndx].address)
end

In this example the IP address/port combination is displayed by accessing the information from the internal proxy.backends table.
Handshake information is supplied to the read_handshake() function within a Lua table, passed as the only argument to the function. The table contains the following fields:
mysqld_version — the version of the
MySQL server.
thread_id — the thread ID.
scramble — the password scramble
buffer.
server_addr — the IP address of the
server.
client_addr — the IP address of the
client.
For example, you can print out the handshake data and refuse clients by IP address with the following function:
function read_handshake( auth )
    print("<-- let's send him some information about us")
    print("    mysqld-version: " .. auth.mysqld_version)
    print("    thread-id     : " .. auth.thread_id)
    print("    scramble-buf  : " .. string.format("%q", auth.scramble))
    print("    server-addr   : " .. auth.server_addr)
    print("    client-addr   : " .. auth.client_addr)

    if not auth.client_addr:match("^127.0.0.1:") then
        proxy.response.type = proxy.MYSQLD_PACKET_ERR
        proxy.response.errmsg = "only local connects are allowed"
        print("we don't like this client")
        return proxy.PROXY_SEND_RESULT
    end
end
Note that you have to return an error packet to the client by using proxy.PROXY_SEND_RESULT.

You can obtain the user name and password supplied during authorization within the read_auth() function:

function read_auth( auth )
    print("  username : " .. auth.username)
    print("  password : " .. string.format("%q", auth.password))
end
The read_query() function is triggered once for each query submitted by the client. In a typical script you first check that the packet contains a query (see Section 14.6.4.2, “Internal Structures”), then extract the query from the packet (and, for example, print it). The proxy then executes the queries that you have placed into the queue. If you do not modify the original query or the queue, the query is passed to the backend server verbatim and the client receives the resulting resultset. When operating in a passive mode, during profiling for example, you want to identify the original query and the corresponding resultset so that the client receives exactly the results it expects; you can do this by tagging queries with an ID when appending them to the queue, and filtering on that ID in read_query_result().
The
read_query_result() function is called for each
result set returned by the server only if you have manually
injected queries into the query queue. If you have not
manipulated the query queue then this function is not called.
The function supports a single argument, the result packet,
which provides a number of properties:
id — the ID of the result set,
which corresponds to the ID that was set when the query
packet was submitted to the server when using
append(id) on the query queue.

query_time — the time, in microseconds, taken to execute the query on the server.

response_time — the time, in microseconds, taken to return the data for the query.

For example, you can print out the query time and response time (that is, the time to execute the query and the time to return the data for the query) for each query sent to the server:
function read_query( packet )
    if packet:byte() == proxy.COM_QUERY then
        print("we got a normal query: " .. packet:sub(2))
        proxy.queries:append(1, packet )
        return proxy.PROXY_SEND_QUERY
    end
end

function read_query_result(inj)
    print("query-time: " .. (inj.query_time / 1000) .. "ms")
    print("response-time: " .. (inj.response_time / 1000) .. "ms")
end
You can access the rows of returned results from the resultset
by accessing the rows property of the resultset property of the
result that is exposed through
read_query_result(). For example, you can
iterate over the results showing the first column from each row
using this Lua fragment:
for row in inj.resultset.rows do
    print("injected query returned: " .. row[1])
end

The following example injects additional
NOW() statements into the query queue, giving them a
different ID to the ID of the original query. Within
read_query_result(), if the ID for the
injected queries is identified, we display the result row, and
return the
proxy.PROXY_IGNORE_RESULT from the
function so that the result is not returned to the client. If
the result is from any other query, we print out the query time
information for the query and return the default, which passes
on the result set unchanged. We could also have explicitly
returned proxy.PROXY_SEND_RESULT to pass the result set on to the MySQL client.
function read_query( packet )
    if packet:byte() == proxy.COM_QUERY then
        proxy.queries:append(2, string.char(proxy.COM_QUERY) .. "SELECT NOW()" )
        proxy.queries:append(1, packet )
        proxy.queries:append(2, string.char(proxy.COM_QUERY) .. "SELECT NOW()" )
        return proxy.PROXY_SEND_QUERY
    end
end

function read_query_result(inj)
    if inj.id == 2 then
        for row in inj.resultset.rows do
            print("injected query returned: " .. row[1])
        end
        return proxy.PROXY_IGNORE_RESULT
    else
        print("query-time: " .. (inj.query_time / 1000) .. "ms")
        print("response-time: " .. (inj.response_time / 1000) .. "ms")
    end
end
The mysql-proxy administration interface can be accessed using any MySQL client using the standard protocols. You can use the administration interface to gain information about the proxy server as a whole - standard connections to the proxy are isolated to operate as if you were connected directly to the backend MySQL server. Currently, the interface supports a limited set of functionality designed to provide connection and configuration information.
Because connectivity is provided over the standard MySQL
protocol, you must access this information using SQL syntax. By
default, the administration port is configured as 4041. You can
change this port number using the
--admin-address command-line option.
To get a list of the currently active connections to the proxy:
mysql> select * from proxy_connections;
+------+--------+-------+------+
| id   | type   | state | db   |
+------+--------+-------+------+
|    0 | server |     0 |      |
|    1 | proxy  |     0 |      |
|    2 | server |    10 |      |
+------+--------+-------+------+
3 rows in set (0.00 sec)
To get the current configuration:
mysql> select * from proxy_config;
+----------------------------+------------+
| option                     | value      |
+----------------------------+------------+
| admin.address              | :4041      |
| proxy.address              | :4040      |
| proxy.lua_script           | mc.lua     |
| proxy.backend_addresses[0] | mysql:3306 |
| proxy.fix_bug_25371        | 0          |
| proxy.profiling            | 1          |
+----------------------------+------------+
6 rows in set (0.01 sec)
Questions
14.6.6.1: Is the system context switch expensive, how much overhead does the lua script add?
14.6.6.2: How do I use a socket with MySQL Proxy? Proxy change logs mention that support for UNIX sockets has been added.
14.6.6.3: Can I use MySQL Proxy with all versions of MySQL?
14.6.6.4: If MySQL Proxy has to live on same machine as MySQL, are there any tuning considerations to ensure both perform optimally?
14.6.6.5: Do proxy applications run on a separate server? If not, what is the overhead incurred by Proxy on the DB server side?
14.6.6.6: Can MySQL Proxy handle SSL connections?
14.6.6.7:
What is the limit for
max-connections on the
server?
14.6.6.8: As the script is re-read by proxy, does it cache this or is it looking at the file system with each request?
14.6.6.9: With load balancing, what happen to transactions ? Are all queries sent to the same server ?
14.6.6.10: Can I run MySQL Proxy as a daemon?
14.6.6.11: What about caching the authorization info so clients connecting are given back-end connections that were established with identical authorization information, thus saving a few more round trips?
14.6.6.12: Could MySQL Proxy be used to capture passwords?
14.6.6.13: Can MySQL Proxy be used on slaves and intercept binlog messages?
14.6.6.14: MySQL Proxy can handle about 5000 connections, what is the limit on a MySQL server?
14.6.6.15: How does MySQL Proxy compare to DBSlayer ?
14.6.6.16: I currently use SQL Relay for efficient connection pooling with a number of apache processes connecting to a MySQL server. Can MySQL proxy currently accomplish this. My goal is to minimize connection latency while keeping temporary tables available.
14.6.6.17: The global namespace variable example with quotas does not persist after a reboot, is that correct?
14.6.6.18: I tried using MySQL Proxy without any Lua script to try a round-robin type load balancing. In this case, if the first database in the list is down, MySQL Proxy would not connect the client to the second database in the list.
14.6.6.19: Would the Java-only connection pooling solution work for multiple web servers? With this, I'd assume you can pool across many web servers at once?
14.6.6.20: Is the MySQL Proxy an API ?
14.6.6.21: If you have multiple databases on the same box, can you use proxy to connect to databases on default port 3306?
14.6.6.22: Will Proxy be deprecated for use with connection pooling once MySQL 6.x comes out? Or will 6.x integrate proxy more deeply?
14.6.6.23: In load balancing, how can I separate reads from writes?
14.6.6.24: We've looked at using MySQL Proxy but we're concerned about the alpha status - when do you think the proxy would be considered production ready?
14.6.6.25: Will the proxy road map involve moving popular features from lua to C? For example Read/Write splitting
14.6.6.26: Are these reserved function names (for example, error_result) that get automatically called?
14.6.6.27: Can you explain the status of your work with memcached and MySQL Proxy?
14.6.6.28: Is there any big web site using MySQL Proxy ? For what purpose and what transaction rate have they achieved.
14.6.6.29: So the authentication when connection pooling has to be done at every connection? What's the authentication latency?
14.6.6.30: Is it possible to use the MySQL proxy w/ updating a Lucene index (or Solr) by making TCP calls to that server to update?
14.6.6.31: Isn't MySQL Proxy similar to what is provided by Java connection pools?
14.6.6.32: Are there tools for isolating problems? How can someone figure out if a problem is in the client, in the db or in the proxy?
14.6.6.33: Can you dynamically reconfigure the pool of MySQL servers that MySQL Proxy will load balance to?
14.6.6.34:
Given that there is a
connect_server
function, can a Lua script link up with multiple servers?
14.6.6.35: Adding a proxy must add latency to the connection, how big is that latency?
14.6.6.36: In the quick poll, I see "Load Balancer: read-write splitting" as an option, so would it be correct to say that there are no scripts written for Proxy yet to do this?
14.6.6.37:
Is it "safe" to use
LuaSocket with proxy
scripts?
14.6.6.38: How different is MySQL Proxy from DBCP (Database connection pooling) for Apache in terms of connection pooling?
14.6.6.39: Do you have make one large script and call at proxy startup, can I change scripts without stopping and restarting (interrupting) the proxy?
Questions and Answers
14.6.6.1: Is the system context switch expensive, how much overhead does the lua script add?
Lua is fast and the overhead should be small enough for most applications. The raw packet-overhead is around 400 microseconds.
14.6.6.2: How do I use a socket with MySQL Proxy? Proxy change logs mention that support for UNIX sockets has been added.
Just specify the path to the socket:
--proxy-backend-addresses=/path/to/socket
However it appears that
--proxy-address=/path/to/socket does not work
on the front end. It would be nice if someone added this
feature.
14.6.6.3: Can I use MySQL Proxy with all versions of MySQL?
MySQL Proxy is designed to work with MySQL 5.0 or higher, and supports the MySQL network protocol for 5.0 and higher.
14.6.6.4: If MySQL Proxy has to live on same machine as MySQL, are there any tuning considerations to ensure both perform optimally?
MySQL Proxy can live on any box: application, db or its own box. MySQL Proxy uses comparatively little CPU or RAM, so additional requirements or overhead is negligible.
14.6.6.5: Do proxy applications run on a separate server? If not, what is the overhead incurred by Proxy on the DB server side?
You can run the proxy on the application server, on its own box, or on the DB server, depending on the use case.
14.6.6.6: Can MySQL Proxy handle SSL connections?
No, being the man-in-the-middle, Proxy can't handle encrypted sessions because it cannot share the SSL information.
14.6.6.7:
What is the limit for
max-connections on the
server?
At around 1024 connections the MySQL server may run out of threads it can spawn. Leaving it at around 100 is advised.
14.6.6.8: As the script is re-read by proxy, does it cache this or is it looking at the file system with each request?
It looks for the script at client-connect and reads it if it has changed, otherwise it uses the cached version.
14.6.6.9: With load balancing, what happen to transactions ? Are all queries sent to the same server ?
Without any special customization the whole connection is sent to the same server. That keeps the whole connection state intact.
14.6.6.10: Can I run MySQL Proxy as a daemon?
Starting from version 0.6.0, the Proxy is launched as a daemon
by default. If you want to avoid this, use the
-D or
--no-daemon option.
To keep track of the process ID, the daemon can be started with
the additional option
--pid-file=file, to
save the PID to a known file name. In version 0.5.x, the Proxy cannot be started natively as a daemon.
14.6.6.11: What about caching the authorization info so clients connecting are given back-end connections that were established with identical authorization information, thus saving a few more round trips?
There is an option that provides this functionality
--proxy-pool-no-change-user.
14.6.6.12: Could MySQL Proxy be used to capture passwords?
The MySQL network protocol does not allow passwords to be sent in clear-text, all you could capture is the encrypted version.
14.6.6.13: Can MySQL Proxy be used on slaves and intercept binlog messages?
We are working on that.
14.6.6.14: MySQL Proxy can handle about 5000 connections, what is the limit on a MySQL server?
See your max-connections setting. By default the setting is 150; the proxy can handle a lot more.
14.6.6.15: How does MySQL Proxy compare to DBSlayer ?
DBSlayer is a REST->MySQL tool, MySQL Proxy is transparent to your application. No change to the application is needed.
14.6.6.16: I currently use SQL Relay for efficient connection pooling with a number of apache processes connecting to a MySQL server. Can MySQL proxy currently accomplish this. My goal is to minimize connection latency while keeping temporary tables available.
Yes.
14.6.6.17: The global namespace variable example with quotas does not persist after a reboot, is that correct?
Yes. If you restart the proxy, you lose the results, unless you save them in a file.
14.6.6.18: I tried using MySQL Proxy without any Lua script to try a round-robin type load balancing. In this case, if the first database in the list is down, MySQL Proxy would not connect the client to the second database in the list.
This issue is fixed in version 0.7.0.
14.6.6.19: Would the Java-only connection pooling solution work for multiple web servers? With this, I'd assume you can pool across many web servers at once?
Yes. But you can also start one proxy on each application server to get a similar behaviour as you have it already.
14.6.6.20: Is the MySQL Proxy an API ?
No, MySQL Proxy is an application that forwards packets from a client to a server using the MySQL network protocol. The MySQL Proxy provides an API allowing you to change its behaviour.
14.6.6.21: If you have multiple databases on the same box, can you use proxy to connect to databases on default port 3306?
Yes, MySQL Proxy can listen on any port, provided that none of the MySQL servers are listening on the same port.
14.6.6.22: Will Proxy be deprecated for use with connection pooling once MySQL 6.x comes out? Or will 6.x integrate proxy more deeply?
The logic about the pooling is controlled by the lua scripts, you can enable and disable it if you like. There are no plans to embed the current MySQL Proxy functionality into the MySQL Server.
14.6.6.23: In load balancing, how can I separate reads from writes?
There is no automatic separation of queries that perform reads or writes to the different backend servers. However, you can specify to mysql-proxy that one or more of the backend MySQL servers are read-only.
$ mysql-proxy \
    --proxy-backend-addresses=10.0.1.2:3306 \
    --proxy-read-only-backend-addresses=10.0.1.3:3306 &
In the next releases we will add connection pooling and read/write splitting to make this more useful. See also MySQL Load Balancer.
14.6.6.24: We've looked at using MySQL Proxy but we're concerned about the alpha status - when do you think the proxy would be considered production ready?
We are on the road to the next feature release: 0.7.0. It will improve the performance quite a bit. After that we may be able to enter the beta phase.
14.6.6.25: Will the proxy road map involve moving popular features from lua to C? For example Read/Write splitting
We will keep the high-level parts in the Lua layer to be able to adjust to special situations without a rebuild. Read/Write splitting sometimes needs external knowledge that may only be available by the DBA.
14.6.6.26: Are these reserved function names (for example, error_result) that get automatically called?
Only functions and values starting with
proxy.* are provided by the proxy. All others
are provided by you.
14.6.6.27: Can you explain the status of your work with memcached and MySQL Proxy?
There are some ideas to integrate proxy and memcache a bit, but no code yet.
14.6.6.28: Is there any big web site using MySQL Proxy ? For what purpose and what transaction rate have they achieved.
Yes, gaiaonline. They have tested MySQL Proxy and seen it handle 2400 queries per second through the proxy.
14.6.6.29: So the authentication when connection pooling has to be done at every connection? What's the authentication latency?
You can skip the round-trip and use the connection as it was added to the pool, as long as the application cleans up the temporary tables it used. The overhead is (as always) around 400 microseconds.
14.6.6.30: Is it possible to use the MySQL proxy w/ updating a Lucene index (or Solr) by making TCP calls to that server to update?
Yes, but it isn't advised for now.
14.6.6.31: Isn't MySQL Proxy similar to what is provided by Java connection pools?
Yes and no. Java connection pools are specific to Java applications, MySQL Proxy works with any client API that talks the MySQL network protocol. Also, connection pools do not provide any functionality for intelligently examining the network packets and modifying the contents.
14.6.6.32: Are there tools for isolating problems? How can someone figure out if a problem is in the client, in the db or in the proxy?
You can set a debug script in the proxy, which is an exceptionally good tool for this purpose. You can see very clearly which component is causing the problem, if you set the right breakpoints.
14.6.6.33: Can you dynamically reconfigure the pool of MySQL servers that MySQL Proxy will load balance to?
Not yet; it is on the list. We are working on an administration interface for that purpose.
14.6.6.34:
Given that there is a
connect_server
function, can a Lua script link up with multiple servers?
The proxy provides some tutorials in the source-package, one is
examples/tutorial-keepalive.lua.
14.6.6.35: Adding a proxy must add latency to the connection, how big is that latency?
In the range of 400 microseconds.
14.6.6.36: In the quick poll, I see "Load Balancer: read-write splitting" as an option, so would it be correct to say that there are no scripts written for Proxy yet to do this?
There is a proof of concept script for that included, but it's far from perfect and may not work for you yet.
14.6.6.37:
Is it "safe" to use
LuaSocket with proxy
scripts?
You can, but it is not advised as it may block.
14.6.6.38: How different is MySQL Proxy from DBCP (Database connection pooling) for Apache in terms of connection pooling?
Connection pooling is just one use case of MySQL Proxy. You can use it for a lot more, and it works in cases where you can't use DBCP (for example, when you aren't using Java).
14.6.6.39: Do you have to make one large script and call it at proxy startup, or can you change scripts without stopping and restarting (interrupting) the proxy?
You can just change the script and the proxy will reload it when a client connects.
Welcome to the FlashGen.Com Flex Builder Ant Script page. Here you will find the latest information on the Flex Builder Ant scripts, developed to make your development processes smoother and easier.
Articles on how to use these scripts:
- Manifest and Config overview
- Setting up the FlashGen.Com Ant scripts
At this point in time these scripts will only work on Mac OS X 10.4+ (tested) and *nix (untested), because they make heavy use of bash shell commands. I suspect you would have no trouble using them on Windows, but you will need to install Cygwin. A Windows version is planned for the future, but no firm release date has been set.
Currently these scripts include the following:
- Generate a custom Manifest.xml file (used in the Flex Library, advanced compiling and namespace management)
- Generate a custom Config.xml file (allowing the overriding of compiler information and remove the need to set flags in the preferences panel)
- ASDoc support from within Flex Builder (this one is a biggy for Mac users, as there is a pathing bug in the ASDoc scripts that prevents paths with spaces from working correctly)
Things on the todo list
Download the latest version of the scripts here*
*Please note these are provided without warranty or support and FlashGen.Com cannot be held responsible if these scripts achieve world dominance or just mess stuff up!
Change Log
0.8
0.7
0.6
0.5
0.4
0.3
0.2
0.1
Known Issues:
0.7: If your main application file name is lower case the XML manifest will not convert it (Regex)
0.5: Additional Ant parameter being passed in task to create custom flex-config XML
0.3: Regex doesn’t comment out root MXML file by default
0.1: Regex issue when writing directory listing (ordering is not correct)