text stringlengths 454 608k | url stringlengths 17 896 | dump stringclasses 91 values | source stringclasses 1 value | word_count int64 101 114k | flesch_reading_ease float64 50 104 |
|---|---|---|---|---|---|
log4net is built on a number of different frameworks. Each new version of the frameworks add new features. To take advantage of these new features we must build log4net using the appropriate framework. We also maintain builds compatible with older versions of the frameworks.
It is important to remember that the .NET frameworks support backward compatibility, that is a new version of the framework will run binary assemblies that were targeted to previous versions of the framework.
While the number of different builds available may seem confusing, you only need to select the nearest build for your platform that is equal to or earlier than your chosen deployment framework. If you intend to deploy your application on the Microsoft .NET Framework 1.0 don't pick the log4net build that is built against the Microsoft .NET Framework 1.1 because the .NET framework does not guarantee forward compatibility only backward compatibility.
The lowest common denominator build is the CLI 1.0 Compatible build. This build is compatible with the ECMA/ISO CLI 1.0 standard APIs and will run on all frameworks that support the standard. (Note that the Microsoft .NET Compact Framework does not support this standard). Use this build if you intend to deploy you application on both the Microsoft .NET Frameworks and the Mono frameworks.
log4net now builds on 6 frameworks:
For each of these frameworks a log4net assembly targeting the framework is supplied. Although it's perfectly possible to use the .NET Framework 1.0 version of log4net on the .NET Framework 1.1, having an assembly that really targets a specific framework allows us to use features in that framework that are not available in other frameworks or remove features from log4net that are not supported in a specific framework.
The appenders available to each framework depend on the functionality of the framework and the platform it runs on:
none
none
none
For Smart-device applications, the log4net system can be configured by passing the location of the log4net configuration file to the log4net.Config.XmlConfigurator.Configure(FileInfo) method in the entry point of the application.
For example:
namespace TestApp { using System.IO; public class EntryPoint { /// <summary> /// Application entry point. /// </summary> public static void Main() { //(); } } }
Applications will need to programmatically shutdown the log4net system during the application's shutdown using the log4net.LogManager.Shutdown() method in order to prevent losing logging events. See the code above for an example.
There are 2 separate builds of log4net for mono; Mono 1.0, built using the C# compiler in a mode which is compatible with the CLI 1.0 language specification, and; Mono 2.0, built using the .NET 2.0 extensions to the C# language.
none
none
This build of log4net is designed to run on any ECMA CLI 1.0 compatible runtime. The assembly does not support any platform specific features. The build includes the common subset of functionality found in the .NET 1.0 and Mono 1.0 builds. The output assembly is built using the Microsoft .NET 1.0 compiler and library.
The log4net CLI 1.0 assembly is runtime compatible with the following frameworks: | http://logging.apache.org/log4net/release/framework-support.html | crawl-001 | refinedweb | 522 | 61.02 |
C# allows to use one loop inside another loop. Following section shows few examples to illustrate the concept.
The syntax for a nested for loop statement in C# is as follows −
for ( init; condition; increment ) { for ( init; condition; increment ) { statement(s); } statement(s); }
The syntax for a nested while loop statement in C# is as follows −
while(condition) { while(condition) { statement(s); } statement(s); }
The syntax for a nested do...while loop statement in C# is as follows −
do { statement(s); do { statement(s); } while( condition ); } while( condition );
A final note on loop nesting is that you can put any type of loop inside of any other type of loop. For example a for loop can be inside a while loop or vice versa.
The following program uses a nested for loop to find the prime numbers from 2 to 100 −
using System; namespace Loops { class Program { static void Main(string[] args) { /* local variable definition */ int i, j; for (i = 2; i < 100; i++) { for (j = 2; j <= (i / j); j++) if ((i % j) == 0) break; // if factor found, not prime if (j > (i / j)) Console.WriteLine("{0} is prime", i); } Console.ReadLine(); } } } | https://www.tutorialspoint.com/csharp/csharp_nested_loops.htm | CC-MAIN-2020-16 | refinedweb | 193 | 62.31 |
)). Placing these messages in a service queue).
This.)
This drivers.
The word module is used differently when talking about drivers. A device driver is a kernel-loadable module that provides the interface between a device and the Device Driver Interface, and is linked to the kernel when it is first invoked.
STREAMS drivers share a basic programming model with STREAMS modules. Information common to both drivers and modules is discussed in Chapter 10, STREAMS Modules. After summarizing some basic device driver concepts, this chapter discusses several topics specific to STREAMS device drivers (and not covered elsewhere) and then presents code samples illustrating basic STREAMS driver processing.
A device driver is a loadable kernel module that translates between an I/O device and the kernel to operate the device.
Device drivers can also be software-only, implementing a pseudo-device such as RAM disk or a pseudo-terminal that only exists in software.
In the Solaris operating environment, the interface between the kernel and device drivers is called the Device Driver Interface (DDI/DKI). This interface is specified in the Section 9E manual pages that specify the driver entry points. Section 9 also details the kernel data structures (9S) and utility functions (9F) available to drivers.
The DDI protects the kernel from device specifics. Application programs and the rest of the kernel need little (if any) device-specific code to use the device. The DDI makes the system more portable and easier to maintain.
There are three basic types of device drivers corresponding to the three basic types of devices. Character devices handle data serially and transfer data to and from the processor one character at a time, the same as keyboards and low performance printers. Serial block devices and drivers also handle data serially, but transfer data to and from memory without processor intervention, the same as tape drives. Direct access block devices and drivers also transfer data without processor intervention and blocks of storage on the device can be addressed directly, the same as disk drives.
There are two types of character device drivers: standard character device drivers and STREAMS device drivers. STREAMS is a separate programming model for writing a character driver. Devices that receive data asynchronously (such as terminal and network devices) are suited to a STREAMS implementation.
STREAMS drivers share some kinds of processing with STREAMS modules. Important differences between drivers and modules include how the application manipulates drivers and modules and how interrupts are handled. In STREAMS, drivers are opened and modules are pushed. A device driver has an interrupt routine to process hardware interrupts.
STREAMS drivers have five different points of contact with the kernel:Table 9–1 Kernel Contact Points.
The initialization entry points of STREAMS drivers must perform the same tasks as those of non-STREAMS drivers. See Writing Device Drivers for more information.
In non-STREAMS drivers, most of the driver's work is accomplished through the entry points in the cb_ops(9S) structure. For STREAMS drivers, most of the work is accomplished through the message-based STREAMS queue processing entry points.
Figure 9–1 shows multiple streams (corresponding to minor devices) connecting to a common driver. There are two distinct streams opened from the same major device. Consequently, they have the same streamtab and the same driver procedures.
Multiple instances (minor devices) of the same driver are handled during the initial open for each device. Typically, a driver stores the queue address in a driver-private structure that is uniquely identified by the minor device number. (The DDI/DKI provides a mechanism for uniform handling of driver-private structures; see ddi_soft_state(9F)). The q_ptr of the queue points to the private data structure entry. When the messages are received by the queue, the calls to the driver put and service procedures pass the address of the queue, enabling the procedures to determine the associated device through the q_ptr field.
STREAMS guarantees that only one open or close can be active at a time per major/minor device pair.
STREAMS.
Most hardware drivers have an interrupt handler routine. You must supply an interrupt routine for the device's driver. The interrupt handling for STREAMS drivers is not fundamentally different from that for other device drivers. Drivers usually register interrupt handlers in their attach(9E)entry point, using ddi_add_intr(9F). Drivers unregister the interrupt handler at detach time using ddi_remove_intr(9F).
The system also supports software interrupts. The routines ddi_add_softintr(9F) and ddi_remove_softintr(9F) register and unregister (respectively) soft-interrupt handlers. A software interrupt is generated by calling ddi_trigger_softintr(9F).
See Writing Device Drivers for more information.
STREAMS drivers can prevent unloading through the standard driver detach(9E) entry point.
The following ).
STREAMS device drivers are in many ways similar to non-STREAMS device drivers. The following points summarize the differences between STREAMS drivers and other drivers:
Drivers must have attach(9E) and probe.
For more information on global driver issues and non-STREAMS drivers, see Writing Device Drivers.
This chapter provides specific examples of how modules work, including code. These it
In addition, when configuring the module list, an optional anchor can be placed within the module list. See STREAMS Anchors for more information.
When the module list is cleared, a range of minor devices has to be cleared as a range and not in parts.
The SAD driver is accessed through the /dev/sad/admin or /dev/sad/user node. After the device is initialized, a program can perform any autopush configuration. The program should open the SAD driver, read a configuration file to find out what modules need to be configured for which devices, format the information into strapush structures, and make the SAD_SAP ioctl(2) calls. See the sad(7D) man page for more information.
All autopush operations are performed through SAD_SAP ioctl(2) commands to set or get autopush information. Only the root user can set autopush information, but any user can get the autopush information for a device.
The SAD_SAP ioctl is a form of ioctl(fd, cmd, arg), where fd is the file descriptor of the SAD driver, cmd is either SAD_SAP (set autopush information) or SAD_GAP (get autopush information), and arg is a pointer to the structure strapush.
The strapush structure is shown in the following example:
/* * maximum number of modules that can be pushed on a * stream using the autopush feature should be no greater * than nstrpush */ #define MAXAPUSH 8 /* autopush information common to user and kernel */ struct apcommon { uint apc_cmd; /* command - see below */ major_t apc_major; /* major device number */ minor_t apc_minor; /* minor device number */ minor_t apc_lastminor; /* last minor dev # for range */ uint apc_npush; /* number of modules to push */ }; /* ap_cmd - various options of autopush */ #define SAP_CLEAR 0 /* remove configuration list */ #define SAP_ONE 1 /* configure one minor device */ #define SAP_RANGE 2 /* config range of minor devices */ #define SAP_ALL 3 /* configure all minor devices */ /* format of autopush ioctls */ struct strapush { struct apcommon sap_common; char sap_list[MAXAPUSH] [FMNAMESZ + 1]; /* module list */ }; #define sap_cmd sap_common.apc_cmd #define sap_major sap_common.apc_major #define sap_minor sap_common.apc_minor #define sap_lastminor sap_common.apc_lastminor #define sap_npush sap_common.apc_npush
A device is identified by its major device number, sap_major. The SAD_SAP ioctl(2) has the following options:. MAXAPUSH defines the maximum number of modules to push automatically.
A user can query the current configuration status of a given major/minor device by issuing the SAD_GAP ioctl(2) with sap_major and sap_minor values of the device set. On successful return from this system call, the strapush structure is filled in with the corresponding information for the device. The maximum number of entries. Minor numbers start and end at 0, creating only one minor number. The modules automatically pushed are ldterm and ttcompat. The second line configures the zs driver whose minor device numbers are 0 and 1, and automatically pushes the same modules. The last line configures the ptsl driver whose minor device numbers are from 0 to 15, and automatically pushes the same modules..
This can make effective use of the available parallelism of a symmetric shared-memory multiprocessor computer. All kernel subsystems are multithreaded: scheduler, virtual memory, file systems, block/character/STREAMS I/O, networking protocols, and device drivers.
MT STREAMS requires you to use some.
Simultaneous execution.
Suspending execution for the next thread to run.
Portion of code that is single-threaded.
Exclusive access to a data element by a single thread at one time.
Kernel event synchronization primitives.
Memory-based synchronization mechanism.
Data lock allowing one writer or many readers at one time.
On module or a driver can be either MT SAFE or MT UNSAFE. A module or driver is MT SAFE when its data values are correct regardless of the order that multiple threads access and modify the data. For
To configure a module as being MT SAFE, use the f_flag field in fmodsw(9S). (see MT STREAMS Perimeters for information on perimeters)..
Your MT SAFE modules should use perimeters and avoid using module private locks (mutex, condition variables, readers/writer, or semaphore). Should you opt to use module private locks, you need to read MT SAFE Modules Using Explicit Locks along with this section.
MT UNSAFE mode for STREAMS modules was temporarily supported as an aid in porting SVR4 modules; however, MT UNSAFE is not supported after SVR4. Beginning with the release of the Solaris 7 operating environment, no MT UNSAFE module or driver has been supported.
Upper and lower multiplexors share the same perimeter type and concurrency level.
To).)
This).
Although. porting a STREAMS module or driver from the SunOS 4 system to the SunOS 5 system, the module should be examined with respect to the following areas:
The SunOS 5 Device Driver Interface (DDI/DKI)
The SunOS 5 MT design
For portability and correct operation, each module must adhere to the SunOS DDI/DKI. Several facilities available in previous releases of the SunOS system have changed and can take different arguments, or produce different side effects, or no longer exist in the SunOS 5 system. The module writer should carefully review the module with respect to the DDI/DKI.
Each module that accesses underlying Sun-specific features included in the SunOS 5 system should conform to the Device Driver Interface. The SunOS 5 DDI defines the interface used by the device driver to register device hardware interrupts, access device node properties, map device slave memory, and establish and synchronize memory mappings for DVMA (Direct Virtual Memory Access). These areas are primarily applicable to hardware device drivers. Refer to the Device Driver Interface Specification within the Writing Device Drivers for details on the SunOS 5 DDI and DVMA.
The kernel networking subsystem in the SunOS 5 system is based on STREAMS. Datalink drivers that used the ifnet interface in the SunOS 4 system must be converted to use DLPI for the SunOS 5 system. Refer to the Data Link Provider Interface, Revision 2 specification.
After reviewing the module for conformance to the SunOS 5 DKI and DDI specifications, you should be able to consider the impact of multithreading on the module.
Example 12–1 is a sample multithreaded, loadable, STREAMS pseudo-driver. The driver MT design is the simplest possible based on using a per module inner perimeter. Thus, only one thread can execute in the driver at any time. In addition, a quntimeout(9F) synchronous callback routine is used. The driver cancels an outstanding qtimeout(9F) by calling quntimeout(9F) in the close routine. See close() Race Conditions.
/* * Example SunOS 5 multithreaded STREAMS pseudo device driver. * Using a D_MTPERMOD inner perimeter. */ #include <sys/types.h> #include <sys/errno.h> #include <sys/stropts.h> #include <sys/stream.h> #include <sys/strlog.h> #include <sys/cmn_err.h> #include <sys/modctl.h> #include <sys/kmem.h> #include <sys/conf.h> #include <sys/ksynch.h> #include <sys/stat.h> #include <sys/ddi.h> #include <sys/sunddi.h> /* * Function prototypes. */ static int xxidentify(dev_info_t *); static int xxattach(dev_info_t *, ddi_attach_cmd_t); static int xxdetach(dev_info_t *, ddi_detach_cmd_t); static int xxgetinfo(dev_info_t *,ddi_info_cmd_t,void *,void**); static int xxopen(queue_t *, dev_t *, int, int, cred_t *); static int xxclose(queue_t *, int, cred_t *); static int xxwput(queue_t *, mblk_t *); static int xxwsrv(queue_t *); static void xxtick(caddr_t); /* * Streams Declarations */ static struct module_info xxm_info = { 99, /* mi_idnum */ "xx", /* mi_idname */ 0, /* mi_minpsz */ INFPSZ, /* mi_maxpsz */ 0, /* mi_hiwat */ 0 /* mi_lowat */ }; static struct qinit xxrinit = { NULL, /* qi_putp */ NULL, /* qi_srvp */ xxopen, /* qi_qopen */ xxclose, /* qi_qclose */ NULL, /* qi_qadmin */ &xxm_info, /* qi_minfo */ NULL /* qi_mstat */ }; static struct qinit xxwinit = { xxwput, /* qi_putp */ xxwsrv, /* qi_srvp */ NULL, /* qi_qopen */ NULL, /* qi_qclose */ NULL, /* qi_qadmin */ &xxm_info, /* qi_minfo */ NULL /* qi_mstat */ }; static struct streamtab xxstrtab = { &xxrinit, /* st_rdinit */ &xxwinit, /* st_wrinit */ NULL, /* st_muxrinit */ NULL /* st_muxwrinit */ }; /* * define the xx_ops structure. */ static struct cb_ops cb_xx */ &xxstrtab, /* cb_stream */ (D_NEW|D_MP|D_MTPERMOD) /* cb_flag */ }; static struct dev_ops xx_ops = { DEVO_REV, /* devo_rev */ 0, /* devo_refcnt */ xxgetinfo, /* devo_getinfo */ xxidentify, /* devo_identify */ nodev, /* devo_probe */ xxattach, /* devo_attach */ xxdetach, /* devo_detach */ nodev, /* devo_reset */ &cb_xx_ops, /* devo_cb_ops */ (struct bus_ops *)NULL /* devo_bus_ops */ }; /* * Module linkage information for the kernel. */ static struct modldrv modldrv = { &mod_driverops, /* Type of module. This one is a driver */ "xx", /* Driver name */ &xx_ops, /* driver ops */ }; static struct modlinkage modlinkage = { MODREV_1, &modldrv, NULL }; /* * Driver private data structure. One is allocated per Stream. */ struct xxstr { struct xxstr *xx_next; /* pointer to next in list */ queue_t *xx_rq; /* read side queue pointer */ minor_t xx_minor; /* minor device # (for clone) */ int xx_timeoutid; /* id returned from timeout() */ }; /* * Linked list of opened Stream xxstr structures. * No need for locks protecting it since the whole module is * single threaded using the D_MTPERMOD perimeter. */ static struct xxstr *xxup = NULL; /* * Module Config entry points */ _init(void) { return (mod_install(&modlinkage)); } _fini(void) { return (mod_remove(&modlinkage)); } _info(struct modinfo *modinfop) { return (mod_info(&modlinkage, modinfop)); } /* * Auto Configuration entry points */ /* Identify device. */ static int xxidentify(dev_info_t *dip) { if (strcmp(ddi_get_name(dip), "xx") == 0) return (DDI_IDENTIFIED); else return (DDI_NOT_IDENTIFIED); } /* Attach device. */ static int xxattach(dev_info_t *dip, ddi_attach_cmd_t cmd) { /* This creates the device node. */ if (ddi_create_minor_node(dip, "xx", S_IFCHR, ddi_get_instance(dip), DDI_PSEUDO, CLONE_DEV) == DDI_FAILURE) { return (DDI_FAILURE); } ddi_report_dev(dip); return (DDI_SUCCESS); } /* Detach device. */ static int xxdetach(dev_info_t *dip, ddi_detach_cmd_t cmd) { ddi_remove_minor_node(dip, NULL); return (DDI_SUCCESS); } /* ARGSUSED */ static int xxgetinfo(dev_info_t *dip, ddi_info_cmd_t infocmd, void *arg, void **resultp) { dev_t dev = (dev_t) arg; int instance, ret = DDI_FAILURE; devstate_t *sp; state *statep; instance = getminor(dev); switch (infocmd) { case DDI_INFO_DEVT2DEVINFO: if ((sp = ddi_get_soft_state(statep, getminor((dev_t) arg))) != NULL) { *resultp = sp->devi; ret = DDI_SUCCESS; } else *result = NULL; break; case DDI_INFO_DEVT2INSTANCE: *resultp = (void *)instance; ret = DDI_SUCCESS; break; default: break; } return (ret); } static xxopen(rq, devp, flag, sflag, credp) queue_t *rq; dev_t *devp; int flag; int sflag; cred_t *credp; { struct xxstr *xxp; struct xxstr **prevxxp; minor_t minordev; /* If this stream already open - we're done. */ if (rq->q_ptr) return (0); /* Determine minor device number. */ prevxxp = & xxup; if (sflag == CLONEOPEN) { minordev = 0; while ((xxp = *prevxxp) != NULL) { if (minordev < xxp->xx_minor) break; minordev++; prevxxp = &xxp->xx_next; } *devp = makedevice(getmajor(*devp), minordev) } else minordev = getminor(*devp); /* Allocate our private per-Stream data structure. */ if ((xxp = kmem_alloc(sizeof (struct xxstr), KM_SLEEP)) == NULL) return (ENOMEM); /* Point q_ptr at it. */ rq->q_ptr = WR(rq)->q_ptr = (char *) xxp; /* Initialize it. */ xxp->xx_minor = minordev; xxp->xx_timeoutid = 0; xxp->xx_rq = rq; /* Link new entry into the list of active entries. */ xxp->xx_next = *prevxxp; *prevxxp = xxp; /* Enable xxput() and xxsrv() procedures on this queue. */ qprocson(rq); return (0); } static xxclose(rq, flag, credp)(rq, xxp->xx_timeoutid); xxp->xx_timeoutid = 0; } /* xxwput(wq, mp) queue_t *wq; mblk_t *mp; { struct xxstr *xxp = (struct xxstr *)wq->q_ptr; /* write your code here */ /* *** Sacha's Comments *** broken */ freemsg(mp); mp = NULL; if (mp != NULL) putnext(wq, mp); } static xxwsrv(wq) queue_t *wq; { mblk_t *mp; struct xxstr *xxp; xxp = (struct xxstr *) wq->q_ptr; while (mp = getq(wq)) { /* write your code here */ */ }
Example 12–2 is a sample multithreaded, loadable STREAMS module. The module MT design is a relatively simple one, based on a per queue-pair inner perimeter plus an outer perimeter. The inner perimeter protects per-instance data structure (accessed through the q_ptr field) and the module global data is protected by the outer perimeter. The outer perimeter is configured so that the open and close routines have exclusive access to the outer perimeter. This is necessary because they both modify the global-linked list of instances. Other routines that modify global data are run as qwriter(9F) callbacks, giving them exclusive access to the whole module.
/* * Example SunOS 5 multi-threaded STREAMS module. * Using a per-queue-pair inner perimeter plus an outer perimeter. */ #include <sys/types.h> #include <sys/errno.h> #include <sys/stropts.h> #include <sys/stream.h> #include <sys/strlog.h> #include <sys/cmn_err.h> #include <sys/kmem.h> #include <sys/conf.h> #include <sys/ksynch.h> #include <sys/modctl.h> #include <sys/stat.h> #include <sys/ddi.h> #include <sys/sunddi.h> /* * Function prototypes. */ static int xxopen(queue_t *, dev_t *, int, int, cred_t *); static int xxclose(queue_t *, int, cred_t *); static int xxwput(queue_t *, mblk_t *); static int xxwsrv(queue_t *); static void xxwput_ioctl(queue_t *, mblk_t *); static int xxrput(queue_t *, mblk_t *); static void xxtick(caddr_t); /* * Streams Declarations */ static struct module_info xxm_info = { 99, /* mi_idnum */ “xx”, /* mi_idname */ 0, /* mi_minpsz */ INFPSZ, /* mi_maxpsz */ 0, /* mi_hiwat */ 0 /* mi_lowat */ }; /* * Define the read-side qinit structure */ static struct qinit xxrinit = { xxrput, /* qi_putp */ NULL, /* qi_srvp */ xxopen, /* qi_qopen */ xxclose, /* qi_qclose */ NULL, /* qi_qadmin */ &xxm_info, /* qi_minfo */ NULL /* qi_mstat */ }; /* * Define the write-side qinit structure */ static struct qinit xxwinit = { xxwput, /* qi_putp */ xxwsr, /* qi_srvp */ NULL, /* qi_qopen */ NULL, /* qi_qclose */ NULL, /* qi_qadmin */ &xxm_info, /* qi_minfo */ NULL /* qi_mstat */ }; static struct streamtab xxstrtab = { &xxrini, /* st_rdinit */ &xxwini, /* st_wrinit */ NULL, /* st_muxrinit */ NULL /* st_muxwrinit */ }; /* * define the fmodsw structure. */ static struct fmodsw xx_fsw = { “xx”, /* f_name */ &xxstrtab, /* f_str */ (D_NEW|D_MP|D_MTQPAIR|D_MTOUTPERIM|D_MTOCEXCL) /* f_flag */ }; /* * Module linkage information for the kernel. */ static struct modlstrmod modlstrmod = { &mod_strmodops, /* Type of module; a STREAMS module */ “xx module”, /* Module name */ &xx_fsw, /* fmodsw */ }; static struct modlinkage modlinkage = { MODREV_1, &modlstrmod, NULL }; /* * Module private data structure. One is allocated per stream. */ struct xxstr { struct xxstr *xx_next; /* pointer to next in list */ queue_t *xx_rq; /* read side queue pointer */ int xx_timeoutid; /* id returned from timeout() */ }; /* * Linked list of opened stream xxstr structures and other module * global data. Protected by the outer perimeter. */ static struct xxstr *xxup = NULL; static int some_module_global_data; /* * Module Config entry points */ int _init(void) { return (mod_install(&modlinkage)); } int _fini(void) { return (mod_remove(&modlinkage)); } int _info(struct modinfo *modinfop) { return (mod_info(&modlinkage, modinfop)); } static int xxopen(queue_t *rq,dev_t *devp,int flag,int sflag, cred_t *credp) { struct xxstr *xxp; /* If this stream already open - we're done. */ if (rq->q_ptr) return (0); /* We must be a module */ if (sflag != MODOPEN) return (EINVAL); /* * The perimeter flag D_MTOCEXCL implies that the open and * close routines have exclusive access to the module global * data structures. * * Allocate our private per-stream data structure. */ xxp = kmem_alloc(sizeof (struct xxstr),KM_SLEEP); /* Point q_ptr at it. */ rq->q_ptr = WR(rq)->q_ptr = (char *) xxp; /* Initialize it. */ xxp->xx_rq = rq; xxp->xx_timeoutid = 0; /* Link new entry into the list of active entries. */ xxp->xx_next = xxup; xxup = xxp; /* Enable xxput() and xxsrv() procedures on this queue. */ qprocson(rq); /* Return success */ return (0); } static int xxclose(WR(rq), xxp->xx_timeoutid); xxp->xx_timeoutid = 0; } /* * D_MTOCEXCL implies that the open and close routines have * exclusive access to the module global data structures. * * int xxrput(queue_t, *wq, mblk_t *mp) { struct xxstr *xxp = (struct xxstr *)wq->q_ptr; /* * Write your code here. Can read “some_module_global_data” * since we have shared access at the outer perimeter. */ putnext(wq, mp); } /* qwriter callback function for handling M_IOCTL messages */ static void xxwput_ioctl(queue_t, *wq, mblk_t *mp) { struct xxstr *xxp = (struct xxstr *)wq->q_ptr; /* * Write your code here. Can modify “some_module_global_data” * since we have exclusive access at the outer perimeter. */ mp->b_datap->db_type = M_IOCNAK; qreply(wq, mp); } static xxwput(queue_t *wq, mblk_t *mp) { struct xxstr *xxp = (struct xxstr *)wq->q_ptr; if (mp->b_datap->db_type == M_IOCTL) { /* M_IOCTL will modify the module global data */ qwriter(wq, mp, xxwput_ioctl, PERIM_OUTER); return; } /* * Write your code here. Can read “some_module_global_data” * since we have exclusive access at the outer perimeter. */ putnext(wq, mp); } static xxwsrv(queue_t wq) { mblk_t *mp; struct xxstr *xxp= (struct xxstr *) wq->q_ptr; while (mp = getq(wq)) { /* * Write your code here. Can read “some_module_global_data” * since we have exclusive access at the outer perimeter. */. Can read “some_module_global_data” * since we have shared access at the outer perimeter. */ }.
This section describes an example of multiplexer construction and usage. Multiple upper and lower streams interface to the multiplexer driver.
The Ethernet, LAPB, and IEEE 802.2 device drivers terminate links to other nodes. The multiplexer driver is an Internet Protocol (IP) multiplexer that switches data among the various nodes or sends data upstream to users in the system. The net modules typically provide a convergence function that matches the multiplexer driver and device driver interface.
Streams A, B, and C are opened by the process, and modules are pushed as needed. Two upper streams are opened to the IP multiplexer. The rightmost stream represents multiple streams, each connected to a process using the network. The stream second from the right provides a direct path to the multiplexer for supervisory functions. The control stream, leading to a process, sets up and supervises this configuration. It is always directly connected to the IP driver. Although not shown, modules can be pushed on the control stream.
After the streams are opened, the supervisory process typically transfers routing information to the IP drivers (and any other multiplexers above the IP), and initializes the links. As each link becomes operational, its stream is connected below the IP driver. If a more complex multiplexing configuration is required, the IP multiplexer stream with all its connected links can be connected below another multiplexer driver.
This section contains an example of a multiplexing driver that implements an N-to-1 configuration. This configuration might be used for terminal windows, where each transmission to or from the terminal identifies the window. This resembles a typical device driver, with two differences: the device-handling functions are performed by a separate driver, connected as a lower stream, and the device information (that is, relevant user process) is contained in the input data rather than in an interrupt call.
Each upper stream is created by open(2). A single lower stream is opened and then it is linked by use of the multiplexing facility. This lower stream might connect to the TTY driver. The implementation of this example is a foundation for an M-to-N multiplexer.
As in the loop-around driver (Chapter 9, STREAMS Drivers), flow control requires the use of standard and special code because physical connectivity among the streams is broken at the driver. Different approaches are used for flow control on the lower stream, for messages coming upstream from the device driver, and on the upper streams, for messages coming downstream from the user processes.
The code presented here for the multiplexing driver represents a single-threaded, uniprocessor implementation. See Chapter 12, Multithreaded STREAMS for details on multiprocessor and multithreading issues such as locking for data corruption and to prevent race conditions.
Example 13–2 is of multiplexer declarations:
#include <sys/types.h> #include <sys/param.h> #include <sys/stream.h> #include <sys/stropts.h> #include <sys/errno.h> #include <sys/cred.h> #include <sys/ddi.h> #include <sys/sunddi.h> static int muxopen (queue_t*, dev_t*, int, int, cred_t*); static int muxclose (queue_t*, int, cred_t*); static int muxuwput (queue_t*, mblk_t*); static int muxlwsrv (queue_t*); static int muxlrput (queue_t*, mblk_t*); static int muxuwsrv (queue_t*); static struct module_info info = { 0xaabb, "mux", 0, INFPSZ, 512, 128 }; static struct qinit urinit = { /* upper read */ NULL, NULL, muxopen, muxclose, NULL, &info, NULL }; static struct qinit uwinit = { /* upper write */ muxuwput, muxuwsrv, NULL, NULL, NULL, &info, NULL }; static struct qinit lrinit = { /* lower read */ muxlrput, NULL, NULL, NULL, NULL, &info, NULL }; static struct qinit lwinit = { /* lower write */ NULL, muxlwsrv, NULL, NULL, NULL, &info, NULL }; struct streamtab muxinfo = { &urinit, &uwinit, &lrinit, &lwinit }; struct mux { queue_t *qptr; /* back pointer to read queue */ int bufcid; /* bufcall return value */ }; extern struct mux mux_mux[]; extern int mux_cnt; /* max number of muxes */ static queue_t *muxbot; /* linked lower queue */ static int muxerr; /* set if error of hangup on lower strm */
The four streamtab entries correspond to the upper read, upper write, lower read, and lower write qinit structures. The multiplexing qinit structures replace those in each lower stream head (in this case there is only one) after the I_LINK has concluded successfully. In a multiplexing configuration, the processing performed by the multiplexing driver can be partitioned between the upper and lower queues. There must be an upper-stream write put procedure and lower-stream read put procedure. If the queue procedures of the opposite upper/lower queue are not needed, the queue can be skipped, and the message put to the following queue.
In the example, the upper read-side procedures are not used. The lower-stream read queue put procedure transfers the message directly to the read queue upstream from the multiplexer. There is no lower write put procedure because the upper write put procedure directly feeds the lower write queue downstream from the multiplexer.
The driver uses a private data structure, mux. mux_mux[dev] points back to the opened upper read queue. This is used to route messages coming upstream from the driver to the appropriate upper queue. It is also used to find a free major or minor device for a CLONEOPEN driver open case.
Example 13–3, the upper queue open, contains the canonical driver open code.
static int muxopen(queue_t *q, dev_t *devp, int flag, int sflag, cred_t *credp) { struct mux *mux; minor_t device; if (q->q_ptr) return(EBUSY); if (sflag == CLONEOPEN) { for (device = 0; device < mux_cnt; device++) if (mux_mux[device].qptr == 0) break; *devp=makedevice(getmajor(*devp), device); } else { device = getminor(*devp); if (device >= mux_cnt) return ENXIO; } mux = &mux_mux[device]; mux->qptr = q; q->q_ptr = (char *) mux; WR(q)->q_ptr = (char *) mux; qprocson(q); return (0); }
muxopen checks for a clone or ordinary open call. It initializes q_ptr to point at the mux_mux[] structure.
The core multiplexer processing is as follows: downstream data written to an upper stream is queued on the corresponding upper write message queue if the lower stream is flow controlled. This allows flow control to propagate toward the stream head for each upper stream. A lower write service procedure, rather than a write put procedure, is used so that flow control, coming up from the driver below, may be handled.
On the lower read side, data coming up the lower stream are passed to the lower read put procedure. The procedure routes the data to an upper stream based on the first byte of the message. This byte holds the minor device number of an upper stream. The put procedure handles flow control by testing the upper stream at the first upper read queue beyond the driver.
muxuwput, the upper-queue write put procedure, traps ioctl calls, in particular I_LINK and I_UNLINK:
static int /* * This is our callback routine used by bufcall() to inform us * when buffers become available */ static void mux_qenable(long ql) { queue_t *q = (queue_t *ql); struct mux *mux; mux = (struct mux *)(q->q_ptr); mux->bufcid = 0; qenable(q); } muxuwput(queue_t *q, mblk_t *mp) { struct mux *mux; mux = (struct mux *)q->q_ptr; switch (mp->b_datap->db_type) { case M_IOCTL: { struct iocblk *iocp; struct linkblk *linkp; /* * ioctl. Only channel 0 can do ioctls. Two * calls are recognized: LINK, and UNLINK */ if (mux != mux_mux) goto iocnak; iocp = (struct iocblk *) mp->b_rptr; switch (iocp->ioc_cmd) { case I_LINK: /* *Link. The data contains a linkblk structure *Remember the bottom queue in muxbot. */ if (muxbot != NULL) goto iocnak; linkp=(struct linkblk *) mp->b_cont->b_rptr; muxbot = linkp->l_qbot; muxerr = 0; mp->b_datap->db_type = M_IOCACK; iocp->ioc_count = 0; qreply(q, mp); break; case I_UNLINK: /* * Unlink. The data contains a linkblk struct. * Should not fail an unlink. Null out muxbot. */ linkp=(struct linkblk *) mp->b_cont->b_rptr; muxbot = NULL; mp->b_datap->db_type = M_IOCACK; iocp->ioc_count = 0; qreply(q, mp); break; default: iocnak: /* fail ioctl */ mp->b_datap->db_type = M_IOCNAK; qreply(q, mp); } break; } case M_FLUSH: if (*mp->b_rptr & FLUSHW) flushq(q, FLUSHDATA); if (*mp->b_rptr & FLUSHR) { *mp->b_rptr &= ~FLUSHW; qreply(q, mp); } else freemsg(mp); break; case M_DATA:{ */ * Data. If we have no lower queue --> fail * Otherwise, queue the data and invoke the lower * service procedure. mblk_t *bp; if (muxerr || muxbot == NULL) goto bad; if ((bp = allocb(1, BPRI_MED)) == NULL) { putbq(q, mp); mux->bufcid = bufcall(1, BPRI_MED, mux_qenable, (long)q); break; } *bp->b_wptr++ = (struct mux *)q->ptr - mux_mux; bp->b_cont = mp; putq(q, bp); break; } default: bad: /* * Send an error message upstream. */ mp->b_datap->db_type = M_ERROR; mp->b_rptr = mp->b_wptr = mp->b_datap->db_base; *mp->b_wptr++ = EINVAL; qreply(q, mp); } }
First, there is a check to enforce that the stream associated with minor device 0 will be the single, controlling stream. The ioctls are only accepted on this stream. As described previously, a controlling stream is the one that issues the I_LINK. There should be only a single control stream. I_LINK and I_UNLINK include a linkblk structure containing the following fields:
l_qtop is the upper write queue from which the ioctl(2) comes. It always equals q for an I_LINK, and NULL for I_PLINK.
l_qbot is the new lower write queue. It is the former stream head write queue and is where the multiplexer gets and puts its data.
l_index is a unique (system-wide) identifier for the link. It can be used for routing or during selective unlinks. Since the example only supports a single link, l_index is not used.
For I_LINK, l_qbot is saved in muxbot and a positive acknowledgement is generated. From this point on, until an I_UNLINK occurs, data from upper queues will be routed through muxbot. Note that when an I_LINK, is received, the lower stream has already been connected. This enables the driver to send messages downstream to perform any initialization functions. Returning an M_IOCNAK message (negative acknowledgement) in response to an I_LINK causes the lower stream to be disconnected.
The I_UNLINK handling code nulls out muxbot and generates a positive acknowledgement. A negative acknowledgement should not be returned to an I_UNLINK. The stream head ensures that the lower stream is connected to a multiplexer before sending an I_UNLINK M_IOCTL.
Drivers can handle the persistent link requests I_PLINK and I_PUNLINK ioctl(2) in the same manner, except that l_qtop in the linkblk structure passed to the put routine is NULL instead of identifying the controlling stream.
muxuwput handles M_FLUSH messages as a normal driver does, except that there are no messages queued on the upper read queue, so there is no need to call flushq if FLUSHR is set.
M_DATA messages are not placed on the lower write message queue. They are queued on the upper write message queue. When flow control subsides on the lower stream, the lower service procedure, muxlwsrv, is scheduled to start output. This is similar to starting output on a device driver.
The following example shows the code for the upper multiplexer write service procedure:
static int muxuwsrv(queue_t *q) { mblk_t *mp; struct mux *muxp; muxp = (struct mux *)q->q_ptr; if (!muxbot) { flushq(q, FLUSHALL); return (0); } if (muxerr) { flushq(q, FLUSHALL); return (0); } while (mp = getq(q)) { if (canputnext(muxbot)) putnext(muxbot, mp); else { putbq(q, mp); return(0); } } return (0); }
As long as there is a stream still linked under the multiplexer and there are no errors, the service procedure will take a message off the queue and send it downstream, if flow control allows.
m.
The lower (linked) queue read put procedure is shown in the following example:
static int muxlrput(queue_t *q, mblk_t *mp) { queue_t *uq; int device; if(muxerr) { freemsg(mp); return (0); } switch(mp->b_datap->db_type) { case M_FLUSH: /* * Flush queues. NOTE: sense of tests is reversed * since we are acting like a "stream head" */ if (*mp->b_rptr & FLUSHW) { *mp->b_rptr &= ~FLUSHR; qreply(q, mp); } else freemsg(mp); break; case M_ERROR: case M_HANGUP: muxerr = 1; freemsg(mp); break; case M_DATA: /* * Route message. First byte indicates * device to send to. No flow control. * * Extract and delete device number. If the * leading block is now empty and more blocks * follow, strip the leading block. */ device = *mp->b_rptr++; /* Sanity check. Device must be in range */ if (device < 0 || device >= mux_cnt) { freemsg(mp); break; } /* * If upper stream is open and not backed up, * send the message there, otherwise discard it. */ uq = mux_mux[device].qptr; if (uq != NULL && canputnext(uq)) putnext(uq, mp); else freemsg(mp); break; default: freemsg(mp); } return (0); }
muxlrput receives messages from the linked stream. In this case, it is acting as a stream head and handles M_FLUSH messages. The code is the reverse of a driver, handling M_FLUSH messages from upstream. There is no need to flush the read queue because no data is ever placed in it.
muxlrput also handles M_ERROR and M_HANGUP messages. If one is received, it locks up the upper streams by setting muxerr.
M_DATA messages are routed by checking the first data byte of the message. This byte contains the minor device of the upper stream. Several checks examine whether:
The device is in range
The upper stream is open
The upper stream is full
This multiplexer does not support flow control on the read side; it is merely a router. If the message passes all checks, it is put to the proper upper queue. Otherwise, the message is discarded.
The upper stream close routine clears the mux entry so this queue will no longer be found. Outstanding bufcalls are not cleared.
/* * Upper queue close */ static int muxclose(queue_t *q, int flag, cred_t *credp) { struct mux *mux; mux = (struct mux *) q->q_ptr; qprocsoff(q); if (mux->bufcid != 0) unbufcall(mux->bufcid); mux->bufcid = 0; mux->ptr = NULL; q->q_ptr = NULL; WR(q)->q_ptr = NULL; return(0); }.) | http://docs.oracle.com/cd/E19253-01/816-4855/part2-1/index.html | CC-MAIN-2015-18 | refinedweb | 5,545 | 54.83 |
[
]
Dag H. Wanvik updated DERBY-3673:
---------------------------------
Derby Info: [Patch Available]
Fix Version/s: 10.5.0.0
>.stat
>
>
> Derby current does not have dictionary information about legal users.
> Authentication is configurable as being derby internal, LDAP based, or
> user supplied.
> SQL specifies that user ids and role names go in the same namespace
> (authorization ids). Therefore, at role creation time, a new role
> name should be checked against legal users for this database, and be
> defined if there is already a user id by that name.
> Unfortunately, since there is currently no reliable dictionary
> information about legal users, the best we can do presently is perform
> heuristic checks that a proposed role id is not already a user id.
> Since the check can not not reliable, we should also add a check to
> prohibit conncting with a user id that is a known role id.
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online. | http://mail-archives.apache.org/mod_mbox/db-derby-dev/200805.mbox/%3C680602645.1210691935748.JavaMail.jira@brutus%3E | CC-MAIN-2017-26 | refinedweb | 166 | 61.77 |
Laravel Livewire is a library for building reactive and dynamic interfaces using Blade as your templating engine. It works by making AJAX requests to the server when a user interaction occurs and rendering the updated HTML sent to it by the server.
In this tutorial, we will build a live search page for a list of users stored in a MySQL database. The reactive parts of our interface such as changing loading state, dynamically showing and hiding parts of the web page, etc will be handled by Livewire.
Pre-requisites
To complete this tutorial, you will need the following:
- Composer, npm, and Laravel (version 8.26.1 is used in this article) installed on your computer.
- MySQL installed (for FULLTEXT indexes support).
Set up Laravel, Livewire, and Tailwind CSS
To get started, generate a fresh Laravel application with the Laravel CLI and enter the directory with the commands below:
$ laravel new livewire-search && cd livewire-search
Next, install Livewire as a composer dependency by running the command below in the project folder.
$ composer require livewire/livewire
Since the project doesn’t require any special configuration for Tailwind CSS, and Tailwind CSS is the only npm dependency it needs, we will build it directly using
npx. Alternatively, you can link to Tailwind CSS directly from the CDN. For production usage, the documentation recommends that you set it up as a PostCSS plugin. Generate the Tailwind CSS files by running:
$ npx tailwindcss-cli@latest build -o public/css/tailwind.css
The command above will create a
tailwind.css file in the
public/css folder. We can then import it to our Blade templates using HTML
<link> tags as we would any other stylesheet.
Set up database migrations and FULLTEXT Indexes
At this point, ensure that you have a new MySQL database set up for the project and populate the project’s env file with the database credentials (database name, username, and password). Next, we will modify the user migrations file that comes built-in with Laravel to add a
bio field and a FULLTEXT index.
The index will cover the name, email, and bio of a given user. That way, any search we perform will scan through all three fields. Open the user migrations file (you can find it at
database/migrations/2014_10_12_000000_create_users_table.php) and replace its content with the following:
<?php use Illuminate\Database\Migrations\Migration; use Illuminate\Database\Schema\Blueprint; use Illuminate\Support\Facades\DB; use Illuminate\Support\Facades\Schema; class CreateUsersTable extends Migration { public function up() { Schema::create('users', function (Blueprint $table) { $table->id(); $table->string('name'); $table->string('email')->unique(); $table->mediumText('bio'); $table->timestamp('email_verified_at')->nullable(); $table->string('password'); $table->rememberToken(); $table->timestamps(); }); DB::statement( 'ALTER TABLE users ADD FULLTEXT fulltext_index(name, email, bio)' ); } public function down() { Schema::dropIfExists('users'); } }
Note that we are using raw SQL queries to add the index above. That is because Laravel does not have built-in support for FULLTEXT indexes as they are MySQL specific.
While at it, let’s also set up the factories and seeders for the user table whose migration we created above, that way, we can focus on getting our code to work instead of the data in the database.
Open the generated factory file at
database/factories/UserFactory.php (feel free to create it if it doesn’t exist) and replace the
definition method with the code block below:
<?php public function definition() { return [ 'name' => $this->faker->name, 'email' => $this->faker->unique()->safeEmail, 'bio' => $this->faker->text(200), 'email_verified_at' => now(), 'password' => Hash::make("password"), 'remember_token' => Str::random(10), ]; }
Next, direct Laravel to generate users based on the factory above, by changing the
run method of the database seeder class (
/database/seeders/DatabaseSeeder.php) to the one below.
public function run() { \App\Models\User::factory(50)->create(); }
The code above will generate and add 50 users to the
users table when the seeder is run. Apply the migrations and the seeders by running the set of commands below.
$ php artisan migrate && php artisan db:seed
Search your database with Traits
We will make our database search a bit flexible by using Traits. Traits help PHP developers achieve code reuse while working around some of the limitations of PHP’s single inheritance model.
For our use case, we will create a
Search trait that we can use from any Laravel model by adding a
$searchable field to the model. This should represent the fields that have been added to a FULLTEXT index. Then, create a new file, named
Search.php, file in
app/Models, and add the trait implementation shown below:
<?php namespace App\Models; trait Search { private function buildWildCards($term) { if ($term == "") { return $term; } // Strip MySQL reserved symbols $reservedSymbols = ['-', '+', '<', '>', '@', '(', ')', '~']; $term = str_replace($reservedSymbols, '', $term); $words = explode(' ', $term); foreach($words as $idx => $word) { // Add operators so we can leverage the boolean mode of // fulltext indices. $words[$idx] = "+" . $word . "*"; } $term = implode(' ', $words); return $term; } protected function scopeSearch($query, $term) { $columns = implode(',', $this->searchable); // Boolean mode allows us to match john* for words starting with john // () $query->whereRaw( "MATCH ({$columns}) AGAINST (? IN BOOLEAN MODE)", $this->buildWildCards($term) ); return $query; } }
Here, we’ve split the search operation into two methods:
buildWildCards and
scopeSearch.
buildWildCards cleans up the search term by:
- Ensuring that there’s no MySQL reserved symbol present in the search term. You can modify it to escape them instead of replacing them if they turn out to be important to the search term for you.
- Adding MySQL wildcard characters (
+and
*) to the search term. This helps it to take advantage of MySQL’s boolean mode.
scopeSearch on the other hand is a Laravel local scope (identified by the “scope” prefix). Models are automatically searchable via a static
search method once they:
- Import and use the
Searchtrait.
- Declare a
$searchablearray that contains the columns that should be searched e.g the
Usermodel below searches the
name,
biocolumns.
Next, bring the
Search trait into the
User model class, and set up the
$searchable fields as shown below:
<?php namespace App\Models; /* --- existing code here --- */ class User extends Authenticatable { use HasFactory, Notifiable; use Search; // Use the search trait we created earlier /* --- existing code here --- */ protected $searchable = [ 'name', 'email', 'bio', ]; /* -- rest of user class code --- */ }
Get familiar with Livewire Components
To initialize Livewire in your Laravel app, you need to add the
@livewireStyles and
@livewireScripts directive within the
<head> tag, and at the end of the
<body> tag respectively, in your app layout. So, in
resources/views/welcome.blade.php add the code below:
<!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>@yield('title', 'My cool Livewire App')</title> @livewireStyles </head> <body> <!-- other app content goes here--> @livewireScripts </body> </html>
Livewire components are meant to live in individual template files rendered within Blade. To render a component, use the
@livewire directive or the
livewire tag, e.g:
<div> @livewire('search-users') </div> <!-- This also works --> <div> <livewire:search-users /> </div>
Livewire components are typically attached to a “component class” that performs the necessary computation and holds the data needed to render the component. The classes live in
app/Http/Livewire and are automatically generated when you generate a component with
php artisan make:livewire.
Traditional JavaScript concepts like data binding and event handling happen in Livewire using the
wire: attribute. For instance, The snippet below binds the value of the text field to a
$name variable in the component class, and the value of
$name is rendered within the
h1 tag as it changes.
<input wire: <h1>{{ $message }}</h1>
You can learn more about Livewire’s features and how it handles traditional Javascript operations from the Livewire documentation.
Hook Livewire Components to Blade Templates
Armed with some knowledge of Livewire, we can now set it up in our application. Create the
SearchUser component by running the artisan command below, in the project folder.
$ php artisan make:livewire SearchUser
The command creates two files in our project:
app/Http/Livewire/SearchUser.php: The component class that interacts with our database and prepares the data to be rendered.
resources/views/livewire/search-user.blade.php: The component template that holds the UI for the component.
Open the component class (
app/Http/Livewire/SearchUser.php) and add the code below to it:
<?php namespace App\Http\Livewire; use App\Models\User; use Livewire\Component; class SearchUser extends Component { public $term = ""; public function render() { sleep(1); $users = User::search($this->term)->paginate(10); $data = [ 'users' => $users, ]; return view('livewire.search-user', $data); } }
The code above calls the
search method on the
User model class (which
User inherited from the
Search trait) and paginates the result. The result is then returned with the component template in the same way we would do it from a regular Laravel controller.
Note that we have added a
sleep call to the code above. This is to delay the code execution to simulate a page load. This delay will help us see Livewire’s loading state in action in our development environment.
Next, open the component template (
resources/views/livewire/search-user.blade.php) and add the code block below to it:
<div> <div class="px-4 space-y-4 mt-8"> <form method="get"> <input class="border-solid border border-gray-300 p-2 w-full md:w-1/4" type="text" placeholder="Search Users" wire: </form> <div wire:loading>Searching users...</div> <div wire:loading.remove> <!-- notice that $term is available as a public variable, even though it's not part of the data array --> @if ($term == "") <div class="text-gray-500 text-sm"> Enter a term to search for users. </div> @else @if($users->isEmpty()) <div class="text-gray-500 text-sm"> No matching result was found. </div> @else @foreach($users as $user) <div> <h3 class="text-lg text-gray-900 text-bold">{{$user->name}}</h3> <p class="text-gray-500 text-sm">{{$user->email}}</p> <p class="text-gray-500">{{$user->bio}}</p> </div> @endforeach @endif @endif </div> </div> <div class="px-4 mt-4"> {{$users->links()}} </div> </div>
In the template, we use Livewire’s data-binding functionality to map the
$term variable in the component class to the search field. We’ve also used
wire:loading to show the
div when our data is loading. Finally, we used
wire:loading.remove to hide the div containing the search results when loading.
Set up our application’s UI
Now, let’s add the Livewire component by first initializing Livewire in the
welcome.blade.php file that was automatically generated by Laravel, and rendering the component within the
welcome template. While we’re at it, we will also include the Tailwind CSS file we generated while setting up the application.
Open the
welcome.blade.php file and replace its content with the code below:
<!DOCTYPE html> <html lang="{{ str_replace('_', '-', app()->getLocale()) }}"> <head> <meta charset="utf-8"> <meta name="viewport" content="width=device-width, initial-scale=1"> <title>uSearch</title> <link href="/css/tailwind.css" rel="stylesheet"> <link href=";600;700&display=swap" rel="stylesheet"> <style> body { font-family: 'Nunito'; } </style> @livewireStyles </head> <body> <header class="bg-gray-900 text-gray-200 w-full py-4 px-4"> uSearch </header> <livewire:search-user/> @livewireScripts </body> </html>
Testing
At this point, you are now ready to test out the application. Start the Laravel server by running
php artisan serve in a terminal and the command should launch the server on. Visit the application URL () in your browser to see the home page below:
Enter a search query using the input field on the page and it should return the list of users whose name, email, or bio matches the query you entered as shown:
You can also open up your browser’s developer console while searching to see how Livewire moves the network requests between your application’s frontend and the server.
Conclusion
Laravel Livewire presents an approach to building dynamic interfaces that is quite different from frontend frameworks like Vue and React, one that doesn’t require you to leave the comfort of PHP and HTML/Blade templates. The complete source code for this tutorial is available on GitLab. Feel free to raise a GitLab issue if you notice an issue.
Michael Okoko is a CS undergrad at Obafemi Awolowo University. He loves open source and is mostly interested in Linux, Golang, PHP, and fantasy novels! You can reach him via: | https://www.twilio.com/blog/build-live-search-box-laravel-livewire-mysql | CC-MAIN-2021-10 | refinedweb | 2,066 | 52.8 |
GCC(1) GNU GCC(1)
mgcc, c++ - GNU project C and C++ compiler.
When you invoke GCC, it normally does preprocessing, compi- lation, com- piler dif- ferentforce-mem, -fstrength-reduce, -Wformat and so on. Most of these have both positive and negative forms; the negative form of -ffoo would be -fno-foo. This manual documents only one of these two forms, whichever one is not the default. gcc-3.4.6 2012-02-20 2012-02-20 -fkeep-inline-functions gcc-3.4.6 2012-02-20 3 GCC(1) GNU GCC(1) profile 2012-02-20 2012-02-20 5 GCC(1) GNU GCC(1) prioritize 2012-02-20 Intel 960 Options -mcpu gcc-3.4.6 2012-02-20 2012-02-20 2012-02-20 2012-02-20 2012-02-20 11 GCC(1) GNU GCC(1) -x none Turn off any specification of a language, so that subse- quent pro- duced out- put. gcc-3.4.6 2012-02-20 com- mand gcc-3.4.6 2012-02-20 2012-02-20,. gnu89 Default, ISO C90 plus GNU extensions (including some gcc-3.4.6 2012-02-20 2012-02-20 16 GCC(1) GNU GCC(1) environ- ment. This implies -fbuiltin. A hosted environment is one in which the entire standard library is available, and in which "main" has a return type of "int". Exam- ples gcc-3.4.6 2012-02-20 sup- port- ness. gcc-3.4.6 2012-02-20. con- forms most closely to the C++ ABI specification. gcc-3.4.6 2012-02-20 19 GCC(1) GNU GCC(1) allo- cated. The C++ standard allows an implementation to omit creat- ing a temporary which is only used to initialize another object of the same type. Specifying this option disables gcc-3.4.6 2012-02-20, dif- ferently. gcc-3.4.6 2012-02-20 2012-02-20 2012-02-20 23 GCC(1) GNU GCC(1) its size is a multiple of the byte size on your platform; that will cause G++ and other compilers to layout "B" identically. *) {} gcc-3.4.6 2012-02-20 2012-02-20 25 GCC(1) GNU GCC(1) must return an object. Also warn about violations of the following style guide- lines. -Wno-deprecated (C++ only) Do not warn about usage of deprecated features. -Wno-non-template-friend inter- preted 2012-02-20 (NOTE: gcc-3.4.6 2012-02-20-exceptions Enable syntactic support for structured exception han- dling in Objective-C, similar to what is offered by C++ and Java. Currently, this option is only available in conjunction with the NeXT runtime on Mac OS X 10.3 and later. gcc-3.4.6 2012-02-20 28 GCC(1) GNU GCC(1) gcc-3.4.6 2012-02-20 29 GCC(1) GNU GCC(1) used on Mac OS X 10.3 (Panther) and later systems, due to additional functionality needed in the (NeXT) Objective-C runtime. * As mentioned above, the new exceptions do not sup- port exe- cution. -freplace-objc-classes Emit a special marker instructing ld(1) not to stati- cally 2012-02-20. gcc-3.4.6 2012-02-20 31 GCC(1) GNU GCC(1) dig- est 2012-02-20 32 GCC(1) GNU GCC(1) version of the ISO 2012-02-20 33 GCC(1) GNU GCC(1) than warnings. 2012-02-20 Enable -Wformat plus format checks not included in -Wformat. Currently. gcc-3.4.6 2012-02-20 35 GCC(1) GNU GCC(1) -Winit-self (C, C++, and Objective-C only) Warn about uninitialized variables which are initialized with themselves.. parentheses Warn if parentheses are omitted in certain contexts, such as when there is an assignment in a context where a truth value is expected, or when operators are nested whose precedence people often get confused about. Also warn about constructions where there may be confu- sion to which "if" statement an "else" branch belongs. Here is an example of such a case: gcc-3.4.6 2012-02-20 (); } } -Wsequence-point Warn about code that may have undefined semantics because of violations of sequence point rules in the C standard. gcc-3.4.6 2012-02-20 <>. -Wreturn-type Warn whenever a function is defined with a return-type that defaults to "int". Also warn about any "return" statement with no return-value in a function whose return-type is not "void". For C++, a function without return type always produces a diagnostic message, even when -Wno-return-type is specified. The only exceptions are main and functions defined in system headers. . -Wswitch-default gcc-3.4.6 2012-02-20. gcc-3.4.6 2012-02-20 2012-02-20 40 GCC(1) GNU GCC compi- lation.strict-aliasing This option is only active when -fstrict-aliasing is active. It warns about code which might break the strict aliasing rules that the compiler is using 2012-02-20 41 GCC(1) GNU GCC(1) consider questionable, supported, but the newer name is more descrip- tive.) Print extra warning messages for these events: * A function can return either with or without a value. (Falling off the end of the function body is considered returning without >=. * A comparison like x<=y<=z appears; this is equivalent to (x<=y ? 1 : 0) <= z, which is a dif- ferent interpretation from that of ordinary mathematical notation. * Storage-class specifiers like "static" are not the first things in a declaration. According to the C Standard, this usage is obsolescent. * The return type of a function has a type qualifier such as "const". Such a type qualifier has no effect, since the value returned by a function is not an lvalue. (But don't warn about the GNU 2012-02-20ors. * (C++ only) Ambiguous virtual bases. * (C++ only) Subscripting an array which has been declared register. * (C++ only) Taking the address of a variable which has been declared register. * (C++ only) A base class is not initialized in a derived class' copy constructor. gcc-3.4.6 2012-02-20 43 GCC(1) GNU GCC(1) com- parisons. traditional and ISO C. Also warn about ISO C constructs that have no traditional C equivalent, and/or 2012-02-20 2012-02-20 45 GCC(1) GNU GCC(1) "PARAMS" and "VPARAMS". This warning is also bypassed for nested functions because that feature is already a GCC extensionendif-labelspointer-arith Warn about anything that depends on the ``size of'' a function type or of "void". GNU C assigns these types a size of 1, for convenience in calculations with "void *" pointers and pointers to functions. -Wbad-function-cast (C only) Warn whenever a function call is cast to a non-matching type. For example, warn if "int malloc()" is cast to "anything *". -Wbounded 2012-02-20 46 GCC(1) GNU GCC(1) additional checks are performed on sscanf(3) format strings. The %s fields are checked for incorrect bound lengths by checking the size of the buffer associated with the format argument. boun- daries. conver- sions changing the width or signedness of a fixed point argument except when the same as the default promotion. Also, warn if a negative integer constant expression is implicitly converted to an unsigned type. For example, warn about the assignment gcc-3.4.6 2012-02-20 declaration. Do so even if the definition itself 2012-02-20 48 GCC(1) GNU GCC(1) get a warning for "main" in hosted C environments. -Wmissing-format-attribute If -Wformat is enabled, also warn about functions which might be candidates for "format" attributes. Note these are only possible candidates, not absolute ones. GCC will guess that "format" attributes might be appropriate for any function that calls a function like "vprintf" or "vscanf", but this might not always be the case, and some functions for which "format" attributes are appropriate may not be detected. This option has no effect unless -Wformat is enabled (possibly by -Wall). -Wno-multichar Do not warn if a multicharacter constant ('FOOF') is used. Usually they indicate a typo in the user's code, as they have implementation-defined values, and should not be used in portable code. vari- able gcc-3.4.6 2012-02-20 49 GCC(1) GNU GCC(1) substan- tial code which checks correct functioning of the pro- gram gcc-3.4.6 2012-02-20. disabled-optimization Warn if a requested optimization pass is disabled. This warning does not generally indicate that there is 2012-02-20. for- mat used by DBX on most BSD systems. On MIPS, Alpha and System V Release 4 systems this option produces stabs gcc-3.4.6 2012-02-20 52 GCC(1) GNU GCC(1) Sys- tem. gcc-3.4.6 2012-02-20 2012-02-20). @bulCompile the source files with -fprofile-arcs plus optimization and code generation options. For test coverage analysis, use the additional -ftest-coverage option. You do not need to profile every source file in a program. vmmfu@u-3p @cvmmfu Link your object files with -lgcov or -fprofile-arcs (the latter implies the former). @dwnRun). @exoFor profile-directed optimizations, compile the source files again with the same optimization and code generation options plus -fbranch-probabilities. @fypFor span- ning, gcc-3.4.6 2012-02-20 55 GCC(1) GNU GCC(1). gcc-3.4.6 2012-02-20 56 GCC(1) GNU GCC(1). gcc-3.4.6 2012-02-20 57 GCC(1) GNU GCC(1)) gcc-3.4.6 2012-02-20 58 GCC(1) GNU GCC(1) 2012-02-20 59 GCC(1) GNU GCC(1) per- manently;. -time Report the CPU time taken by each subprocess in the com- pilation sequence. For C source files, this is the com- piler operat- ing system routines on behalf of the program. Both numbers are in seconds. gcc-3.4.6 2012-02-20 60 GCC(1) GNU GCC(1){-, instal- lation problem, cannot exec cpp0: No such file or direc- tory. any- thing else. (This is used when GCC itself is being built.) -feliminate-unused-debug-types Normally, when producing DWARF2 output, GCC will emit gcc-3.4.6 2012-02-20 61 GCC(1) GNU GCC(1) exe- cution time, without performing any optimizations that take a great deal of compilation time. -O turns on the following optimization flags: -fdefer-pop -fmerge-constants -fthread-jumps -floop-optimize -fif-conversion -fif-conversion2 -fdelayed-branch -fguess-branch-probability gcc-3.4.6 2012-02-20 62 GCC(1) GNU GCC(1) . -O2 turns on all optimization flags specified by -O. It also turns on the following optimization flags: -fforce-mem -foptimize-sibling-calls -fstrength-reduce -fcse-follow-jumps -fcse-skip-blocks -frerun-cse-after-loop -frerun-loop-opt -fgcse -fgcse-lm -fgcse-sm -fgc further optimizations designed to reduce code size. -Os disables the following optimization flags: -falign-functions -falign-jumps -falign-loops -falign-labels -freorder-blocks -fprefetch-loop-arrays If you use multiple -O options, with or without level numbers, the last such option is the one that is effec- tive. gcc-3.4.6 2012-02-20 63 GCC(1) GNU GCC(1) Force memory address constants to be copied into regis- ters before doing arithmetic on them. This may produce gcc-3.4.6 2012-02-20 64 GCC(1) GNU GCC(1) 2012-02-20 65 GCC(1) GNU GCC(1),. gcc-3.4.6 2012-02-20 66 GCC(1) GNU GCC(1). pro- grams explicitly rely on variables going to the data section. E.g., so that the resulting executable can gcc-3.4.6 2012-02-20 67 GCC(1) GNU GCC(1) find the beginning of that section and/or make assump- tions based on that. The default is -fzero-initialized-in-bss. -fstrength-reduce Perform the optimizations of loop strength reduction and elimination. gcc-3.4.6 2012-02-20 68 GCC(1) GNU GCC(1) This pass also performs global constant and copy propa- g. Enabled at levels -O, -O2, -O3, -Os. gcc-3.4.6 2012-02-20 69 GCC(1) GNU GCC(1) -fif-conversion Attempt to transform conditional jumps into branch-less equivalents. This include use of conditional moves, min, max, set flags and abs instructions, and some tricks doable by standard arithmetics. The use of con- ditional If supported for the target machine, attempt to reorder instructions to exploit instruction slots available gcc-3.4.6 2012-02-20 70 GCC(1) GNU GCC(1) instruc- tions. gcc-3.4.6 2012-02-20 71 GCC(1) GNU GCC(1) - blockcaller-saves Enable values to be allocated in registers that will be clobbered by function calls, by emitting extra instruc- tionsmove-all-movables Forces all invariant computations in loops to be moved outside the loop. gcc-3.4.6 2012-02-20 72 GCC(1) GNU GCC(1) -freduce-all-givs Forces all general-induction variables in loops to be strength-reduced. Note: When compiling programs written in Fortran, -fmove-all-movables and contact <gcc@gcc.gnu.org>, and describe how use of these options affects the performance of your produc- tion code. Examples of code that runs slower when these options are enabled are very valuable. a randomized model.. The default is -fguess-branch-probability at levels -O, -O2, -O3, -Os. gcc-3.4.6 2012-02-20 73 GCC(1) GNU GCC(1) 2012-02-20 74 GCC(1) GNU GCC(1) not:". Enabled at level -O3. gcc-3.4.6 2012-02-20 75 GCC(1) GNU GCC(1). pseudos directly, but also strengthens several other optimization passes, such as CSE, loop optimizer and gcc-3.4.6 2012-02-20 76 GCC(1) GNU GCC float- ing pro- grams, after modifying them to store all pertinent intermediate computations into variables. gcc-3.4.6 2012-02-20 77 GCC(1) GNU GCC(1) -ffast-math Sets -fno-math-errno, -funsafe-math-optimizations, -fno-trapping-math, -ffinite-math-only, -fno-rounding-math and -fno-signaling-n. gcc-3.4.6 2012-02-20 78 GCC(1) GNU GCC(1) fas- ter. The default is -fno-signaling-nans. gcc-3.4.6 2012-02-20 79 GCC(1) GNU GCC optim- ization. -fvpt If combined with -fprofile-arcs, it instructs the com- piler to add a code to gather information about values of expressions. With -fbranch-probabilities, it reads back the data gathered and actually performs the optimizations based on them. Currently the optimizations include gcc-3.4.6 2012-02-20 80 GCC(1) GNU GCC(1) specialization. gcc-3.4.6 2012-02-20 81 GCC(1) GNU GCC(1) regis- ters can typically be exposed only during reload, thus hoisting loads out of loops and doing inter-block gcc-3.4.6 2012-02-20 82 GCC(1) GNU GCC(1) scheduling needs a separate optimization pass. -fbranch-target-load-optimize2 Perform branch target register load optimization after prologue / epilogue threading. - improve- ment gcc-3.4.6 2012-02-20 83 GCC(1) GNU GCC(1) subexpression elimination optimization. If more memory than specified is required, the optimization will not be done. max-gcse-passes The maximum number of passes of GCSE to run.. gcc-3.4.6 2012-02-20 84 GCC(1) GNU GCC(1). proba- bilities sin- gle loop. hot-bb-count-fraction Select fraction of the maximal count of repetitions of basic block in program given basic block needs to have to be considered hot. gcc-3.4.6 2012-02-20 85 GCC(1) GNU GCC(1) pseudo register as last known value of that regis- ter. The default is 10000. ggc-min-expand gcc-3.4.6 2012-02-20 86 GCC(1) GNU GCC(1) GCC uses a garbage collector to manage its own memory allocation. This parameter specifies the minimum percentage by which the garbage collector's heap should be allowed to expand between collec- tions. Tuning this may improve compilation speed; it has no effect on code generation. The default is 30% + 70% * (RAM/1GB) with an upper bound of 100% when RAM >= 1GB. If "getrlimit" is available, the notion of "RAM" is the smallest of actual RAM, RLIMIT_RSS, RLIMIT_DATA RAM/8, with a lower bound of 4096 (four megabytes) and an upper bound of 131072 (128 megabytes). If "getrlimit" is available, the notion of "RAM" is the smallest of actual RAM, RLIMIT_RSS, RLIMIT_DATA and RLIMIT. reorder-blocks-duplicate reorder-blocks-duplicate-feedback gcc-3.4.6 2012-02-20 87 GCC(1) GNU GCC(1). modi- fied, translated or interpreted by the compiler driver before being passed to the preprocessor, and -Wp forci- bly Predefine name as a macro, with definition definition. The contents of definition are tokenized and processed as if they appeared during translation phase three in a #define directive. In particular, the definition will be truncated by embedded newline characters. gcc-3.4.6 2012-02-20 88 GCC(1) GNU GCC(1) direc- tory, gcc-3.4.6 2012-02-20 89 GCC(1) GNU GCC: #if defined the_macro_causing_the_warning #endif -Wendif-labels gcc-3.4.6 2012-02-20 90 GCC(1) GNU GCC(1). gcc-3.4.6 2012-02-20 91 GCC(1) GNU GCC(1). gcc-3.4.6 2012-02-20 92 GCC(1) GNU GCC(1). -fpch-deps When using precompiled headers, this flag will cause the gcc-3.4.6 2012-02-20 93 GCC(1) GNU GCC gcc-3.4.6 2012-02-20 94 GCC(1) GNU GCC gcc-3.4.6 2012-02-20 95 GCC(1) GNU GCC(1) macros from a header without also processing its declarations. All files specified by -imacros are processed before all files specified by -include. . gcc-3.4.6 2012-02-20 96 GCC(1) GNU GCC(1) gcc-3.4.6 2012-02-20 97 GCC(1) GNU GCC(1) gcc-3.4.6 2012-02-20 98 GCC(1) GNU GCC gcc-3.4.6 2012-02-20 99 GCC(1) GNU GCC gcc-3.4.6 2012-02-20 100 GCC(1) GNU GCC(1) arguments. . gcc-3.4.6 2012-02-20 101 GCC(1) GNU GCC(1) entries in libc. These entry points should be supplied through some other mechanism when this option is speci- fied. -nostdlib 2012-02-20 102 GCC(1) GNU GCC(1) ver- sion. con- figuration ver- sion of libgcc by default. This allows exceptions to propagate through such shared libraries, without incur- ring gcc-3.4.6 2012-02-20 103 GCC(1) GNU GCC after the -I-, these directories are searched for all gcc-3.4.6 2012-02-20 104 GCC(1) GNU GCC, 2012-02-20 2012-02-20 106 GCC(1) GNU GCC(1) Hardware Models and Configurations Earlier we discussed the standard option -b which chooses among different installed compilers for completely different target machines, such as VAX vs. 68000 vs. 80386. In addition, each of these target machine types can have its own special options, starting with -m, to choose among 2012-02-20 107 GCC(1) GNU GCC(1) . Warning: the requisite libraries are not avail- able for all m68k targets. Normally the facilities of gcc-3.4.6 2012-02-20 2012-02-20 109 GCC(1) GNU GCC(1) processors with 32-bit busses at the expense of more memory. Warning: if you use the -malign-int switch, GCC will align structures containing the above types differently than most published application binary interface specif- ications68hc1x Options gcc-3.4.6 2012-02-20. VAX Options These -m options are defined for the VAX: -munix Do not output certain jump instructions ("aobleq" and so gcc-3.4.6 2012-02-20 2012-02-20 2012-02-20 2012-02-20 114 GCC(1) GNU GCC(1)- SPAR- Clet but not in SPARC-V7. With -mcpu=tsc701, the com- piler addition- ally optimizes it for the Sun UltraSPARC III chip. -mtune=cpu_type Set the instruction scheduling parameters for machine type cpu_type, but do not set the instruction set or gcc-3.4.6 2012-02-20 115 GCC(1) GNU GCC gcc-3.4.6 2012-02-20 116 GCC(1) GNU GCC: gcc-3.4.6 2012-02-20 2012-02-20 2012-02-20 2012-02-20 func- tion. The run-time system is responsible for initializ- ing this register with an appropriate value before exe- cution begins. gcc-3.4.6 2012-02-20 121 GCC(1) GNU GCC 2012-02-20 122 GCC(1) GNU GCC(1)N10300 Options These -m options are defined for Matsushita MN10300 archi- tectures: no 2012-02-20 2012-02-20 124 GCC(1) GNU GCC(1). gcc-3.4.6 2012-02-20 2012-02-20 archi- tecture- . gcc-3.4.6 2012-02-20 127 GCC(1) GNU GCC pro- cessor, and may not run at all on others., since parame- ters set by -mtune. . gcc-3.4.6 2012-02-20, gcc-3.4.6 2012-02-20,. -msoft-float -mhard-float gcc-3.4.6 2012-02-20 130 GCC(1) GNU GCC(1) instruc- tions. float- ing is used. -mno-bit-align -mbit-align On System V.4 and embedded PowerPC systems do not (do) force structures and unions that contain bit-fields to gcc-3.4.6 2012-02-20 131 GCC(1) GNU GCC dif- ferent address at runtime. dif- ferent address at runtime. Modules compiled with -mrelocatable-lib can be linked with either modules com- piled gcc-3.4.6 2012-02-20 132 GCC(1) GNU GCC(1) relocatable. The resulting code is suitable for appli- cations, but not shared libraries. depen- dences are costly, true_store_to_load: a true dependence from store to load is costly, store_to_load: any depen- dence from store to load is costly, number: any depen- dence gcc-3.4.6 2012-02-20 gcc-3.4.6 2012-02-20 134 GCC(1) GNU GCC(1) simula- tion environment. -memb On embedded PowerPC systems, set the PPC_EMB bit in the ELF flags header to indicate that eabi extended reloca- tions gcc-3.4.6 2012-02-20 135 GCC(1) GNU GCC(1) 2012-02-20. specifies the executable that will be loading the build output file being linked. See man ld(1) for more information. -allowable_client client_name -arch_only -client_name gcc-3.4.6 2012-02-20 137 GCC(1) GNU GCC(1) gcc-3.4.6 2012-02-20 2012-02-20 2012-02-20 2012-02-20. regis- ters, whichever is smaller. pos- sible, gcc-3.4.6 2012-02-20 2012-02-20 143 GCC(1) GNU GCC(1) con- figured for, but commonly is either _flush_func or __cpu_flush. -mbranch-likely -mno-branch-likely Enable or disable use of Branch Likely instructions, gcc-3.4.6 2012-02-20 2012-02-20 145 GCC(1) GNU GCC(1) exten- sions.) sup- port. (No scheduling is implemented for this chip.) c3-2 Via C3-2 CPU with MMX and SSE instruction set sup- port. . gcc-3.4.6 2012-02-20 146 GCC(1) GNU GCC(1) pre- cision arithmetics too. For i387 you need to use -march=cpu-type, -msse or -msse2 switches to enable SSE extensions and make this option effective. For x86-64 compiler, these extensions are enabled by default. The resulting code should be considerably faster in the majority of cases and avoid the numerical insta- bility problems of 387 code, but may break some existing code that expects temporaries to be 80bit. This is the default choice for the x86-64 compiler. gcc-3.4.6 2012-02-20 147 GCC(1) GNU GCC(1). 2012-02-20 2012-02-20 149 GCC(1) GNU GCC(1) argu- ments com- piler.). gcc-3.4.6 2012-02-20. pro- logue. This is faster on most modern CPUs because of gcc-3.4.6 2012-02-20 151 GCC(1) GNU GCC com- pile des- tination is known to be aligned at least to 4 byte boun- dary. gcc-3.4.6 2012-02-20 152 GCC(1) GNU GCC(1) adjust- ing: gcc-3.4.6 2012-02-20 153 GCC(1) GNU GCC(1) around. PA 2.0 support currently requires gas snapshot 19990413 or later. The next release of binutils (current is 2.9.1) will probably contain PA 2.0 2012-02-20 2012-02-20 155 GCC(1) GNU GCC(1)- ismatch gen- erated stubs. The default is to generate long calls only when the distance from the call site to the begin- ning gcc-3.4.6 2012-02-20 156 GCC(1) GNU GCC(1) absolute calls, and long pic symbol-difference or pc- relative calls should be relatively small. However, an indirect call is used on 32-bit ELF systems in pic code and it is quite 2012-02-20 157 GCC(1) GNU GCC proces- sors). This option implies -mstrict-align. -mlong-double-64 Implement type long double as 64-bit floating point numbers. Without the option long double is implemented gcc-3.4.6 2012-02-20 2012-02-20. gcc-3.4.6 2012-02-20 160 GCC(1) GNU GCC(1) gen- erate deter- mine). gcc-3.4.6 2012-02-20 161 GCC(1) GNU GCC(1) sym- bol relocations except via assembler macros. Use of these macros does not allow optimal instruction schedul- ing. gcc-3.4.6 2012-02-20 162 GCC(1) GNU GCC(1) func- tion proces- sor gcc-3.4.6 2012-02-20 163 GCC(1) GNU GCC(1) implemen- tations: -mvms-return-codes Return VMS condition codes from main. The default is to return POSIX style condition (e.g. error) codes. gcc-3.4.6 2012-02-20 164 GCC(1) GNU GCC(1) H8/300 Options These -m options are defined for the H8/300 implementations: -mrelax Shorten some address references at link time, when pos- sible; boun- daries. This option has no effect on the H8/300.. gcc-3.4.6 2012-02-20 2012-02-20 2012-02-20 167 GCC(1) GNU GCC(1) . 2012-02-20 2012-02-20. gcc-3.4.6 2012-02-20 170 GCC(1) GNU GCC(1) . ARC Options These options are defined for ARC implementations: -EL Compile code for little endian mode. This is the default. gcc-3.4.6 2012-02-20 2012-02-20 2012-02-20 2012-02-20 2012-02-20 175 GCC(1) GNU GCC(1) boun- dary. . IA-64 Options These are the -m options defined for the Intel IA-64 archi- tecture. gcc-3.4.6 2012-02-20. gcc-3.4.6 2012-02-20 177 GCC(1) GNU GCC(1) the minimum latency algorithm. -minline-int-divide-max-throughput Generate code for inline divides of integer values using the maximumproces- sor and linker. It does not affect the thread safety of object code produced by the compiler or that of libraries supplied with it. These are HP-UX specific flags. gcc-3.4.6 2012-02-20 178 GCC(1) GNU GCC(1) . 2012-02-20 2012-02-20 2012-02-20 181 GCC(1) GNU GCC(1) instruc- tion; arrange- ments) for the stack-frame, individual data and con- stants 2012-02-20 2012-02-20 183 GCC(1) GNU GCC(1) -mabi=gnu Generate code that passes function parameters and return values that (in the called function) are seen as regis- ters gcc-3.4.6 2012-02-20 184 GCC(1) GNU GCC(1) exit point in each function. assem- blerstrhi" patterns for copying memory. This is the default. -mbcopy Do not use inline "movstr gcc-3.4.6 2012-02-20 2012-02-20 2012-02-20 187 GCC(1) GNU GCC(1) instruc- tions. gcc-3.4.6 2012-02-20.. gcc-3.4.6 2012-02-20 189 GCC(1) GNU GCC(1) gcc-3.4.6 2012-02-20 190 GCC(1) GNU GCC(1). Options for Code Generation Conventions These machine-independent options control the interface con- ventions. 2012-02-20 2012-02-20 2012-02-20 2012-02-20 allo- cated execu- tion model will produce disastrous results. This flag does not have a negative form, because it specifies a three-way choice. gcc-3.4.6 2012-02-20 currently disables function inlining. This res- triction is expected to be removed in future releases. A function may be given the attribute "no_instrument_function", in which case this gcc-3.4.6 2012-02-20 boun- dary, so it is possible to catch the signal without tak- ingstack 2012-02-20 argu- ments do not alias each other and do not alias global storage. Each language will automatically use whatever option is required by the language standard. You should not need to use these options yourself. gcc-3.4.6 2012-02-20 198 GCC(1) GNU GCC(1) -fleading-underscore This option and its counterpart, -fno-leading-underscore, forcibly change the way C sym- bols tar- gets multi- byte encodings that contain quote and escape characters that would otherwise be interpreted as a string end or escape. gcc-3.4.6 2012-02-20 2012-02-20 200 GCC(1) GNU GCC(1) speci- fied when searching for special linker files, if it can't find them using GCC_EXEC_PREFIX. Linking using GCC also uses these directories when searching for ordi- nary libraries for the -l option (but directories speci- fied multi- byte multi- byte gcc-3.4.6 2012-02-20 2012-02-20. 2012-02-20 203. | http://mirbsd.mirsolutions.de/htman/sparc/man1/mgcc.htm | crawl-003 | refinedweb | 4,950 | 58.69 |
dirfile-format man page it must be 7-bit ASCII compatible. Examples of acceptable character encodings include all the ISO 8859 character sets (i.e. Latin-1 through Latin-10, among others), as well as the UTF-8 encoding of Unicode and UCS.
This document primarily describes the latest version of the Standards (Version 10); differences with previous versions are noted where relevant. A complete list of changes between versions is given in the History section below. (1 to 3 octal digits).
- \xhh
the single byte given by the hexadecimal number hh (1 or 2 hexadecimal digits).
- \uhhhhhhh
the UTF-8 byte sequence encoding the Unicode code point given by the hexadecimal number hhhhhhh (1 to 7 hexadecimal digits).).
Standards Version 5 and earlier do not recognise the character escape sequences, nor allow quoting of tokens. As a result, they prohibit both whitespace and the comment delimiter from being used in tokens.
Directives
There are ten directives, each specified by a different reserved word, which cannot be used as field names in the dirfile. As of Standards Version 8, all reserved words start with an initial forward slash (/), to distinguish them from field names. Standards Versions 5, 6, and 7 permitted the omission of the initial forward slash, while in Standards Version 4 and earlier, reserved words may not have an initial forward is honoured, with the exception that the effect of a directive is not propagated to sub-fragments if the directive line appears after the sub-fragment is included. The scoping rules of the remaining directives are discussed below.
- /ALIAS
The /ALIAS directive defines an alternate name for a field defined elsewhere in the format specification (called the "target"). Aliases may not be used as the parent field in a /META directive, but are in most other ways indistinguishable from the target's original, canonical name. Aliases may be chained (that is, the target name appearing in an /ALIAS directive may itself be an alias). In this case, the new alias is another name for the target's own target. Just as there is no requirement that the input fields of a derived field exist, it is not an error for the target of an alias to not exist. Syntax is:
/ALIAS <name> <target>
A metafield alias may defined using the <parent-field>/<alias-name> syntax for name in the /ALIAS directive. No restriction is placed on target; specifically, a metafield alias may target a top-level field, or a metafield of with a different parent; conversely, a top-level alias may target a metafield.
A metafield alias may never appear as the parent part of a metafield field code, even if it refers to a top-level field. That is, given the valid format:
aaaa RAW UINT8 1
aaaa/bbbb CONST FLOAT64 0.0
cccc RAW UINT8 1
/ALIAS cccc/dddd aaaa
the metafield aaaa/bbbb may not be referred to as cccc/dddd/bbbb, even though cccc/dddd is a valid field code referring to aaaa.
This is not true of top-level aliases: if eeee is an alias of ffff, then ffff/gggg, a metafield of ffff, may be referred to as eeee/gggg as well.
The /ALIAS directive has no scope: it is processed immediately. It appeared in Standards Version 9.
- .
- flac
The dirfile is compressed using the flac compression scheme.
- gzip
The dirfile is compressed using the gzip compression scheme.
- lzma
The dirfile is compressed using the LZMA compression scheme.
- slim
The dirfile is compressed using the slim compression scheme.
- sie
The dirfile is sample-index encoded (a variant of run-length encoding).
- text
The dirfile is text encoded.
- zzip
The dirfile is compressed and encapsulated using the zzip compression scheme.
- zzslim
The dirfile is compressed and encapsulated using a combination of the zzip and slim compression schemes.
Implementations should fail gracefully when encountering an unknown encoding scheme. If no encoding scheme is specified, behaviour is implementation dependent. Syntax is:
/ENCODING <scheme> [<enc-datum>]
The enc-datum token provides additional data for certain encoding schemes; see dirfile-encoding(5) for details. The form of enc-datum is not specified.
The /ENCODING directive has fragment scope. It appeared in Standards Version 6. The predefined schemes sie, zzip, and zzslim, and the optional enc-datum token, appeared in Standards Version 9; the predefined scheme lzma appeared in Standards Version 7; all other predefined schemes appeared in Standards Version 6.
- . It appeared in Standards Version 5. The optional arm token appeared in Standards Version 8.
- /FRAMEOFFSET
The /FRAMEOFFSET directive specifies the frame number of the first frame for which data exists in binary files associated with RAW fields. Syntax is:
/FRAMEOFFSET <integer>
The /FRAMEOFFSET directive has fragment scope. It appeared in Standards Version 1.
- /HIDDEN
The /HIDDEN directive indicates that the specified field name is hidden. The difference (if any) between a field name which is hidden and one that is not is implementation dependent. Hiddenness is not inherited by metafields of the specified field. Hiddenness applies to the name, not the field itself; it does not hide all aliases of the field-name, and if field-name an alias, the alias is hidden, not its target. Syntax is:
/HIDDEN <field-name>
A /HIDDEN directive must appear after the specification of field-name, (which occurs either in a field specification line, or an /ALIAS directive, or a /META directive) in the same fragment.
The /HIDDEN directive has no scope: it is processed immediately. It appeared in Standards Version 9.
- /INCLUDE
The /INCLUDE directive specifies another file (called a fragment) to parse for additional format specification for the dirfile. The inclusion is processed immediately, before the fragment containing the /INCLUDE directive (the parent fragment) is parsed further. RAW fields specified in the included fragment are located in the directory containing the fragment file, and not in the directory containing the parent fragment, and the binary file encoding may be different for each fragment. The fragment may be specified either with an absolute path, or else a path relative to the directory containing the parent fragment.
The /INCLUDE directive may optionally specify a prefix and/or suffix to apply to field names defined in the included fragment. If present, affixes are applied to all field-names (including aliases) defined in the included fragment and any fragments it further includes. Affixes nest, with the affixes of the deepest inclusion innermost. Affixes are not applied to the names of binary files associated with RAW fields. Syntax is:
/INCLUDE <file> [<namespace>.][<prefix>] [<suffix>]
To specify only suffix, the null-token ("") may be used as prefix.
A namespace may also be specified in an /INCLUDE directive by prepending it to prefix. The namespace and prefix are separated by a dot (.). The dot is required whenever a namespace is specified: if the prefix is empty, the third token should be just the namespace followed by a trailing dot. If a namespace is specified, that namespace, relative to the including fragment's root namespace, becomes the root namespace of the included fragment. If no namespace is specified in the /INCLUDE directive, then the current namespace (specified by a previous /NAMESPACE directive) is used as the root namespace of the included fragment. That is, if the current namespace is current_space, then the statement:
/INCLUDE file newspace.
is equivalent to
/NAMESPACE newspace
/INCLUDE file
/NAMESPACE current_space
As a result, if no namespace is provided, and there has been no previous /NAMESPACE directive, the included fragment will have the same root namespace as the including fragment.
The /INCLUDE directive has no scope: it is processed immediately. It appeared in Standards Version 3. The optional prefix and suffix appeared in Standards Version 9. The optional namespace appeared in Standards Version 10.
- }
The <parent-field> code may not be an alias., starting with Standards Version 7,. It appeared in Standards Version 6.
- /NAMESPACE
The /NAMESPACE directive changes the current namespaceforsubsequentfieldspecificationlines. Syntax is:
/NAMESPACE <subspace>
The subspace specified is relative to the current fragment's root namespace. If subspace is the null-token ("") the current namespace will be set back to the root namespace. Otherwise, the current namespace will be changed to the concatenation of the root namespace with subspace, with the two parts separated by a dot:
rootspace.subspace
If rootspace is empty, the intervening dot is omitted, and the current namespace is simply subspace.
By default, all field codes, both field names for newly specified fields, and field codes used as inputs to fields or targets for aliases, are placed in the current namespace, unless they start with an initial dot, in which case the current namespace is ignored, and they're placed instead in the fragment's root namespace. See the Namespaces section for further details.
The /NAMESPACE directive has no scope: it is processed immediately. For the effects of changing the current namespace on included fragments, see the /INCLUDE directive above. The effects of a /NAMESPACE directive never propagate upwards to parent fragments. It appeared in Standards Version 10.
- . It appeared in Standards Version 6.
- is honoured. It appeared in Standards Version 6.
- .
In Standards Version 8 and earlier, its effect also propagates upwards back to the parent fragment, and affects subsequent metadata. Starting with Standards Version 9, this no longer happens. As a result, a /VERSION directive which indicates a version of 9 or later never propagates upwards; additionally, /VERSION directives found in subfragments included in a Version 9 or later fragment aren't propagated upwards into that fragment, regardless of the Version of the subfragments. The /VERSION directive appeared in Standards Version 5. dot (.) is allowed in Standards Version 5 and earlier. The ampersand, semicolon, less-than sign, greater-than sign, and vertical line (& ; < > |) are allowed in Standards Version 4 and earlier. Furthermore, due to the lack of an escape or quoting mechanism (see Tokens above), Standards Version 5 and earlier also prohibit whitespace and the comment delimiter (#) in field names.
The field name may not be INDEX, which is a special, implicit field which contains the integer frame index. Standards Version 5 and earlier also prohibit FILEFRAM, which was an alias for INDEX. Field names are case sensitive. Standards Version 3 and 4 restrict field names to 50 characters. Standards Version 2 and earlier restrict field names to 16 characters. Additionally, the filesystem may put restrictions on the length and acceptable characters of a RAW field name, regardless of Standards Version.
Starting in Standards Version 7, if the field name beginning a field specification line contains exactly one forward slash character (/), the line is assumed to specify a metafield. See the /META directive above for further details. A field name may not contain more than one forward slash.
Starting in Standards Version 10, any field name may be preceded by a namespace tag. The namespace tag and the field name are separated by a dot (.). See the Namespaces section, following, for details.
Namespaces
Beginning with Standards Version 10, every field in a Dirfile is contained in a namespace. Every namespace is identified by a namespace tag which consist of the same restricted set of characters used for field names. Namespaces nest arbitrarily deep. Subnamespaces are identified by concatenating all namespace tags, separating tags by dots (.), with the outermost namespace leftmost:
topspace.subspace.subsubspace
Each fragment has an immutable root namespace. The root namespace of the primary format file is the null namespace, identified by the null-token (""). The root namespace of other fragments is specified when they are introduced (see the /INCLUDE directive). Each fragment also has a current namespace which may be changed as often as needed using the /NAMESPACE directive, and defaults to the root namespace. The current namespace is always either the root namespace or else a subspace under the root namespace.
If a field name or field code starts with a leading dot, then that name or code is taken to be relative to the fragment's root space. If it does not start with a dot, it is taken to be relative to the current namespace.
For example, if the both the root namespace and current namespace of a fragment start off as rootspace, then:
aaaa RAW UINT8 1
.bbbb RAW UINT8 1
cccc.dddd RAW UINT8 1
.eeee.ffff RAW UINT8 1
/NAMESPACE newspace
gggg RAW UINT8 1
.hhhh RAW UINT8 1
iiii.jjjj RAW UINT8 1
.kkkk.llll RAW UINT8 1
specifies, respectively, the fields:
rootspace.aaaa,
rootspace.bbbb,
rootspace.cccc.dddd,
rootspace.eeee.ffff,
rootspace.newspace.gggg,
rootspace.hhhh,
rootspace.newspace.iiii.jjjj, and
rootspace.kkkk.llll.
Note that a field may specify deeper subspaces under either the root namespace or the current namespace (meaning it is never necessary to use the /NAMESPACE directive). Note also that there is no way for metadata in a given fragment to refer to fields outside the fragment's root space.
There is one exception to this namespace scoping rule: the implicit INDEX vector is always in the null (top-level) namespace, and namespace tags specified with it, either explicitly or implicitly, even a fragment root namespace, are ignored. So, in a fragment with root namespace rootspace, and current namespace rootspace.subspace,
INDEX,
.INDEX,
namespace.INDEX, and
.namespace.INDEX,
all refer to the same INDEX field.
Field Types
There are eighteen field types. Of these, fourteen are of vector type (BIT, DIVIDE, INDIR, LINCOM, LINTERP, MPLEX, MULTIPLY, PHASE, POLYNOM, RAW, RECIP, SBIT, SINDIR, and WINDOW) and four are of scalar type (CARRAY, CONST, SARRAY, and STRING). The thirteen vector field types other than RAW fields are also called derived fields, since they derive their value from one or more input vector fields. Any other vector field may be used as an input vector, including the implicit INDEX field, but excluding SINDIR string vectors.
Five of these derived fields (DIVIDE, LINCOM, MPLEX, MULTIPLY, and WINDOW) have more than one vector input field. In situations where these input fields have differing sample rates, the sample rate of the derived field is the same as the sample rate of the first (left-most) input field specified. Furthermore, the input fields are synchronised by aligning them on frame boundaries, assuming equally-spaced sampling throughout a frame, and using the last sample of each input field which did not occur after the sample of the derived field being computed. That is, if the first and second input fields have sample rates s1 and s2, the derived field also has sample rate s1 and, for every sample of the derived field, n, the n'th sample of the first field is used (since they have the same sample rate by definition), and the sample number used of the second field, m, is computed as:
m = floor((n * s2) / s1).
Starting in Standards Version 6, certain scalar field parameters in the field specifications may be specified using CONST or CARRAY fields, instead of literal values. A list of parameters for which this is allowed is given below in the Field Parameters section.
The possible fields types are:
- BIT
The BIT vector field type extracts one or more bits out of an input vector field as an unsigned number. Syntax is:
<fieldname> BIT ) unsigned 64-bit integer. If num-bits is omitted, it is assumed to be one.
The extracted bits are interpreted as an unsigned integer; the SBIT field type is a signed version of this field type. The optional num-bits parameter appeared in Standards Version 1.
- CARRAY
The CARRAY scalar field type is a list of constants fully specified in the format specification metadata. Syntax is:
<fieldname>.) CARRAY appeared in Standards Version 8.
- CONST
The CONST scalar field type is a constant fully specified in the format specification metadata. Syntax is:
<fieldname> CONST <type> <value>
where type may be any supported native data type (see the description of the RAW field type below), and value is the numerical value of the constant interpreted as indicated by type. CONST appeared in Standards Version 6.
- DIVIDE
The DIVIDE vector field type is the quotient of two vector fields. Syntax is:
<fieldname> DIVIDE <field1> <field1>
The derived field is computed as:
fieldname = field1 / field2.
It was introduced in Standards Version 8.
- INDIR
The INDIR vector field type performs an indirect translation of a CARRAY scalar field to a derived vector field based on a vector index field. Syntax is:
<fieldname> INDIR <index> <array>
where index is the vector field, which is converted to an integer type, if necessary, and array is the CARRAY field. The nth sample of the INDIR field is the value of the mth element of array (counting from zero), where m is the value of the nth sample of index. When index is not a valid element number of array, the corresponding value of the INDIR is implementation dependent. INDIR appeared in Standards Version 10.
- LINCOM
The LINCOM vector field type is the linear combination of one, two or three input vector fields. Syntax is:
<fieldname> LINCOM [<n>] <field1> <a1> <b1> [<field2> <a2> <b2> [<field3> <a3> <b3>]]
where n, if present, indicates the number of input vector fields (1, 2, or 3). The derived field is computed as:
fieldname = (a1 * field1 + b1) + (a2 * field2 + b2) + (a3 * field3 + b3)
with the field2 and field3 terms included only if specified.. In standards Version 6 and earlier, n is mandatory.
- LINTERP
The LINTERP vector field type specifies a table look up based on another vector field. Syntax is:
<fieldname>.
- MPLEX
The MPLEX vector field type permits the multiplexing of several low sample rate fields into a single data field of higher sample rate. Syntax is:
<fieldname> MPLEX <input> <index> <count> [<period>]
where input is the input vector containing the multiplexed fields, index is the vector containing the mutliplex index, count is the value of the multiplex index when the computed field is stored in input, and period, if present and non-zero, is the number of samples between successive occurrances of the value count in the index vector. A period of zero (or, equivalently, it's omission) indicates that either the value count is not equally spaced in the index vector, or else that the spacing is unknown. Both count and period are integers, and period may not be negative.
At every sample n, the derived field is computed as:
fieldname[n] = (index == count) ? input[n] : fieldname[n - 1]
The index vector is converted to an integer type for comparison. The value of the derived field before the first sample where index equals count is implementation dependent.
The values of count and period place no restrictions on values contained in index. Specifically, particular values of index (including count) need not be equally spaced (neither by period nor any other spacing); index need not ever take on the value count (in which case the value of the entirety of the derived field is implementation dependent). Different MPLEX field definitions which use the same index vector may specify different periods. MPLEX appeared in Standards Version 9.
- MULTIPLY
The MULTIPLY vector field type is the product of two vector fields. Syntax is:
<fieldname> MULTIPLY <field1> <field2>
The derived field is computed as:
fieldname = field1 * field2.
MULTIPLY appeared in Standards Version 2.
- PHASE
The PHASE vector field type shifts an input vector field by the specified number of samples. Syntax is:
<fieldname> PHASE <input> <shift>
which specifies fieldname to be the input vector field, input, shifted by shift samples. A positive shift indicates a forward shift, towards the end-of-field. Results of shifting past the beginning- or end-of-field is implementation dependent. PHASE appeared in Standards Version 4.
-name = a0 + a1 * input + a2 * input**2 + a3 * input**3 + a4 * input**4 + a5 * input**5
where ** is the element-wise exponentiation operator, and the higher order terms are computed only if the corresponding co-efficients ai are specified. POLYNOM appeared in Standards Version 7.
- RAW
The RAW vector field type specifies raw time streams on disk. In this case, the field name should correspond to the name of the file containing the time stream. Syntax is:
<fieldname> RAW <type> <sample-rate>
where sample-rate is the number of samples per dirfile frame for the time stream and type is a token specifying the native data type:
- UINT8
unsigned 8-bit integer
- INT8
two's complement signed 8-bit integer
- UINT16
unsigned 16-bit integer
- INT16
two's complement signed 16-bit integer
- UINT32
unsigned 32-bit integer
- INT32
two's complement signed 32-bit integer
- UINT64
unsigned 64-bit integer
- INT64
two's complement signed 64-bit integer
- FLOAT32
IEEE-754 standard 32-bit single precision floating point number
- FLOAT64
IEEE-754 standard 64-bit double precision floating point number
- COMPLEX64
a 64-bit complex number consisting of two IEEE-754 standard 32-bit single precision floating point numbers representing the real and imaginary parts of the complex number (Standards Version 7 and later)
- COMPLEX128
a 128-bit complex number consisting of two IEEE-754 standard 64-bit double precision floating point numbers representing the real and imaginary parts of the complex number (Standards Version 7 and later).
For more information on the storage of complex valued data, see dirfile(5). Two additional type names exist: FLOAT is equivalent to FLOAT32, and DOUBLE is equivalent to FLOAT64. Standards Version 9 deprecates these two aliases, but still allows them.
All these type names (except those for complex data, which came later) were introduced in Standards Version 5. Earlier Standards Versions specified data types with single-character type aliases:
-. These single-character type aliases were deprecated in Standards Version 5 and removed in Standards Version 8.
- RECIP
The RECIP vector field type computes the reciprocal of a single input vector field. Syntax is:
<field_name> RECIP <input> <dividend>
where <input> is the input field code and <dividend> is a scalar quantity. The derived field is computed as:
fieldname = dividend / input.
RECIP appeared in Standards Version 8.
- SARRAY
The SARRAY scalar field type is a list of strings fully specified in the format file metadata. Syntax is:
<fieldname> SARRAY <string0> <string1> <string2> ...
Each string is a single token. To include whitespace in a string, enclose it in quotation marks ("), or else escape the whitespace with the backslash character (\). No limit is placed on the number of elements in a SARRAY. SARRAY appeared in Standards Version 10.
- SBIT
The SBIT vector field type extracts one or more bits out of an input vector field as a (two's-complement) signed number. Syntax is:
<fieldname> SBIT ) two's complement signed 64-bit integer. If num-bits is omitted, it is assumed to be one.
The extracted bits are interpreted as a two's complement signed integer of the specified width. (So, if num-bits is, for example, one, then the field can take on the value zero or negative one.) The BIT field type is an unsigned version of this field type. SBIT appeared in Standards Version 7.
- SINDIR
The SINDIR vector field type performs an indirect translation of a SARRAY scalar field to a derived vector field of strings based on a vector index field. Syntax is:
<fieldname> SINDIR <index> <array>
where index is the vector field, which is converted to an integer type, if necessary, and array is the SARRAY field. The nth sample of the SINDIR field is the string value of the mth element of array (counting from zero), where m is the value of the nth sample of index. When index is not a valid element number of array, the corresponding value of the SINDIR is implementation dependent. SINDIR appeared in Standards Version 10.
- STRING
The STRING scalar field type is a character string fully specified in the format file metadata. Syntax is:
<fieldname> STRING <string>
where string is the string value of the field. Note that string is a single token. To include whitespace in the string, enclose string in quotation marks ("), or else escape the whitespace with the backslash character (\). STRING appeared in Standards Version 6.
- WINDOW
The WINDOW vector field type isolates a portion of an input vector based on a comparison. Syntax is:
<fieldname> WINDOW <input> <check> <op> <threshold>
where input is the vector containing the data to extract, check is the vector on which to test the comparison, threshold is the value against which check is compared, and op is one of the following tokens indicating the particular comparison performed:
- EQ
data are extracted where check, converted to a 64-bit signed integer, equals threshold,
- GE
data are extracted where check, converted to a 64-bit floating-point number, is greater than or equal to threshold,
- GT
data are extracted where check, converted to a 64-bit floating-point number, is strictly greater than threshold,
- LE
data are extracted where check, converted to a 64-bit floating-point number, is less than or equal to threshold,
- LT
data are extracted where check, converted to a 64-bit floating-point number, is strictly less than threshold,
- NE
data are extracted where check, converted to a 64-bit signed integer, is not equal to threshold,
- SET
data are extracted where at least one bit set in threshold is also set in check, when converted to a 64-bit unsigned integer,
- CLR
data are extracted where at least one bit set in threshold is not set in check, when converted to a 64-bit unsigned integer,
The storage type of threshold depends on the operator, and follows the interpretation of check. It may never be complex valued.
Outside the region extracted, the value of the derived field is implementation dependent.
Note: with the EQ operator, this derived field type is very similar to the MPLEX field type above. The primary difference is that MPLEX mandates the value of the derived field outside the extracted region, while WINDOW does not. WINDOW appeared in Standards Version 9.
Field Parameters
All input vector field parameters should be field codes (see below). Additionally, the scalar field parameters listedcode<n>
If the angle brackets and element index are omitted from a CARRAY field code used as a parameter, the first element in the field (index zero) is assumed.
Field parameters which may be specified using a scalar field code are:
- BIT, SBIT
bitnum, numbits
- LINCOM
any of the mi, or bi
- MPLEX
count, max
- PHASE
shift
- POLYNOM
any of the ai
- RAW
spf
- RECIP
dividend
- WINDOW
threshold.
Starting in Standards Version 7,
Starting in Standards Version 9, in additional to decimal notation, literal integer parameters may be specified as hexadecimal numbers, by prefixing the number (after an optional '+' or '-' sign) with 0x or 0X, or as octal numbers, by prefixing the number with 0, as described in strtol(3). Similarly, floating point literal numbers (both purely real ones and components of complex literals) may be specified in hexadecimal by prefixing them with 0x or 0X, and using p or P as the binary exponent prefix, as described in the C99 standard. Both uppercase and lowercase hexadecimal digits may be used. In cases where a literal floating point number may apear, the tokens INF or INFINITY, optionally preceded by a '+' or '-' sign, and NAN, optionally immediately followed by '(', then a sequence of characters, then ')', and all disregarding case, will be interpreted as the special floating point values explained in strtod(3).
Field Codes
When specifying the input to a field, either as a scalar parameter, or as an input vector field to a non-RAW vector field, field codes are used. A field code consists of, in order:
- (since Standards Version 10:) optonally, a leading dot (.), indicating this field code is relative to the fragment's root namespace. Without the leading dot, the field code is taken to be relative to the current namespace. (See the discussion in the Namespaces section above for details.)
- (since Standards Version 10:) optionally, a non-null subnamespace followed by a dot (.) indicating a subspace under the current or root namespace. The subnamespace may be made up of any number of namespace tags separated by dots, to nest deeper in the namespace tree.
- (since Standards Version 6:) if the field in question is a metafield (see the /META directive above), the field name of the metafield's parent (which may be an alias) followed by a forward slash (/).
- a simple field name, possibly an alias, indicating a vector or scalar field
- (since Standards Version 7:) optionally, a dot (.) followed by a representation suffix.
A representation suffix may be used used to extract a real number from a complex value. The available suffixes (listed here with their preceding dot) and their meanings are:
- .a
the argument of the input, that is, the angle (in radians) between the positive real axis and the input. The argument is in the range [-pi, pi], and a branch cut exists along the negative real axis. At the branch cut, -pi is returned if the imaginary part is -0, and pi is returned if the imaginary part is +0. If the input is zero, zero is returned.
- .i
the imaginary part of the input (i.e. the projection of the input onto the imaginary axis)
- .m
the modulus of the input (i.e. its absolue value).
- .r
the real part of the input (i.e. the projection of the input onto the real axis)
- .z
(since Standards Version 10:) the identity representation: it returns the full complex value, equivalent to simply omitting the suffix completely. It is only needed in certain cases to force the correct interpretation of a field code in the presence of a namespace tag. To wit, the field code
name.r
may be interpreted as the real-part (via the .r representation suffix) of the field called name. (if such a field exists). To refer to a field called r in the name namespace, the field code must be written:
name.r.z
NB: The first interpretation only occurs with valid representation suffixes; the field code:
name.q
is interpreted as the field q in the name namespace because .q is not a valid representation suffix. Furthermore, ambiguity arises only if both fields "name" and "name.r" are defined. if the field "name" does not exist, but the field "name.r" does, then the original field code is not ambiguous. This is the only representation suffix allowed on SARRAY, SINDIR, and STRING field codes.
If the specified field is purely real, representations are calculated as if the imaginary part were equal to +0.
History
This document describes Versions 10 and earlier of the Dirfile Standards.
Version 10 of the Standards (January 2017) added the INDIR, SARRAY, and SINDIR field types, namespaces, the /NAMESPACE directive, the flac encoding scheme, and the .z representation suffix.
Version 9 of the Standards (April 2012) added the MPLEX and WINDOW field types, the /ALIAS and /HIDDEN directives, the affixes to /INCLUDE, the sie, zzip, and zzslim encoding schemes, along with the optional enc_datum token to /ENCODING. It permitted specification of integer literals in octal and hexadecimal. Finally, it deprecated the type aliases FLOAT and DOUBLE.
Version 8 of the Standards (November 2010) added the DIVIDE, RECIP, and CARRAY field types, made the forward slash on reserved words mandatory, and prohibited using the single-character, and the lzma encoding scheme.)
Referenced By
checkdirfile(1), dirfile(5), dirfile-encoding(5), gd_add(3), gd_add_alias(3), gd_add_bit(3), gd_add_spec(3), gd_alter_affixes(3), gd_alter_bit(3), gd_alter_encoding(3), gd_alter_endianness(3), gd_alter_entry(3), gd_alter_frameoffset(3), gd_alter_spec(3), gd_endianness(3), gd_entry(3), gd_fragment_affixes(3), gd_fragment_namespace(3), gd_frameoffset(3), gd_getdata(3), gd_include(3), gd_linterp_tablename(3), gd_madd_bit(3), gd_move(3), gd_mplex_lookback(3), gd_nframes(3), gd_open(3), gd_protection(3), gd_raw_filename(3), gd_reference(3), gd_rename(3), gd_strtok(3), gd_uninclude(3). | https://www.mankier.com/5/dirfile-format | CC-MAIN-2018-26 | refinedweb | 5,252 | 53.71 |
Hello, I’m new in this comunity! I feel a little bit embarassed to post a so simple question but after a few hours of triyng, I haven’t yet found an answer.
I’m working on a project that uses a number of servos. I’m trying to initialize them by using this code:
#include <Servo.h>
const int numOfServos = 1;
const int servosPins = {9};
const int servosPosition = {0};
Servo servosArray[numOfServos];
(in this case i’m using just one servo, but the idea is to extend that number as far as it works)
Up to that I have no problem, but the error occurs in the Setup function:
void setup() {
for(int k = 0; k < numOfServos; k++) {
servosArray[k].attach(servosPins[k]);
servosArray[k].write(servosPosition[k]);
}
}
Especially with the argument of the attach and write methods.
I get the error: “invalid types ‘const int[int]’ for array subscript” | https://forum.arduino.cc/t/initialize-servo-array-on-setup/528881 | CC-MAIN-2022-21 | refinedweb | 151 | 60.55 |
Question:
I'm writing a class which I intend to use to create subroutines, constructor as following:
def __init__(self,menuText,RPC_params,RPC_call): #Treat the params #Call the given RPC_call with the treated params
The problem is that I want to call the function on the pattern "rpc.serve.(function name here)(params)", where rpc is a serverProxy object that I'm using to call XMLRPC functions, and serve.-function name- is the method I'm calling on the XMLRPC-server.
I've looked at Calling a function from a string with the function's name in Python, but seeing how my serverProxy object doesnt know which "remote attributes" it have, I cant use the getattr() function to retrieve the method.
I've seen a example by making a dictionary to call a given function, but is there no way to make the function truly dynamic by creating the function call as you would create a String? Like running a String as a function?
Solution:1
You can use
getattr to get the function name from the server proxy, so calling the function like this will work:
getattr(rpc, function_name)(*params)
Note:If u also have question or solution just comment us below or mail us on toontricks1994@gmail.com
EmoticonEmoticon | http://www.toontricks.com/2018/05/tutorial-dynamic-function-calls-in.html | CC-MAIN-2018-51 | refinedweb | 211 | 52.43 |
Learn how easy it is to sync an existing GitHub or Google Code repo to a SourceForge project! See Demo
You can subscribe to this list here.
Showing
2
results of 2
Quoting Larry Bates <larry.bates@...>:
> Joel Parker wrote:
> > I'm packaging an existing program that has roughly this file structure:
> >
> > program.py
> > data\
> > datafile.xml
> >
> > When I use py2exe to package it, the program works, but fails when it looks
> for
> > dist\library.zip\data\datafile.xml
> >
> > Is there a way to tell py2exe to include a set of data files in
> library.zip?
> >
> > Thanks,
> >
>
> This question seems to come up about every couple of weeks. IMHO the best
> answer is to not try to make py2exe into an installer but rather user Inno
> Installer to include data files, documentation, configuration files, etc.
> into
> your application. Once you learn how to use Inno, it will make your
> applications much easier to distribute and they will be better because you
> can
> include the necessary "pieces" for users. You can include any/all data files
> that may need to go along with your application. You can put appropriate
> shortcuts on desktop, quicklaunch, start button. Include shortcuts to
> documentation, unit tests, etc. Make necessary registry entries. Register
> COM
> objects. Install and start services. Basically anything you need to do
> during
> the install and it all gets wrapped up into a single setup.exe file for
> distribution. It is WELL worth your while to spend a couple of days working
> with Inno Installer, it will be repaid MANY times over in time saved later.
Hi Larry, thank you for the help.
My question isn't so much about packaging data files with the program (I'm
installing some other things as well), but about where my program is expecting
to find the files.
In my example above, it's looking for the data directory in the library.zip
file. Is this common? Will it require manually inserting files into library.zip
using an installer package, or a change in the base program to make it look
somewhere else?
Thanks,
--
Joel Parker
Does anyone could help me generate an executable program that functions for the bounce.py with py2exe?
Thank you
bounce.py:
from visual import *
from numpy import *
ball = sphere(pos=(0,4,0), color=color.red)
example = zeros((10,10),dtype=float)
print (example)
setup.py:
from distutils.core import setup
import py2exe
opts = {
'py2exe': {'packages':['numarray']
}
}
setup(windows=[{"script" : "bounce.py"}], options=opts)
_________________________________________________________________
Invite your mail contacts to join your friends list with Windows Live Spaces. It's easy! | http://sourceforge.net/p/py2exe/mailman/py2exe-users/?viewmonth=200805&viewday=1 | CC-MAIN-2015-22 | refinedweb | 430 | 68.16 |
git / github workflow in stash
With the addition (in latest master and dev brsnches) of the new
ghcommand for interfacing with github (create, fork, or issue pull requests), I figure it is time to start writing down some simple git in stash tutorials. This is probably also a good way to expose and work on the remaining bugs in the stash git implementation.
hey @jonb If I use version control at all, I tend to make repos in bitbucket since it lets you make private repos unlike github where you have to pay for that privilege. Usually I'm doing ad-hoc stuff with all sorts of either private apis / access keys all over the place so I don't feel comfortable putting stuff into public until it's more or less being wrapped up. Stash will work with other git services like bitbucket correct?
I've been using dropbox syncing constantly for working on apps/scripts between my mac and iOS, and I really would like to move over to using legit version control exclusively for everything, but to be honest I just don't feel like I have enough of a mastery of git to trust myself with it and it feels so finicky in so many ways.
TBH I would love to see version control as a built-in first-party citizen of Pythonista kind of like it is in xCode. Perhaps @jonb @omz collaboration to bake stash into Pythonista?
stash git should, in theory, work with bitbucket.
If you have the ability to use ssh, this works the best -- slightly slower but more robust. you would then use git+ssh:// type urls instead of https. The gh module is specifically for creating /forking repos in github, if bitbucket has a similar api i suspect we could extend this to a more generic repo provider interface.
Has any progress been made with interfacing StaSh with Bitbucket? I am trying to share a project between my iPad and iPhone on Pythonista and I would like to use Bitbucket + StaSh to do revision control and be able to push from one device and pull from another. Bitbucket (as mentioned above) allows me to keep the project private without paying monthly for the service, and as this is a non-revenue project I do not wish to pay for monthly hosting.
I have found, however, that StaSh isn't playing nicely with Bitbucket at the moment. Any advice?
I tried the command:
git clone localFolder
And got the error:
class 'dulwich.errors.GitProtocolError'>: unexpected http response 401
Figured out the 401 error. It is complaining about credentials.
I created a new local repo on my iPad and tried to push it to an empty repo on Bitbucket. I added a remote for the Bitbucket repo using:
git remote [name] [URL]
And then tried to push using:
Git push [name] Username: [user] Password: [pass]
But I got the same error as above
class 'dulwich.errors.GitProtocolError'>: unexpected http response 401
Is there some reason that StaSh can't talk to Bitbucket? Can I use SSH keys? If so, how so I do that?
Also, in the process I added a bad remote to my repo on the iPad. Is there some StaSh command to remove a remote? The help doesn't list one. I suppose I could dig through the source to find how it added one and reverse that... if I had to.
EDIT:
Trashed the repo on the iPad because there is no way to remove remotes.
Went back to Bitbucket and turned the repo off of "Private"
Then I went back to Pythonista -> StaSh window and tried clone repo again using:
git clone [URL]
And successfully got the repo to clone on my iPad
I then added some files to the repo from the iPad by moving files from another folder in Pythonista and then added to the repo using:
git add * git commit Commit Message: [message] Author Name: [name] Save [y/n]y Author Email: [email] Save [y/n]y
This worked okay too, but then when I tried to push back to the repo using:
git push Username: [user] Password: [pass]
I ALWAYS get the same error from before:
class 'dulwich.errors.GitProtocolError'>: unexpected http response 401
And I am certain my credentials are correct for the repo (I have logged into and out of it on the website many times today to verify this). I have tried using my email instead of username. Nothing I try works to allow me to push back to the Bitbucket repo.
Would it be possible for someone to try this and let me know what I am doing wrong, or add Bitbucket capability to dulwich if necessary? Thanks!!
I get the same issue with github.com private repos. I cloned a private repo, pull changes, edited a file locally, comit and tried to push. The push fails with 401.
Its able to clone and pull from a private repo, but cant push.
Is anyone able to use stash git with private repos? Wondering if its only a few of us having issues.
Have you tried Working Copy?
@khilnani
It occurs to me that when we add the use/pass to the repo for push, we might not be honorng an existing password, thus end up with something like
@user:pass@github.com" rel="nofollow">.....
Can you try creating a new remote which points to the bare url without any buser/pass info? (i.e), then push to that url?
also, i should add that ssh might be better for private repos in general. you iust have to set up the keys and then it just works.
Have you tried Working Copy?
I already have that but wanted to find something inside pythonista since i have projects with many files to open in app back and forth :(
Good suggestion tho.
also, i should add that ssh might be better for private repos in general. you iust have to set up the keys and then it just works.
Hmmm.. For some reason i didnt try ssh when i use that everywhere else. I guess, i assumed it wouldnt work since Pythonista is an 'app'.
I did try your other suggestion and that didnt work. Seems like on ssh keys work with git@ urls and not https at all.
Might send a PR to ... Noticed it mentions ssh in an upcoming update :)
Thanks for helping!!! @JonB
did ssh work for you?
Here is the section of an updated git tutorial (though i can't push it because I am in the midst of removing the gittle dependencies in stash git, so dont have a fully working git)
Setting up ssh keys
In some cases, you may need or want to use ssh instead of https. This is a somewhat more reliable way of pushing, though it can be a little slower. This might also work for private repos.
With guthub, the process is facilitated by the
ghcommand in stash:
[git_tutorial]$ gh create_key stash
creates a key, and adds it to your github account. Note this command uses the stored github password, so you will have to create a key in your keychain, or better yet just use git push on an existing repo.
If you are using a non-github ssh, bitbucket, etc, you can create an ssh key
$ ssh-keygen -trsa -b2048 pbcopy ~/.ssh/id_rsa.pub
creates a key, and copies the public key to the clipboard. You can then paste this into whatever you use for setting up the keys on the server.
Next, we need to add a remote to an existing repo:
[git_tutorial]$ git remote originssh ssh://git@github.com/jsbain/stash_git_tutorial.git
Now you can fetch/push,etc from originssh
[git_tutorial] git push originssh
@JonB ssh keys worked after setting up in pythonista and github. Thanks!
Am still working through a few scenarios:
1 - moving files. Noticed there is no git mv command.
2 - adding directories with sub directories. Seems like need to add complete relative path of the files in each directory (* works) while in the main dir with .git
3 - files deleted without git rm. eg. deleting from pythonista GUI. git status, git pull etc. all fail with a IO exception
4 - Files deleted remotely - after a git pull, any file deleted remotely is auto staged to be added instead of being removed locally
yes, removing files is probably not handled well... how does git handle rm without git rm?.
I am in the process of removing gittle dependencies, and will then be able to use the most recent dulwich. Some improvements have already been made to dulwich.porcelain, and this would let us make pull requests to dulwich...
yes, removing files is probably not handled well... how does git handle rm without git rm?
from what ive seen, it treats them as if git rm was called and auto stages the removes.
git fetchOR
git fetch originOR even
fit fetch git@gihub...gives an error -
stash: <type 'exceptions.Exception'>: url must match a remote name, or must start with http:// or https://
i tried git reset after manually deleting a file. didnt seem to help. Still get
stash: <type 'exceptions.OSError'>: [Errno 2] No such file or directory: '/private/var/mobile/Containers/Shared/AppGroup/A90BB332.......ACD7F/Pythonista3/Documents/pythonista-scripts/test'
I am in the process of removing gittle dependencies, and will then be able to use the most recent dulwich. Some improvements have already been made to dulwich.porcelain, and this would let us make pull requests to dulwich...
cool !!!!
Thanks so much for helping out!!!
Tried
gh create_key stash, which failed because it could not import
jwt. Tried pip installing jwt, which partially failed installing dependency
typing. Trying the original command again, which failed with syntax error in
jwk.py(which makes sense if it depends on typing).
This with latest vanilla version of Pythonista 3 and an updated version of stash.
hmm, @mikael, can you post the traceback? I don't have jwt installed, must be a module collision somewhere
@JonB, not sure how to get a traceback. Here's what happens:
gh create_key stash no github found in /private/var/mobile/Containers/Shared/AppGroup/447A26CB-FA57-4E8A-8C34-082F55AD274F/Pythonista3/Documents/site-packages/stash/lib Installing pygithub master ... Opening: Save as: /private/var/mobile/Containers/Data/Application/33F31BE9-F7A7-4092-8AFE-9E3C77723213/tmp//pygithub.zip 3168996 Done stash: <type 'exceptions.ImportError'>: No module named jwt
stashconf py_traceback 1
then run the command
Thanks. As I started stash to do this, I got a tip with that exact same line for enabling tracebacks. Funny - or scary, depending on how you see the world.
Here's the trace:
Traceback (most recent call last): File "/private/var/mobile/Containers/Shared/AppGroup/447A26CB-FA57-4E8A-8C34-082F55AD274F/Pythonista3/Documents/site-packages/stash/system/shruntime.py", line 498, in exec_py_file exec code in namespace, namespace File "site-packages/stash/bin/gh.py", line 44, in <module> import github File "/private/var/mobile/Containers/Shared/AppGroup/447A26CB-FA57-4E8A-8C34-082F55AD274F/Pythonista3/Documents/site-packages/stash/lib/github/__init__.py", line 37, in <module> from MainClass import Github, GithubIntegration File "/private/var/mobile/Containers/Shared/AppGroup/447A26CB-FA57-4E8A-8C34-082F55AD274F/Pythonista3/Documents/site-packages/stash/lib/github/MainClass.py", line 34, in <module> import jwt ImportError: No module named jwt
Hmm, okay, pygithub now depends on pyjwt. It used to be dependency free, so could just install it directly.
pip install pyjwt
should fix the issue, though i will have to update gh to maybe use pip to install pygithub (or install a specific version). | https://forum.omz-software.com/topic/2829/git-github-workflow-in-stash | CC-MAIN-2021-31 | refinedweb | 1,940 | 72.56 |
📸 Swift Camera — Part 1
Create a custom camera view using AVFoundation
iOS 11 brings lots of new cool features like Machine Learning and Augmented Reality. So you might want to test those features or create awesome apps. But if you notice that some of them need a custom camera and accessing camera frames. iOS have lots of API’s for us to access device camera, capture image and process it.
AVFoundation is the framework you should be looking at. Since this framework is huge and there is lots of ways to achieve the desired features I decided to write set of blog posts about the following.
- Create custom camera view
- Take picture from custom camera
- Record Video using
- Detect Face and Scan QR code
(If you want some specific things please do ask me in the comments. I will try to write/learn about it)
To all those nerds who wants to skip the blog post and see the code in action. I got you covered. Here is the Github Repo. Keep an eye on this repo because I will be adding all features in the same app and if you want to improve the code quality PR/Issues are welcome.
For others,
Let’s create a custom camera
Step 1:
Create new project in xCode by selecting File → New → Project → Single View Application (under iOS tab)
Give it a nice name and select swift as Language. Click Next → Create and save the project file. if everything goes well, you could see a project like below
Step 2:
Select Main.storyboard ,drag
UIView to your view controller and set top , bottom , leading, trailing to 0. This view will serve as “view finder” or “preview view” for your camera.
Control + drag our preview view to
ViewController.swift and create an IBOutlet named
previewView
Step 3
At the top of your ViewController file, import
AVFoundation framework.
import AVFoundation
Create the below instance variables so that we can access anywhere in the ViewController file.
var captureSession: AVCaptureSession?
var videoPreviewLayer: AVCaptureVideoPreviewLayer?
here
captureSession helps us to transfer data between one or more device inputs like camera or microphone and view
videoPreviewLayer helps to render the camera view finder in our ViewController
Step 4
You will need a capture device, device input and preview layer to setup and start the camera. To make it simpler we will try to do everything related to camera setup in
viewDidLoad
Get an instance of the AVCaptureDevice class to initialise a device object and provide the video as the media type parameter. You can even select which capture device you can choose like a dual camera or standard camera but for this post, we will just get default device (rear camera).
let captureDevice = AVCaptureDevice.defaultDevice(withMediaType: AVMediaTypeVideo)
Get an instance of the AVCaptureDeviceInput class using the previous device object. This will serve as a middle man to attach our input device to capture device. Here there is a chance that input device might not be available, so wrap it inside a do…catch to handle errors
do {
let input = try AVCaptureDeviceInput(device: captureDevice)
} catch {
print(error)
}
Initialise our
captureSession object and add the input device to our session
captureSession = AVCaptureSession()
captureSession?.addInput(input)
Next configure our preview View so that we can see the live preview of camera.
- Create an
AVCaptureVideoPreviewLayerfrom our session
- Configure the layer to resize while maintaining original aspect
- Set preview layer frame to our ViewController view bounds
- Add the preview layer as sublayer to our
previewView
videoPreviewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
videoPreviewLayer?.videoGravity = AVLayerVideoGravityResizeAspectFill
videoPreviewLayer?.frame = view.layer.bounds
previewView.layer.addSublayer(videoPreviewLayer!)
Finally, start our
captureSession to start video capture
captureSession?.startRunning()
Step 5
Since the simulator does NOT have a camera to check our code. you might need a real device to run the code. So connect your phone and hit run, you should be see a live camera.
Meh! The app will crash because we forgot to add one last thing. That is “Privacy — Camera Usage Description” in our plist. Add this to Info.plist file and give some nice message because that is what user will see when app is requesting for camera permission.
Now if you hit run you should be seeing the magic!
There are some things which I haven’t included here like checking user permission, adding capture output. But if you got the time you can read about all of them in Apple Documentation here.
Photo Capture Programming Guide
Describes how to set up the photo capture pipeline introduced in iOS 10.
developer.apple.com
On next part, I will write about having
AVCapturePhotoOutputand use its delegate method to take a picture and save it to iPhone Camera Roll. Follow me here or on twitter to get updates. If you like this post share and comments
Part 2 and 3 are out. You can check it out from below link | https://medium.com/@rizwanm/https-medium-com-rizwanm-swift-camera-part-1-c38b8b773b2?source=post_page-----a3d6b584695b---------------------- | CC-MAIN-2019-43 | refinedweb | 812 | 62.98 |
Program to Multiply Two Numbers
#include <stdio.h> int main() { double a, b, product; printf("Enter two numbers: "); scanf("%lf %lf", &a, &b); // Calculating product product = a * b; // Result up to 2 decimal point is displayed using %.2lf printf("Product = %.2lf", product); return 0; }
Output
Enter two numbers: 2.4 1.12 Product = 2.69
In this program, the user is asked to enter two numbers which are stored in variables a and b respectively.
printf("Enter two numbers: "); scanf("%lf %lf", &a, &b);
Then, the product of a and b is evaluated and the result is stored in product.
product = a * b;
Finally, product is displayed on the screen using
printf(
).
printf("Product = %.2lf", product);
Notice that, the result is rounded off to the second decimal place using
%.2lf conversion character. | https://www.programiz.com/c-programming/examples/product-numbers | CC-MAIN-2021-04 | refinedweb | 132 | 66.13 |
I think the Chart functionality can be enhanced, for example, to visualize the process of back testing (just like the strategy tester in MT5), instead of show the chart only after the testing is done, so that it is easier to watch how the strategy works in each step from the chat.
Meng Xiaofeng
@Meng Xiaofeng
Posts made by Meng Xiaofeng
- RE: Backtrader 2.0?
- RE: How to know the first index in the next function of indicator?
many thanks for anwser!
but still the question: is it possible to get the first index and use the index explicitly?
- How to know the first index in the next function of indicator?
Hi,
I am trying to make my own indicator in which i want to find which bar in the history is higher than current high.
how do I know which one is the first bar( -1, -2, -3 ....) so that the loop can have a end point?
class MyInd(bt.Indicator): lines = ('example',) def next(self): for (i = -1; i > ?; i--) if self.data.high[i] > self.data.high[0]: distance = -i; break;
Thanks!
- RE: Sell/Buy signal labels are missing in the plotting of forex data
Now I got your idea! didn't realise there is such buy/sell observer.
now i know that the backtrader has a very flexilbe design and love it more.
Just a remark: can't the Sell/buy observer be a bit smarter so that it can always plot properly?
Thanks!
-
- RE: Sell/Buy signal labels are missing in the plotting of forex data
it seems that it is because of the data in forex csv file: the number is too small,
it will work if i change the price to bigger values, such as from "000000;1.123210;1.123240;1.123200;1.123240;0"
to "000000;3210;3240;3200;3240;0".
so it looks a bug in backtrader when handling precision of the price.
- RE: Sell/Buy signal labels are missing in the plotting of forex data
@backtrader Hi sorry for the format issue, it's my first post and I have fixed it.
Could you have a look again?
Thanks!
- RE: Sell/Buy signal labels are missing in the plotting of forex data
@backtrader said in Sell/Buy signal icons are missing in the plot:
position as this
Wow, thanks for your quick reply!
The "buy/sell Icons" i meant is as following (in the black circle that i drew):
but they are missing in first picture (1 minutes csv data).
any idea?
- Sell/Buy signal labels are missing in the plotting of forex data
Hello!
I am using the 1-minutes forex csv data for backtesting, odd thing observed is that the buy/sell icons are missing in the plotting.
using daily csv data from yahoo is fine, so looks there is some tricky issue in the csv part.
Could anybody please help me with the issue?
the code is
from datetime import datetime import backtrader as bt class SmaCross(bt.SignalStrategy): def __init__(self): sma1, sma2 = bt.ind.SMA(period=1), bt.ind.SMA(period=60) crossover = bt.ind.CrossOver(sma1, sma2) self.signal_add(bt.SIGNAL_LONG, crossover) if __name__ == '__main__': # Create a cerebro entity cerebro = bt.Cerebro() # Add a strategy #cerebro.addstrategy(MyStrategy, period=15) cerebro.addstrategy(SmaCross) # sh: 000001.SS # BT: BTC-USD # SP500: ^GSPC #data0 = bt.feeds.YahooFinanceData(dataname='BTC-USD', fromdate=datetime(2018, 1, 1), # todate=datetime(2019, 6, 1), decimals=5) data0 = bt.feeds.GenericCSVData( dataname='./eurusd-1m/DAT_ASCII_EURUSD_M1_201904.csv', fromdate=datetime(2019, 4, 1), todate=datetime(2019, 4, 10), nullvalue=0.0, dtformat=('%Y%m%d %H%M%S'), #tmformat=('%H:%M:%S'), datetime=0, time=-1, high=2, low=3, open=1, close=4, volume=-1, openinterest=-1, timeframe=bt.TimeFrame.Minutes, compression = 1, separator=';', decimals=5, headers=False ) cerebro.resampledata(data0, timeframe=bt.TimeFrame.Minutes, compression=30) # Set our desired cash start cerebro.broker.setcash(1000000.0) # Add a FixedSize sizer according to the stake #cerebro.addsizer(bt.sizers.FixedSize, stake=2) cerebro.addsizer(bt.sizers.PercentSizer, percents=90) # Set the commission - 0.1% ... divide by 100 to remove the % cerebro.broker.setcommission(commission=0.0005) # Print out the starting conditions print('Starting Portfolio Value: %.2f' % cerebro.broker.getvalue()) # Run over everything cerebro.run() # Print out the final result print('Final Portfolio Value: %.2f' % cerebro.broker.getvalue()) cerebro.plot(style='bar')
The CSV data looks as following:
20190401 000000;1.123210;1.123240;1.123200;1.123240;0 20190401 000100;1.123230;1.123230;1.123140;1.123190;0 20190401 000200;1.123210;1.123310;1.123200;1.123310;0 20190401 000300;1.123330;1.123360;1.123310;1.123340;0 20190401 000400;1.123330;1.123330;1.123250;1.123250;0
The plotting: | https://community.backtrader.com/user/meng-xiaofeng | CC-MAIN-2020-10 | refinedweb | 793 | 60.11 |
Releasing today, Navi is a new and ambitious router framework that focuses on solving the complex issue of JavaScript app page routing by handling all the heavy lifting.
With an easy to use API, Navi provides:
- A first-class React integration
- Code-splitting
- Page loading transitions
- Generated JSON site maps
- Accessible page title management
- And more...
Getting Started with Navi
Navi can be attached to an existing React app, or implemented from the beginning with Create React App by running the following command:
npx create-react-app my-navi-project
Creating a new React project called `my-navi-project` under a directory of the same name with Create React App.
After moving into the project directory, you now need to install Navi as a dependency, including the React functions for it:
yarn add navi react-navi
Step 1: Create Some Routes
Within the
srcdirectory of our project, create a directory called
pagesand put an
index.jsfile inside which will be the control center of our routing.
This file uses Navi's
createSwitchand
createPageAPIs to create the routing switch and the pages for each route.
// Import dependencies from navi and react import { createPage, createSwitch } from 'navi' import * as React from 'react' import { NavLink } from 'react-navi' // Create the switch export default createSwitch({ paths: { // Create the index route '/': createPage({ title: "Navi", content: <div> <h2>My Navi React Project</h2> <p>This is the index route!</p> <nav><NavLink href='/about'>See the about page</NavLink></nav> </div> }), // Create the about route '/about': createPage({ title: "About", getContent: () => import('./About') }), } })
Creating two routes, one for the base path
/, and one for the
/about URL route.
The next step is to create the
About.jsfile within our
pagesdirectory, for which you have already set up a route above.
This page is just a React component:
import * as React from 'react' import { NavLink } from 'react-navi' export default function About() { return ( <div> <h2>This is the about page</h2> <p>This route was compiled and handled by Navi, including all the heavy lifting for SEO, creating sitemaps including this page, code-splitting, etc!</p> <nav><NavLink href="/">Back to the index</NavLink></nav> </div> ) }
A small React component that renders an about page.
Step 2: Create the Navigation
With routes in place, it's time to tell our app how to use them. You can take the
src/index.jsfile that Create React App generates and modify it slightly.
import * as React from 'react' import * as ReactDOM from 'react-dom' import { createBrowserNavigation } from 'navi' import pages from './pages' import App from './App' async function main() { let navigation = createBrowserNavigation({ pages }) // Wait until async content is ready. await navigation.steady() ReactDOM.render( <App navigation={navigation} />, document.getElementById('root') ); } // Start the app main()
The
index.js file that renders our App.
Step 3: Handle the Navigation in `App.js`
Now that Navi is set up to deal with the navigation, has routes to handle, and our app renders the App component, you need to modify the Create React App generated
App.jsfile to handle our routes inside a layout.
import React, { Component } from 'react'; import { NavProvider, NavRoute, NavNotFoundBoundary } from 'react-navi'; import logo from './logo.svg'; import './App.css'; class App extends Component { render() { return ( <NavProvider navigation={this.props.navigation}> <div className="App"> <img src={logo} <NavNotFoundBoundary render={renderNotFound}> <NavRoute /> </NavNotFoundBoundary> </div> </NavProvider> ); } } const renderNotFound = () => ( <div className="App-error"> <h1>404 - Page not found.</h1> </div> ) export default App;
The
App.js file that handles the navigation, loads the content for the route and handles a 404.
With the above, Navi's React components
NavProvider,
NavRoute, and
NavNotFoundBoundaryall work together to handle navigation based on the currently accessed route.
All of this functionality exists within a layout made with JSX that you have defined above, which wraps around the content of each route.
This is just an example of a basic setup with Navi, but it's API is much more powerful and is built to handle a lot more cases to suit your needs. Visit the documentation for Navi to find out more.
Note: This project omits the included Service Worker that comes with Create React App to be more brief since it is optional. You are free to include this yourself.
Deploying with Now
If you are not familiar with Now and want to get setup, read our Getting Started guide.
Step 1: Creating a `now.json` file
First, create a
now.jsonfile in the root of your project. This will achieve a few things:
- Build the app using the latest Now 2.0 platform.
- Set a project name for all of the project's deployments.
- Build the application using the @now/static-build builder and configure the builder to look in the build directory for the built app.
- Sets paths for Now to route users to depending on a specific path, otherwise fall back to the index.html file (the app).
{ "version": 2, "name": "my-navi-project", "builds": [ {"src": "package.json", "use": "@now/static-build", "config": { "distDir": "build" }} ], "routes": [ {"src": "^/static/(.*)", "dest": "/static/\$1"}, {"src": "^/favicon.ico", "dest": "/favicon.ico"}, {"src": "^/manifest.json", "dest": "/manifest.json"}, {"src": "^/(.*)", "dest": "/index.html"} ] }
A
now.json file that handles building your Navi project and routes users depending on specific paths.
Note: You can also statically generate each file with Navi! If you want to deploy multiple static files with a bit more configuration and some SEO benefits, read their Static Rendering guide.
Step 2: Instructing Now to Build
The
now.jsonfile just created uses the @now/static-build builder which requires instructions on how to build the app. You can pass this instruction to the builder by creating a
now-buildscript in the
package.jsonfile:
{ "scripts": { ... "now-build": "yarn build" } }
The addition of a
now-build script that instructs Now to build the application using Create React Apps existing build script.
Step 3: Deploy
Finally, the only thing left to do to get our application live is to deploy it with Now:
now
Deploying with Now.
When Now is deploying, it will provide you with a live progress indicator of your build and a link to your deployment, such as the following:
Conclusion
Navi is an exciting routing framework that does all the hard work of applying the navigation methods and API for you. We are excited to see this project grow and to see what users do with it. | https://zeit.co/blog/painless-routing-react-navi-now | CC-MAIN-2019-39 | refinedweb | 1,061 | 55.54 |
2.6.32-stable review patch. If anyone has any objections, please let us know.------------------From: Henrique de Moraes Holschuh <hmh@hmh.eng.br>commit 347a26860e2293b1347996876d3550499c7bb31f upstream.Take advantage of the new events capabilities of the backlight class tonotify userspace of backlight changes.This depends on "backlight: Allow drivers to update the core, andgenerate events on changes", by Matthew Garrett.Signed-off-by: Henrique de Moraes Holschuh <hmh@hmh.eng.br>Cc: Matthew Garrett <mjg@redhat.com>Cc: Richard Purdie <rpurdie@linux.intel.com>Signed-off-by: Len Brown <len.brown@intel.com>--- Documentation/laptops/thinkpad-acpi.txt | 51 +++++++------------------------- drivers/platform/x86/thinkpad_acpi.c | 24 ++++++++++++++- 2 files changed, 35 insertions(+), 40 deletions(-)--- a/Documentation/laptops/thinkpad-acpi.txt+++ b/Documentation/laptops/thinkpad-acpi.txt@@ -460,6 +460,8 @@ event code Key Notes For Lenovo ThinkPads with a new BIOS, it has to be handled either by the ACPI OSI, or by userspace.+ The driver does the right thing,+ never mess with this. 0x1011 0x10 FN+END Brightness down. See brightness up for details. @@ -582,46 +584,15 @@ with hotkey_report_mode. Brightness hotkey notes: -These are the current sane choices for brightness key mapping in-thinkpad-acpi:+Don't mess with the brightness hotkeys in a Thinkpad. If you want+notifications for OSD, use the sysfs backlight class event support. -For IBM and Lenovo models *without* ACPI backlight control (the ones on-which thinkpad-acpi will autoload its backlight interface by default,-and on which ACPI video does not export a backlight interface):--1. Don't enable or map the brightness hotkeys in thinkpad-acpi, as- these older firmware versions unfortunately won't respect the hotkey- mask for brightness keys anyway, and always reacts to them. This- usually work fine, unless X.org drivers are doing something to block- the BIOS. In that case, use (3) below. This is the default mode of- operation.--2. Enable the hotkeys, but map them to something else that is NOT- KEY_BRIGHTNESS_UP/DOWN or any other keycode that would cause- userspace to try to change the backlight level, and use that as an- on-screen-display hint.--3. IF AND ONLY IF X.org drivers find a way to block the firmware from- automatically changing the brightness, enable the hotkeys and map- them to KEY_BRIGHTNESS_UP and KEY_BRIGHTNESS_DOWN, and feed that to- something that calls xbacklight. thinkpad-acpi will not be able to- change brightness in that case either, so you should disable its- backlight interface.--For Lenovo models *with* ACPI backlight control:--1. Load up ACPI video and use that. ACPI video will report ACPI- events for brightness change keys. Do not mess with thinkpad-acpi- defaults in this case. thinkpad-acpi should not have anything to do- with backlight events in a scenario where ACPI video is loaded:- brightness hotkeys must be disabled, and the backlight interface is- to be kept disabled as well. This is the default mode of operation.--2. Do *NOT* load up ACPI video, enable the hotkeys in thinkpad-acpi,- and map them to KEY_BRIGHTNESS_UP and KEY_BRIGHTNESS_DOWN. Process- these keys on userspace somehow (e.g. by calling xbacklight).- The driver will do this automatically if it detects that ACPI video- has been disabled.@@ -1465,3 +1436,5 @@ Sysfs interface changelog: and it is always able to disable hot keys. Very old thinkpads are properly supported. hotkey_bios_mask is deprecated and marked for removal.++0x020600: Marker for backlight change event support.--- a/drivers/platform/x86/thinkpad_acpi.c+++ b/drivers/platform/x86/thinkpad_acpi.c@@ -22,7 +22,7 @@ */ #define TPACPI_VERSION "0.23"-#define TPACPI_SYSFS_VERSION 0x020500+#define TPACPI_SYSFS_VERSION 0x020600 /* * Changelog:@@ -6083,6 +6083,12 @@ static int brightness_get(struct backlig return status & TP_EC_BACKLIGHT_LVLMSK; } +static void tpacpi_brightness_notify_change(void)+{+ backlight_force_update(ibm_backlight_device,+ BACKLIGHT_UPDATE_HOTKEY);+}+ static struct backlight_ops ibm_backlight_data = { .get_brightness = brightness_get, .update_status = brightness_update_status,@@ -6237,6 +6243,12 @@ static int __init brightness_init(struct ibm_backlight_device->props.brightness = b & TP_EC_BACKLIGHT_LVLMSK; backlight_update_status(ibm_backlight_device); + vdbg_printk(TPACPI_DBG_INIT | TPACPI_DBG_BRGHT,+ "brightness: registering brightness hotkeys "+ "as change notification\n");+ tpacpi_hotkey_driver_mask_set(hotkey_driver_mask+ | TP_ACPI_HKEY_BRGHTUP_MASK+ | TP_ACPI_HKEY_BRGHTDWN_MASK);; return 0; } @@ -6313,6 +6325,9 @@ static int brightness_write(char *buf) * Doing it this way makes the syscall restartable in case of EINTR */ rc = brightness_set(level);+ if (!rc && ibm_backlight_device)+ backlight_force_update(ibm_backlight_device,+ BACKLIGHT_UPDATE_SYSFS); return (rc == -EINTR)? -ERESTARTSYS : rc; } @@ -7712,6 +7727,13 @@ static struct ibm_struct fan_driver_data */ static void tpacpi_driver_event(const unsigned int hkey_event) {+ if (ibm_backlight_device) {+ switch (hkey_event) {+ case TP_HKEY_EV_BRGHT_UP:+ case TP_HKEY_EV_BRGHT_DOWN:+ tpacpi_brightness_notify_change();+ }+ } } | https://lkml.org/lkml/2010/4/22/506 | CC-MAIN-2018-30 | refinedweb | 714 | 59.19 |
how to add/subtract 2 days in the target day in python?
I have a problem in adding or subtracting 2 days to the target day... here is my code :
import datetime target_date = datetime.date(2011,2,7)
thanks
Answers
Use datetime.timedelta:
import datetime target_date = datetime.date(2011,2,7) delta = datetime.timedelta(days=2) new_date = target_date - delta print new_date # 2011-02-05
Need Your Help
How do I search through a folder for the filename that matches a regular expression using Python?
smoothdivscroll logo parade dont start
Implement Repository Pattern in Asp.Net MVC with SOA architecture
wcf unit-testing asp.net-mvc-4 architecture repository-patternWe have starting new project in our company. We finalize the architecture as follows | http://unixresources.net/faq/8351150.shtml | CC-MAIN-2018-43 | refinedweb | 122 | 53.68 |
Closed Bug 1120003 Opened 6 years ago Closed 6 years ago
direct calls code that touches network state hard to reason about, has issues
Categories
(Hello (Loop) :: Client, defect, P2)
Tracking
(firefox36-)
Blocking Flags:
People
(Reporter: dmose, Assigned: jaws)
References
Details
Attachments
(1 file, 4 obsolete files)
+++ This bug was initially created as a clone of Bug #1118061 +++ [Tracking Requested - why for this release]: Given this amount of user training, I'm concerned that a substantial number of users may interpret every non-answer or busy as "the system is broken right now", which would effectively make the product seem extremely broken to those people, even when it's not. In the short term, we need to audit all the possible callStateReasons for call termination and expose any appropriate reasons as "contact unavailable" rather than "something went wrong". This includes the callStateReasons that come from the websocket, as well as the various other things that come from other parts of the client. Medium-term (i.e. in another bug), we may want to stop overloading that variable with data sourced from different places. One known issue is that the "closed" reason is sent by the server when someone clicks the "X" gadget in the call window. The bug here _might_ be that the unload handler on that window isn't cleaning up the websocket and sending a better reason (presumably "reject").
> Medium-term (i.e. in another bug), we may want to stop overloading that > variable with data sourced from different places. Or at least namespace them to avoid collisions.
OS: Mac OS X → All
Hardware: x86 → All
QA Contact: dmose
Assignee: nobody → jaws
Status: NEW → ASSIGNED
Hey Mark, can you look over the changes that Dan and I put together here? Thanks! We want to abstract the FAILURE_REASONS but don't want to do it in the same patch because it will add extra complexity for the reviews and also code archaeology.
Attachment #8550431 - Attachment is obsolete: true
Attachment #8550465 - Flags: review?(standard8)
Comment on attachment 8550465 [details] [diff] [review] Patch Review of attachment 8550465 [details] [diff] [review]: ----------------------------------------------------------------- From what I can tell, there's no actual functionality change in this patch - if that's right, then the patch title needs updating to mention that this is reworking the websocket and rest codes into constants. If I'm not right, please re-request review pointing out what I've missed. ::: browser/components/loop/content/shared/js/conversationStore.js @@ +376,3 @@ > console.error("Failed to get outgoing call data", err); > var failureReason = "setup"; > if (err.errno == 122) { Shouldn't we change the 122 to REST_ERRNOS.USER_UNAVAILABLE as well? ::: browser/components/loop/content/shared/js/websocket.js @@ +230,4 @@ > * @param {Object} event The websocket onmessage event. > */ > _onmessage: function(event) { > + var msgObject; I think msgData would be clearer than msgObject. ::: browser/components/loop/test/standalone/webapp_test.js @@ +20,5 @@ > stubGetPermsAndCacheMedia, > fakeAudioXHR, > dispatcher, > + feedbackStore, > + WEBSOCKET_REASONS = loop.shared.utils.WEBSOCKET_REASONS;; nit: double ;
Attachment #8550465 - Flags: review?(standard8) → review+
(In reply to Mark Banner (:standard8) from comment #8) > From what I can tell, there's no actual functionality change in this patch - > if that's right, then the patch title needs updating to mention that this is > reworking the websocket and rest codes into constants. That's right; will fix. > ::: browser/components/loop/content/shared/js/conversationStore.js > @@ +376,3 @@ > > console.error("Failed to get outgoing call data", err); > > var failureReason = "setup"; > > if (err.errno == 122) { > > Shouldn't we change the 122 to REST_ERRNOS.USER_UNAVAILABLE as well? Fixed, and this uncovered a bug where we weren't importing that either, so I've fixed that as well. > ::: browser/components/loop/content/shared/js/websocket.js > @@ +230,4 @@ > > * @param {Object} event The websocket onmessage event. > > */ > > _onmessage: function(event) { > > + var msgObject; > > I think msgData would be clearer than msgObject. Agred; fixed. > > ::: browser/components/loop/test/standalone/webapp_test.js > @@ +20,5 @@ > > stubGetPermsAndCacheMedia, > > fakeAudioXHR, > > dispatcher, > > + feedbackStore, > > + WEBSOCKET_REASONS = loop.shared.utils.WEBSOCKET_REASONS;; > > nit: double ; Fixed.
Attachment #8551941 - Attachment description: Hoist Loop REST errnos and websocket reasons, patch=jaws,dmose, r=Standard8 → [landed fx-team] Hoist Loop REST errnos and websocket reasons, patch=jaws,dmose, r=Standard8
Attachment #8551941 - Flags: review+
Flags: in-testsuite+
The previous commit missed updating webapp.js, I've pushed a fix with rs=mikedeboer over irc:
Note that any uplifts will want both commits.
backlog: --- → Fx38+
The two patches in here are code cleanup. They only need to be uplifted if something that depends on them gets uplifted. I believe the investigation results are now captured in other bugs and documentation.
Status: ASSIGNED → RESOLVED
Closed: 6 years ago
Keywords: leave-open, ux-userfeedback
Resolution: --- → FIXED
Summary: direct calls sometimes tell the user "something went wrong" when they should say "caller unavailable", part 2 → direct calls code that touches network state hard to reason about, has issues
Iteration: --- → 38.1 - 26 Jan
Does not seem important enough to track.
tracking-firefox36: ? → - | https://bugzilla.mozilla.org/show_bug.cgi?id=1120003 | CC-MAIN-2020-45 | refinedweb | 814 | 52.9 |
.
48 thoughts on “Data about Data”
Stupid question as i have no clue of about the details of FS semantics, but why is nobody using xattr’s for metadata about files?
Speed?
anon:
xattrs have various issues:
They are not always that efficient. Especially for large values. Although this is slightly better on modern filesystems.
They are not supported on all filesystems. FAT is the most common removable media FS for instance and it has no xattr support.
They require write permissions to set.
They are shared between all users, whereas we want desktop metadata to be per-user based. Not everyone wants to have the same emblems, etc for shared files.
In order not to scribble over each other’s metadata, should apps use namespaced attribute names? e.g.
metadata::nautilus-key-name
or
metadata::org.gnome.evince.key-name
instead of just both using
metadata::key-name
someone:
Yes, that sounds like a good idea. We can have a set of cross desktop standard keys, then application specific keys can be prefixed with the app name.
metadata::nautilus- seems like a good approach. Or maybe metadata::nautilus:: (or is that confusing?)
Just app name is generally unique (since /usr/bin is a shared namespace).
Maybe ?
Very, very nice. We (gedit) are very happy with this (gedit also has an xml metadata format which we are unhappy with). One question popped up. gedit has a file browser plugin which shows a file listing. It has support for showing emblems, but getting emblems for files was never implemented. So I guess with this you can simply query for maybe metadata::nautilus-emblems? What is the result of that? I can imagine a list of maybe file names, or theme icon names? Also, how does it related to the GEmblem stuff?
It’s a pity extended attributes are not suitable.
Perhaps one could use xattrs if possible and fall back
to your scheme if not available.
That would have the benefit of possibly being faster and also
the attributes would be copied with cp and tar etc.
It might not be worth the bother though?
commenting on your specific points…
alexl wrote:
> xattrs have various issues:
> They are not always that efficient. Especially for large values.
> Although this is slightly better on modern filesystems.
Surely they are or can made be as efficient as your scheme
> They are not supported on all filesystems. FAT is the most common
> removable media FS for instance and it has no xattr support.
I wouldn’t see this as a major issue TBH.
If you want you could fallback to your scheme as I mentioned above
> They require write permissions to set.
True. Note the permissions on xattrs on solaris and NFS4 are
independent of the file permissions I think.
> They are shared between all users, whereas we want desktop metadata > to be per-user based. Not everyone wants to have the same emblems, > etc for shared files.
You could just add the user id as an xattr and ignore files without a matching user.
I was going to suggest metadata::nautilus::* but wasn’t sure more than one :: was supported in file attribute names.
Alexl, you’re a hero! Looking forward to your postcard :).
Just curious, what’s the performance differential within Nautilus itself? Is it any faster? Does it use less resources? Vice versa?
Sounds like a great plan.
Does this mean emblems will work in gio now?
Tomeu Vizoso:
Thats the old tracker metadata standard, even tracker is not using that anymore.
Pádraig Brady:
The performance issue with xattrs comes from them being stored in a separate block, which cause extra seeks to open, something which is comparable to file open in cost (i.e. like mimetype sniffing very expensive, see my various post about this). xattrs are stored in the inode until they don’t fit, and recent filesystems have larger inodes by default, so its better there. However, once you pass a certain size you still get the performance issue.
Could it be fixed? Of course, kernel space can do whatever userspace can do. But are the kernel people interested in solving this issue? Not the once i’ve spoken to at least.
William Lachance:
I haven’t measured, but I don’t think its a large difference, the previous nautilus code used in-process hashtable lookups, so its fairly efficient too, although limited in scope to nautilus.
Axel:
Well, the emblem data is availible, we still need to read it in various places like the file selector. But its doable.
Sorry for the stupid question, but for what data is this metadata store being used? So far I found mentions of emblems and icon locations… What else? Icon sizes and comments probably, also thumbnails?
I’m wondering what happens if the metadata store gets lost of corrupted – is that “important” data, or is it more “nice-to-have”?
oliver:
For nautilus the current use is:
custom icon
per-folder default view
per-folder background
per folder (spatial) geometry
per folder icon view and list view settings (colums, zoom level, etc)
icon position in manual layout mode (e.g. on desktop)
icon scaling
emblems
file annotations (a note you can add on a file)
thumbnails are stored in another way (see thumbnail spec)
Would this fix the bug that when I rename or move a folder with photos and videos inside, and I open the folder then all thumbnails has gone and need to regenerate again??
just asking, anyway thanks for your hard work!!
Christian:
Thumbnail handling is not affected
I think data should be *shared as far as possible*. (the user doesn’t care about what an application’s “relation” to a specific file is, the user cares about his/her relation to the file!)
Cryptic metadata like the current nautilus is a good example of a bad idea — no other applications access and use emblems.
+1 for xattrs if it works on modern desktop filesystems, like ext4.
+1 on storing _without_ nautilus prefix when It’s *user data*, not *user config*. Example of data: Emblems, annotations; example of config: icon size and position. The user data should be available in all desktop applications, not just nautilus!
ulrik:
I don’t quite understand what you mean by relationship to apps? Sounds weird. Keys are namespaced to avoid conflicts with multiple apps using the same name for what may be two different things. This is standard computer science and is not something that users will really see. Of course we would not use such namespacing for shared keys. The name of the key and wheter or not other apps use emblem should be unrelated.
For ext4, xattrs are not significantly different from ext3, except that the default inode size is larger so more data fit before things go slow.
And, all the other reasons xattrs are not a good idea still hold.
This generally sounds pretty solid. Am I correct in thinking that all clients on the *same* machine share access to the journal and see the changes before rotation? Say, nautilus adds some metadata and also does something that causes a file to be added to “recent items” – if gnome-shell is watching the items and updates its display, will it immediately have access to the new metadata (to show emblems, say)?
owen: Yes, all clients see journal updates immediately, since they share the mapping, the only uncertain thing is whether that reaches the disk or not.
alexl:
Just making the point that if the user marks a file “urgent” or “green” (or any emblem or label), for the user it’s the _file_ that is important and not what the file is like in nautilus.. I think it is clear in the “new” Gnome — with activities, centered around documents, nothing should be unique just for nautilus, none of the user data about files should be specific to nautilus, it should *be specific to the file* across all applications in the desktop.
That’s why emblems are pointless, since they only show up in nautilus.
ulrik:
Emblems only show up in nautilus because nothing else can read the old nautilus metadata store. With this metadata store all apps can read and use file emblems.
Now, emblems are clearly a global metadata that should not use a prefixed name. However other data may be truly application specific. Take for example the coordinates of the icons on the desktop directory. Thats not really useful for other apps, and if another app wants to store an “icon_pos” key its likely that sharing this with nautilus (i.e. use the same name) would break shit.
You are my hero. Again.
Hotness! Would it be possible for Totem to store the last playback position, subtitle and audio track choices from DVDs as metadata for the DVD?
This sounds very nice. Is the metadata ever resync’d if a file gets moved by “mv”, and could removable-volume metadata be stored on the volume so that you could see the same emblems on files when you move a USB key between computers?
You’re a hero for taking this on and fixing it. Metadata has been a thorn in our side since the first days of Nautilus. I chickened out of working on this for years.
Ian
At least mime-type as an xattr would be by far the most usefull (Considering the prevelant hack of name.type filenames…). Using a single xattr should also be small enough ;).
Maybe adopt a policy of ‘file creater writes mimetype’ across all linux apps….. Never sniff again!! utopia…… I guess it isn’t worth the effort :).
You are my deepest love. Great work, Alex! Looking forward to seeing the new system. We use a lot of NFS homedirs at work, so this would be awesome.
sri
I suppose the biggest disappointment with metadata for me is that it never seems portable to other systems.
I’m under the impression that under this approach, if I put files with notes and emblems on them on a USB key and hand it to my girlfriend, or if I SFTP them to another machine (even if it’s running Nautilus), the metadata won’t carry with them.
I still dream of a day when I can annotate a bunch of my files and share them with their annotations. Right now, it seems as though a few lucky file formats get their own metadata standards, like ID3v2 tags on MP3s and EXIF data on JPEGs
I wish instead of doing “gvfs-copy /tmp/testfile /tmp/testfile2”, I could use cp and any standard command and have my files’ metadata carried with. The relationship between a file and its metadata when its stored in a separate database rather than being attached to a file is so fragile. It’s the reason why I generally cannot use photo management applications that use databases (or offer to create a copy of every photo in your collection?!).
I suppose a large issue with this is that most file formats today don’t expect metadata being tagged on them, and it would be difficult to convince Apple and MicroSoft to engage in a general-onfile metadata standard, I think. And one that could be efficient, since I suppose you don’t really want to be a parsing a file for the metadata of each file.
Sigh.
Oh, and not to be a total downer/tool, I’m quite pleased with the solution that has been come up with much kudos to you for it
I hope this will help see metadata used more regularly and broadly on the desktop.
Also, perhaps in the future, I will be able to hover over files with notes and see a tooltip containing it, or something, in nautilus and other applications
I want to be like Alex when I grow up. This is awesome.
Will ondisk format be text-indexed for fast value search down the dir hierarchy ? Оr “search” works only through libtracker ?
Not sure what you mean by value search, but there is no reverse mapping or index. Search is meant to be implemented by having tracker or equivalent index the metadata.
anon:
Offtopic, but storing mimetype in xattrs *sounds* like an awesome idea!
However, I the sniffing done on Linux is really a great feature — it’s so good that file extensions are not really necessary. In comparison with OS X, if you change the file extension there it has no idea what’s in a file anymore, without file extension, no way it’s reading a pdf! (where Mac OS 9 and earlier had filetype bliss, 99% of files had well-defined filetypes in metadata, OS X is a big step backwards!)
So it *sounds* like a good idea but I still think sniffing is a real-world really neat feature.
I see the issue with not using xattrs and instead a separate metadata database as breaking ‘mv’, ‘cp’, etc and requiring the user to KNOW about the metadata database when they want to copy it along to another system. This should be more transparent to the user, and even if there are (even non-negligible) performance benefits I don’t think it benefits simplicity or clarity to the user.
However, having a global metadata spec is wonderful, so keep working on it.
For some context about performance i compared the gvfs metadata store with xattrs on ext4 in Fedora 11.
The test case is a directory with 10000 empty files, each file having one xattr key and one metadata key (same key and value in both cases, although each file has different value). Timings are made with dropped caches and with an average over 4 runs (although the times were pretty stable):
$ time gvfs-ls -a “standard::size” > /dev/null
real 0m1.231s
$ time gvfs-ls -a “standard::size,metadata::*” > /dev/null
real 0m1.364s
$ time gvfs-ls -a “standard::size,xattr::*” > /dev/null
real 0m8.336s
Things are even worse when also sniffing the file content:
$ time gvfs-ls -a “standard::size,standard::content-type” > /dev/null
real 0m6.411s
$ time gvfs-ls -a “standard::size,standard::content-type,xattr::*” > /dev/null
real 0m42.433s
$ time gvfs-ls -a “standard::size,standard::content-type,metadata::*” > /dev/null
real 0m7.654s
40 seconds???? Whats up with that?
Talked with Alex about the perf side of this on ext4 a bit today, and one of the big problems when the xattrs are stored outside the inode is that even though the xattr blocks for consecutively created files may be contiguous on disk (at best), because of ext3/4 directory hashing, readdir returns things in essentially random order. Alex retested reading xattrs on all files in a directory w/ an LD_PRELOAD which does presorting, and the time went from 33s to 2s. Bummer when things conspire against you.
Very well done! There is a way to have a look at the code?
Daniele:
The code is in gvfs git.
is this related to Zeitgeist ?
Is it possible to install glib from git (for developing something that uses metadata attributes) and be able to downgrade to the glib shipped by the distribution again?
Robin: Yes, just build in a separate location and use LD_LIBRARY_PATH to use it.
alexl: I tried that now, but the MetaData DBUS service wasn’t started. Then I added the location to the DBUS services path and then I get “DBus error org.freedesktop.DBus.Error.Spawn.ChildExited: Launch helper exited with unknown return code 1”.. Any idea what might be wrong?
Alex, I (and some others) are having trouble in Ubuntu Karmic Koala with the icon positions of symlinks on the Desktop. It has been suggested that the new metadata handling in Nautilus may be behind this. Basically, mounted drives, folders and files in the Desktop folder return to their positions on restart, but symlinks don’t. Instead they stack downwards on the left of the Desktop, messing with the carefully-chosen arrangement.
Can you see any reason why this is happening? It appeared a few weeks ago now, and could well coincide with the new metadata system release.
Stephen:
For various reasons metadata for the desktop special icons are not stored in gvfs metadata (this is due to the desktop dir being a virtual in-memory location, not a gvfs location). Instead they are stored in gconf. It could be that this is not working for some reason. | https://blogs.gnome.org/alexl/2009/06/24/data-about-data/ | CC-MAIN-2017-22 | refinedweb | 2,745 | 72.36 |
Last year I worked with the Google Translate API to translate SMS messages. After showing the rest of the team, they wanted a demo they could show off to other developers at conferences we attended. Based on that, I set out to create a frontend with React that could display the translations in real-time.
Building the WebSocket
What's a WebSocket?
For this demo, I decided that using a WebSocket would be a great solution. If you haven't used a WebSocket before, it's a protocol that allows a client and server to communicate in real-time. WebSockets are bi-directional, meaning the client and server can both send and receive messages. When you first connect to a WebSocket, the connection is made by upgrading an HTTP protocol to the WebSocket protocol and is kept alive as long as it goes uninterrupted. Once established, it provides a continuous stream of content. Exactly what we need to receive incoming, translated SMS messages.
Create the WebSocket Server in Node
As an initial step to creating the WebSockets, the server requires a path to allow for client connections. Starting with the original server file from my previous post, we can make a few minor changes to create the WebSocket server and the events and listeners required by the client.
Using the
ws package on NPM, we can quickly create what we need to get this working.
npm install ws
Once installed, include the package in your server file, and create the WebSocket server.
WS allows a
path option to set the route the client uses to connect.
const express = require('express'); const WebSocket = require('ws'); const app = express(); const server = http.createServer(app); const wss = new WebSocket.Server({ server, path: "/socket" });
With this bit of code, the client now has a place to connect to the WebSocket route
/socket. With the server ready to go, you need to now listen for a
connection event. When the client connects, the server uses the following to set up the other listeners we need:
wss.on('connection', (ws) => { ws.isAlive = true; ws.translateTo = 'en'; ws.on('pong', () => { ws.isAlive = true; }); ws.on('message', (message) => { translateTo = message; }); });
There are two main points to call out:
- On connection, we set the property
isAliveto
true, and listen for the
pongevent. This event is for the server to check and maintain a connection with the client. The server sends a
pingand responds with
pongto verify it's still a live connection.
- Here I set up
translateToas a property to store.
translateTois set through each client using a dropdown. When someone using our booth demo app selects a different language, that action sets this to translate the SMS texts into the requested language.
Keeping the Connection Alive
One essential item to be concerned with is checking for clients that disconnect. It's possible that during the disconnection process, the server may not be aware, and problems may occur. With a good friend
setInterval(), we can check if our clients are still there and reconnect them if needed.
setInterval(() => { wss.clients.forEach((ws) => { if (!ws.isAlive) return ws.terminate(); ws.isAlive = false; ws.ping(null, false, true); }); }, 10000);
Sending Messages to the Client
Now that the WebSocket is connected and monitored, we can handle the inbound messages from Nexmo, the translation, and the response to the client. The method
handleRoute needs to be updated from its original state to add the response for each client.
const handleRoute = (req, res) => { let params = req.body; if (req.method === "GET") { params = req.query } if (!params.to || !params.msisdn) { res.status(400).send({ 'error': 'This is not a valid inbound SMS message!' }); } else { wss.clients.forEach(async (client) => { let translation = await translateText(params, client.translateTo); let response = { from: obfuscateNumber(req.body.msisdn), translation: translation.translatedText, originalLanguage: translation.detectedSourceLanguage, originalMessage: params.text, translatedTo: client.translateTo } client.send(JSON.stringify(response)); }); res.status(200).end(); } };
The
wss.clients.forEach method iterates through each connection, and sends off the SMS parameters from Nexmo to the Google Translate API. Once the translation comes back, we can decide what data the front-end should have, and pass it back as a string as I've done here with
client.send(JSON.stringify(response)).
To recap what has happened here: Each client connects to the WebSocket server by calling the
/socket route and establishing a connection. An SMS message goes from the sender's phone to Nexmo, which then calls the
/inboundSMS route. The app passes the text message to Google Translate API for each connected client, and then finally sends it back to the client UI.
Next, let's build the UI parts to display it on the screen.
WebSockets with React
With the WebSocket server running, we can move on to the display of the messages on screen. Since I enjoy using React, and more importantly, React Hooks, I set out to locate something to help with connecting to WebSockets. Sure enough, I found one that fit my exact need.
The demo app UI is built with
create-react-app, and I used the Grommet framework. These topics are out of scope for this post, but you can grab my source code and follow along.
Connecting to the WebSocket
The first step here is to establish a connection and begin two-way communication. The module I found is
react-use-websocket, and it made setting this up super simple.
npm install react-use-websocket
There are tons of these React hook libraries out there that help you create some impressive functionality in a short amount of time. In this instance, importing the module and setting up a couple of items for the configuration is all it took to get a connection.
import useWebSocket from 'react-use-websocket'; const App = () => { const STATIC_OPTIONS = useMemo(() => ({ shouldReconnect: (closeEvent) => true, }), []); const protocolPrefix = window.location.protocol === 'https:' ? 'wss:' : 'ws:'; let { host } = window.location; const [sendMessage, lastMessage, readyState] = useWebSocket(`${protocolPrefix}//${host}/socket`, STATIC_OPTIONS); //... }
In the component, we import the
useWebSocket method to pass the WebSocket URL and the object
STATIC_OPTIONS as the second argument. The
useWebSocket method is a custom hook that returns the
sendMessage method,
lastMessage object from the server (which is our translated messages), and the
readyState which is an integer to give us the status of the connection.
Receiving Incoming Messages
Once
react-use-websocket makes the connection to the server, we can now start listening for messages from the
lastMessage property. When receiving incoming messages from the server, they populate here and update the component. If your server has multiple message types, you discern that information here. Since we only have one, it's an easier implementation.
const [messageHistory, setMessageHistory] = useState([]); useEffect(() => { if (lastMessage !== null) { setMessageHistory(prev => prev.concat(lastMessage)) } }, [lastMessage]); return ( <Main> {messageHistory.map((message, idx) => { let msg = JSON.parse(message.data); return ( <Box> <Text>From: {msg.from}</Text> <Heading level={2}>{msg.translation}</Heading> </Box> ) })} </Main> )
The built-in hook
useEffect runs every time the state is updated. When
lastMessage is not null, it adds the new message to the end of the previous message state array, and the UI updates using the
map function to render all of the messages. It is in the
messageHistory where all of the JSON strings we passed from the server are stored. The main functionality of our WebSocket is complete, but I still want to add a few more items.
Sending Messages to the Server
Since this is a translation demo, having more than one language is an excellent way to show the power of the Google Translate API in conjunction with Nexmo SMS messages. I created a dropdown with languages to pick. This dropdown is where bi-directional communication happens with the server, and the app sends the selected language from the client.
const languages = [ { label: "English", value: "en"}, { label: "French", value: "fr"}, { label: "German", value: "de"}, { label: "Spanish", value: "es"} ]; <Select labelKey="label" onChange={({ option }) => { sendMessage(option.value) setTranslateValue(option.label) }} options={languages} value={translateValue}
Here, the
sendMessage function from
react-use-websocket is how we can send information back to our server and consume it. This process is where the event handler we set up comes in handy from earlier. It is this dropdown that determines what language the Google Translate API translates the message into and displays on the screen.
Connection Status Display
Since this is a demo in a conference environment, I thought having a connectivity indicator would be a good idea. As long as the front-end remains connected to the WebSocket, the light displays green.
const CONNECTION_STATUS_CONNECTING = 0; const CONNECTION_STATUS_OPEN = 1; const CONNECTION_STATUS_CLOSING = 2; function Status({ status }) { switch (status) { case CONNECTION_STATUS_OPEN: return <>Connected<div className="led green"></div></>; case CONNECTION_STATUS_CONNECTING: return <>Connecting<div className="led yellow"></div></>; case CONNECTION_STATUS_CLOSING: return <>Closing<div className="led yellow"></div></>; default: return <>Disconnected<div className="led grey"></div></>;; } } //.... <Status status={readyState} /> //...
The
Status component uses the
readyState to switch between the various statuses and indicates that to the user. If it turns red, you know something is wrong with the WebSocket server, and you should check into it.
Once everything is up and running, it looks something like this:
Try It Out
The demo application code is on our community GitHub organization, and you can try it out for yourself as well. I've created a README that should help you get through the setup and run it locally on your server or deploy it to Heroku. I've also provided a Dockerfile, if you'd prefer to go that route. Let me know what you think of it, and if you have any trouble, feel free to reach out and submit an issue on the repo. | https://developer.vonage.com/blog/20/03/11/real-time-sms-demo-with-react-node-and-google-translate-dr | CC-MAIN-2022-40 | refinedweb | 1,608 | 56.15 |
I need help with this, im writing a league system and i need to order the teams by the points they have received max to min, im lost to where to begin any help would be good.
public class League { /* instance variables */ private Team name; private int points; /** * Constructor for objects of class League. */ public League(Team aname) { super(); name = aName; points = 0; } /** * Returns the receiver's name Team */ public Dancer getName() { return this.name; } /** * Returns the receiver's points */ public int getPoints() { return points; } /** * Sets the receiver's points */ public void setPoints(int aPoints) { this.points = aPoints; }
method for order points
public void orderPoints() { } | https://www.daniweb.com/programming/software-development/threads/311384/league-system | CC-MAIN-2017-26 | refinedweb | 105 | 76.96 |
Hey everyone,
I' m currently having trouble getting the following code to work.
What I'm trying to do is getting my program to do read from a text file which has the following information:
234323 c
343212 d
323432 a
763634 b
The corresponding information shown above will correspond as follow '234323' is customer ID and 'c' correspond that it belongs to '23'4323' and so on...
What I am trying to do is getting one my function to display an error message if the information obtain in the text file is not equal to a, b,c or d.
I got the first part of my function working already which required the data found in the text file to be greater then 0. If any part of the text file contain negative number it will prompt the user with an invalid input and advise them of the invalid line.
Now for second part, I need help in which required the program to read the text file. If the information contain in the text file is not equal to a, b, c or d then the program will prompt with the error line.
Here is my current code:
#include <iostream> #include <fstream> #include <string> using namespace std; const int ArrayRecord = 300; int CustID[ArrayRecord]; char StationID[ArrayRecord]; string Address[ArrayRecord]; int Checkfile(int & Record) { ifstream openfile ( "tolldata.txt" ); //Declare Input File stream as openfile //then open the call marks.txt file while (!openfile.eof() && Record < ArrayRecord && !openfile.fail()) { //While loop, As long as the statement not equal to end of file, //record is lesser then ArrayRecord, and opening file does not fail //Then perform the following. openfile >> CustID[Record]; //Open marks.txt file and store into //StudentID[Record] Array. if (CustID[Record] < 0 || openfile.fail()) /*If StudentID[Record] is greater then 999999 or lesser then 0 or record file fail to open, prompt user with error message and error line.*/ { cout << "\n\tError - Invalid or miss match Customer ID record.\n"; cout << "\tPlease refer to line " << Record +1 << " of tolldata.txt text file.\n\n" ; system( "pause" ); //perform system pause and exit //record and terminate program exit(1); return 0; } openfile >> StationID[Record]; if (StationID[Record] != 'A' || StationID[Record] != 'a' || StationID[Record] != 'B' || StationID[Record] != 'b' || StationID[Record] != 'C' || StationID[Record] != 'c' || StationID[Record] != 'D' || StationID[Record] != 'd' || openfile.fail()) { cout << "\n\tONE\n\tError - Invalid or miss match Customer ID record.\n"; cout << "\tPlease refer to line " << Record +1 << " of tolldata.txt text file.\n\n" ; system( "pause" ); //perform system pause and exit //record and terminate program exit(1); return 0; } Record++; } openfile.close(); } int main() { int Record = 0; ifstream openfile ( "tolldata.txt" ); /*Declare input file stream mark.txt*/ if (openfile.is_open())/*If text file is open excute the following*/ { cout << "\t****************************************************\n\n"; cout << "\t Please wait...\n\n" << "\t The following file tolldata.txt has been found.\n\n"; cout << "\t****************************************************\n\n"; Checkfile(Record); } else /*If text file doesn't exist display message and terminate program*/ { cout << "\t****************************************************\n\n"; cout << "\tError - Opening tolldata.txt file has fail.\n"; cout << "\tThe program will now terminate.\n\n"; cout << "\t****************************************************\n\n"; } system("pause"); return 0; }
Any help in solving my problem would much appreciated. So feel free to reply to my post.
Regards,
tuannie | https://www.daniweb.com/programming/software-development/threads/43711/need-help-in-reading-characters-from-a-text-file | CC-MAIN-2017-26 | refinedweb | 549 | 74.39 |
Created attachment 264723 [details]
unclean testcase; no FAIL lines indicates success
See bug 336682 comment 24 and the attached testcase.
From comment 24:
> Er, really? Does that break pages that try to have a |var ononline| or that
> assign something like a number to a global ononline variable? See bug 336359
> comment 12.
>
> If it does, is that really what we want?
The testcase here indicates that window.ononline/onoffline is treated like a handler when assigned a function, albeit with the quirkiness of bug 380538.
I don't see how this breaks pages that assign numbers/booleans to global vars named ononline/onoffline, but I could have missed something.
If it doesn't break such pages (i.e. if the properties are replaceable), then I don't think we have a problem, really... Need to figure out whether it does break them or not, basically.
If there are other ways for authors to do this, do we really want to pollute the global namespace every time we add new events?
(In reply to comment #1)
> If it doesn't break such pages (i.e. if the properties are replaceable)
Well, what I did was doing
var ononline = 1;
directly in a <script> and then checking the value later in a load handler (after the testcase triggers the events, not that it should matter). Anything else needs to be checked?
If you have a list of uses we shouldn't break with new window.on* handleres, they could be turned into a mochitest.
Jst will figure out what to do :)
content/events/test/test_bug336682.js has 'todo()'s because of this bug. However this bug is resolved, that test should be updated.
Setting to B1 per conversation with JST.
Per discussion with Chris Double and Dave Camp, we'll be removing the support for window.ononline and window.onoffline in favor of the standard and more flexible window.addEventListener() method of registering listeners for these new events. Patch coming up.
Created attachment 270792 [details] [diff] [review]
Fix, remove support for window.on{on|off}line
Fix checked in.
With this patch, did we stop supporting |body.ononline = function () {}| as well? If so, is that what we want to do? | https://bugzilla.mozilla.org/show_bug.cgi?id=380618 | CC-MAIN-2016-22 | refinedweb | 369 | 77.13 |
IRC log of mediaann on 2010-09-21
Timestamps are in UTC.
10:56:56 [RRSAgent]
RRSAgent has joined #mediaann
10:56:56 [RRSAgent]
logging to
10:57:05 [wbailer]
zakim, this will be mawg
10:57:05 [Zakim]
ok, wbailer; I see IA_MAWG()7:00AM scheduled to start in 3 minutes
10:59:46 [Zakim]
IA_MAWG()7:00AM has now started
10:59:54 [Zakim]
+joakim
11:00:44 [joakim]
joakim has joined #mediaann
11:01:13 [Zakim]
+wbailer
11:01:18 [joakim]
Hi, waiting others
11:02:25 [Zakim]
+ +1.617.588.aaaa - is perhaps Luis
11:04:44 [joakim]
zakim, this is mawg
11:04:44 [Zakim]
joakim, this was already IA_MAWG()7:00AM
11:04:45 [Zakim]
ok, joakim; that matches IA_MAWG()7:00AM
11:05:03 [joakim]
invite Zakim #mediaann
11:05:10 [Chris]
Chris has joined #mediaann
11:05:16 [joakim]
zakim, who's here?
11:05:16 [Zakim]
On the phone I see joakim, wbailer, Luis
11:05:17 [Zakim]
On IRC I see Chris, joakim, RRSAgent, Zakim, wbailer, raphael, tmichel, trackbot
11:05:47 [joakim]
Hi Chris, we are taliking on the phone
11:06:00 [Zakim]
+ +329331aabb
11:06:01 [Zakim]
+ +34.93.00.aacc
11:06:09 [joakim]
about "all actions points have been reopened"
11:06:33 [joakim]
chair: joakim
11:06:36 [raphael]
trackbot, start telecon
11:06:38 [trackbot]
RRSAgent, make logs public
11:06:40 [trackbot]
Zakim, this will be MAWG
11:06:40 [Zakim]
ok, trackbot, I see IA_MAWG()7:00AM already started
11:06:41 [trackbot]
Meeting: Media Annotations Working Group Teleconference
11:06:41 [trackbot]
Date: 21 September 2010
11:06:46 [raphael]
Chair: Joakim
11:07:04 [raphael]
Regrets: wonsuk, daniel
11:07:21 [raphael]
Scribe: raphael
11:07:24 [joakim]
scribe: Raphael
11:07:27 [raphael]
scribenick: raphael
11:07:52 [joakim]
11:07:59 [joakim]
11:08:00 [raphael]
Agenda:
11:08:04 [joakim]
11:08:08 [raphael]
Topic: 1. Admin
11:08:27 [raphael]
Joakim: I suggest to accept the minutes of the f2f meeting (3 days)
11:08:30 [raphael]
[silence]
11:08:43 [raphael]
RESOLUTION: minutes of the f2f meeting accepted
11:09:01 [raphael]
Joakim: thanks for Thierry for the organization and the dinner in Sophia Antipolis
11:09:05 [joakim]
11:09:07 [raphael]
Topic: 2. Action Items
11:09:20 [raphael]
Joakim: we have a lot of Actions !!!
11:09:25 [raphael]
How do we manage?
11:09:43 [raphael]
List in the tracker:
11:10:15 [stegmai]
stegmai has joined #mediaann
11:10:29 [wbailer]
11:10:36 [raphael]
Werner: ACTION-270
11:11:00 [Zakim]
+florian
11:11:02 [raphael]
... we have a clear answer about subtitles, and less for semantic annotations
11:11:18 [stegmai]
zakim, who is here?
11:11:18 [Zakim]
On the phone I see joakim, wbailer, Luis, chris, raphael, florian
11:11:19 [Zakim]
On IRC I see stegmai, Chris, joakim, RRSAgent, Zakim, wbailer, raphael, tmichel, trackbot
11:12:17 [raphael]
Thierry: should we discuss now the proposed text resolving the issue?
11:13:10 [raphael]
Joakim: I suggest to close the action, but more peope need to read the lc comment before closing it
11:13:17 [raphael]
close ACTION-270
11:13:18 [trackbot]
ACTION-270 Draft an answer for the comment 2389 closed
11:13:39 [raphael]
ACTION: Thierry to send to the group the list of LC Comments that needs to be reviewed for moving on
11:13:39 [trackbot]
Created ACTION-312 - Send to the group the list of LC Comments that needs to be reviewed for moving on [on Thierry Michel - due 2010-09-28].
11:14:23 [raphael]
Raphael: ACTION-271
11:14:35 [raphael]
... it is done
11:17:49 [raphael]
... in a nutshell, embedded named graphs syntax is not yet ready (might be part of RDF 2.0)
11:18:05 [raphael]
... no other ways for doing such complex annotations
11:18:39 [raphael]
... except with an event-based modeling, suggested by Yves Raimond
11:18:50 [raphael]
... similar to what has been done with the Programmes web site in BBC
11:19:19 [tmichel]
11:19:25 [raphael]
Veronique: problem with this approach is to have multiple annotations about the same fragment that co-exist
11:19:30 [raphael]
close ACTION-271
11:19:30 [trackbot]
ACTION-271 Contact the SW Coordination Group about how to do semantic annotation with ma:description and ma:relation alternatives closed
11:19:45 [raphael]
Wonsul: ACTION-272
11:20:05 [raphael]
See
11:20:10 [raphael]
close ACTION-272
11:20:10 [trackbot]
ACTION-272 Start a new section in the Ontology document providing example of usage (e.g. subtitles, powder, etc.) closed
11:20:30 [raphael]
Thierry: ACTION-273
11:20:34 [raphael]
... I have done it
11:21:33 [raphael]
... the tables are done, but need to be filled now
11:22:19 [raphael]
... it seems to me that for ogg, e.g. the technical information is in the codecs (video/audio) and not in the container formats
11:22:22 [raphael]
... need to be checked
11:22:29 [raphael]
... I suggest to keep this action opened
11:22:35 [raphael]
Thierry: ACTION-274
11:22:38 [raphael]
... I have done this
11:22:55 [raphael]
close ACTION-274
11:22:55 [trackbot]
ACTION-274 Check all mapping tables and add N/A when it is needed closed
11:23:02 [raphael]
Joakim: ACTION-275
11:23:23 [raphael]
... it is not yet finished, I need to coordinate with Wonsuk
11:23:26 [raphael]
... ongoing
11:23:32 [raphael]
Joakim: ACTION-276
11:24:25 [raphael]
... it is dependent on ACTION-311
11:24:28 [raphael]
... not yet finished
11:25:21 [raphael]
close ACTION-276
11:25:21 [trackbot]
ACTION-276 Change the type definition into the new syntax closed
11:25:32 [raphael]
... it is actually the same one than 311
11:25:47 [raphael]
Chris: ACTION-277
11:25:49 [raphael]
... still open
11:26:07 [raphael]
Was the conclusion to add a ne
11:26:16 [raphael]
s/a ne/new property?
11:26:29 [raphael]
Chris: no, it is about adding an example
11:26:42 [RRSAgent]
I have made the request to generate
raphael
11:26:55 [raphael]
Werner: ACTION-278
11:26:57 [wbailer]
11:26:59 [raphael]
... it is done
11:27:50 [raphael]
... made the changes to follow the suggestion
11:28:02 [raphael]
... we need to update the ontology document
11:28:33 [raphael]
... we will keep copyright though we are aware that there will be some overlap with what Policy is providing
11:28:58 [raphael]
... rational is a strong need from the industry to have a simple field for this, see email thread
11:30:07 [raphael]
... I don't see her reply of Jean Pierre on this thread
11:30:13 [raphael]
s/her/a
11:31:16 [wbailer]
action: wonsuk to update definition of ma:policy according to comments from PLING
11:31:16 [trackbot]
Created ACTION-313 - Update definition of ma:policy according to comments from PLING [on WonSuk Lee - due 2010-09-28].
11:31:31 [raphael]
close ACTION-278
11:31:31 [trackbot]
ACTION-278 Draft a response to the comment LC-2417 closed
11:31:38 [raphael]
Thierry: ACTION-279
11:31:44 [raphael]
... it is done, we have a new css
11:31:49 [raphael]
... now in the dev space
11:32:06 [raphael]
... we have to wait for Wonsuk to use this new css
11:32:14 [raphael]
close ACTION-279
11:32:14 [trackbot]
ACTION-279 Look into problems with the CSS rule closed
11:32:36 [raphael]
Joakim: ACTION-280
11:32:49 [raphael]
... we have changed about this section about conformance
11:32:53 [raphael]
... but we need to read it again
11:32:58 [raphael]
... so leave this action open
11:33:06 [raphael]
Werner: ACTION-281
11:33:19 [raphael]
... contact Florian ?
11:33:35 [raphael]
... seems a WebIDL issue encountered by Florian
11:33:57 [raphael]
... we could delete this sentence about HTML5
11:34:11 [raphael]
... I think it is done already
11:34:16 [raphael]
close ACTION-281
11:34:16 [trackbot]
ACTION-281 Follow up the comment of Robin about the HTML 5 compatibility issue by contacting Florian closed
11:34:26 [raphael]
Wonsuk: ACTION-282
11:34:43 [raphael]
... someone has an idea if this has been done?
11:34:56 [RRSAgent]
I have made the request to generate
raphael
11:35:51 [raphael]
Werner: ACTION-284
11:36:01 [raphael]
... I got a response that I use to draft the table
11:36:25 [raphael]
... I sent this to the editors of the ontology document
11:36:37 [raphael]
... we can leave this action open
11:36:54 [raphael]
... it is just the editors that need to include in the doc
11:37:03 [raphael]
close ACTION-284
11:37:03 [trackbot]
ACTION-284 Mail David Singer about the inclusion of media type parameters in the ma:format property closed
11:37:45 [raphael]
ACTION: Wonsuk to update the description of the ma:format in the ontology doc using the input from Dave Singer (see Werner email)
11:37:45 [trackbot]
Created ACTION-314 - Update the description of the ma:format in the ontology doc using the input from Dave Singer (see Werner email) [on WonSuk Lee - due 2010-09-28].
11:38:17 [raphael]
Joakim: ACTION-285
11:38:29 [raphael]
... I'm looking for the diff with ACTION-280
11:38:42 [raphael]
Werner: it seems the same
11:38:56 [raphael]
close ACTION-285
11:38:56 [trackbot]
ACTION-285 Check the normative statemens according to the editorial comment of Robin closed
11:39:00 [raphael]
duplicate actions
11:39:08 [raphael]
Thierry: ACTION-286
11:39:15 [raphael]
... done
11:39:23 [raphael]
close ACTION-286
11:39:23 [trackbot]
ACTION-286 Update the XPATH columns of the mapping tables closed
11:39:32 [raphael]
Felix: ACTION-287
11:39:50 [raphael]
... Felix will work on this tonight
11:40:15 [raphael]
Joakim: ACTION-289
11:40:36 [raphael]
... leave it open
11:40:46 [raphael]
Chris: ACTION-290
11:40:58 [raphael]
... it is done
11:41:04 [raphael]
close ACTION-290
11:41:04 [trackbot]
ACTION-290 Make updates to api document related to dropping type attribute of identifier closed
11:41:15 [raphael]
Thierry: ACTION-291
11:41:26 [raphael]
... the file is now in the proper namespace
11:41:42 [raphael]
... coneg is not yet implemented for serving either the rdf or the html
11:41:46 [raphael]
... conneg is still pending
11:42:05 [tmichel]
11:42:06 [raphael]
Thierry: ACTION-292
11:42:13 [raphael]
... done
11:42:17 [raphael]
close ACTION-292
11:42:17 [trackbot]
ACTION-292 Draft response to ivan closed
11:43:05 [raphael]
Chris: ACTION-295
11:43:12 [raphael]
... has been done during the f2f
11:43:20 [raphael]
close ACTION-295
11:43:20 [trackbot]
ACTION-295 Chanage title to plural in api doc closed
11:43:50 [raphael]
... the web site needs to be updated too
11:44:09 [raphael]
ACTION: Joakim to update the group web site to reflect the plural in the title and to make working his cvs
11:44:09 [trackbot]
Created ACTION-315 - Update the group web site to reflect the plural in the title and to make working his cvs [on Joakim Söderberg - due 2010-09-28].
11:44:37 [raphael]
Thierry: I have also done the action of wonsuk
11:44:47 [raphael]
close ACTION-296
11:44:48 [trackbot]
ACTION-296 Change title to plural in ontology doc closed
11:45:05 [raphael]
Thierry: ACTION-297
11:45:33 [raphael]
Chris: only the title has been changed !!!
11:45:41 [raphael]
... we need to change this everywhere in the document !!!
11:45:42 [raphael]
+1
11:45:49 [florian_pa]
+1
11:46:53 [raphael]
I have re-opened the action-296 and re-assigned to Thierry
11:47:08 [Chris]
also change the mentioning of the "API for Media Resource 1.0" to "API for Media Resources 1.0" in the ontology document please
11:47:14 [raphael]
Thierry: regarding the contact to HTML, I have no response
11:47:24 [raphael]
... I will meet the staff contact this afternoon
11:49:00 [raphael]
Joakim: I think we should let open this action until TPAC and make sure we talk with this group
11:49:27 [raphael]
Joakim: ACTION-298
11:49:32 [raphael]
... I sent an email to Dominique
11:49:58 [raphael]
... asking if there will be a binding for a RESTful API
11:51:04 [raphael]
... it seem they have just discussed this at the moment
11:51:14 [raphael]
... and suggested to bring this to the coordination group
11:51:22 [raphael]
... to see if more people are interested
11:51:53 [raphael]
... I can follow-up this suggestion
11:51:59 [raphael]
+1
11:52:44 [raphael]
ACTION: pursue development of WebIDL for REST at the Coordination Group meeting
11:52:44 [trackbot]
Sorry, couldn't find user - pursue
11:53:05 [raphael]
ACTION: joakim to pursue development of WebIDL for REST at the Coordination Group meeting
11:53:05 [trackbot]
Created ACTION-316 - Pursue development of WebIDL for REST at the Coordination Group meeting [on Joakim Söderberg - due 2010-09-28].
11:53:13 [RRSAgent]
I have made the request to generate
raphael
11:54:21 [wbailer]
raphael: group chair should have received invitation to coordination group
11:54:50 [wbailer]
... chair should send short mail with progress from group and any topics for cc telecon
11:55:34 [wbailer]
close action-299
11:55:34 [trackbot]
ACTION-299 Change the API interface about Date closed
11:55:50 [Zakim]
-raphael
11:56:06 [wbailer]
action 300 is pending, joakim has sent mail, but not received an answer
11:56:06 [trackbot]
Sorry, couldn't find user - 300
11:56:39 [wbailer]
chris: action 301 is still open
11:56:45 [wbailer]
... same for 302
11:57:16 [wbailer]
joakim: not sure what to do about action 303, what exactly to tell them?
11:58:33 [wbailer]
from Comment LC-2419
11:58:53 [wbailer]
"note: integration with DAP's Media Capture is likely desirable."
11:59:16 [tmichel]
11:59:30 [wbailer]
thierry: drafted answer fpr 2395
11:59:37 [wbailer]
s/fpr/for/
11:59:49 [wbailer]
close action-304
11:59:49 [trackbot]
ACTION-304 Draft answer for lc-2395 closed
12:00:19 [wbailer]
joakim: concerning action 305, sent mail to doug, no answer yet
12:00:44 [wbailer]
close action-306
12:00:44 [trackbot]
ACTION-306 Add paragraph to section 2 about support for several metadata sources closed
12:00:59 [wbailer]
close action-307
12:00:59 [trackbot]
ACTION-307 Fix description of parameters in getoriginaldata (LC-2410) closed
12:01:04 [tmichel]
action traker problem ...
12:01:04 [trackbot]
Sorry, couldn't find user - traker
12:01:09 [tmichel]
Based on the changelog, it looks like Wonsuk did this, see e.g.:
12:01:09 [tmichel]
12:01:30 [wbailer]
chris: worked on editorial issues in comment LC-2394, did not draft answer yet
12:02:06 [wbailer]
... open issues: do we need to have a final solution before drafting a response?
12:02:20 [wbailer]
... or can we respond that we are in the process of looking at this
12:02:45 [wbailer]
thierry: wait for response from other groups and have resolution
12:03:12 [wbailer]
... if there are many other points, we can send response with resolutions for some of the points
12:03:22 [wbailer]
close action-308
12:03:22 [trackbot]
ACTION-308 Take care of editorial issues in comment LC-2394 closed
12:03:38 [Zakim]
-florian_pa
12:03:55 [wbailer]
thierry: sent proposal for normative/non-normative markup
12:04:04 [wbailer]
... leave open until agreed
12:04:49 [wbailer]
concerning action 310, werner to send png version to chris
12:05:07 [wbailer]
[adjourned]
12:05:13 [Zakim]
-Luis
12:05:15 [Zakim]
-joakim
12:05:16 [Zakim]
-chris
12:05:17 [Zakim]
-wbailer
12:05:18 [Zakim]
IA_MAWG()7:00AM has ended
12:05:20 [Zakim]
Attendees were joakim, wbailer, +1.617.588.aaaa, +329331aabb, +34.93.00.aacc, raphael, chris, florian_pa
12:05:23 [wbailer]
rrsagent, draft minutes
12:05:23 [RRSAgent]
I have made the request to generate
wbailer
12:37:48 [raphael]
raphael has left #mediaann
14:07:30 [tmichel]
tmichel has joined #mediaann
14:09:47 [Zakim]
Zakim has left #mediaann | http://www.w3.org/2010/09/21-mediaann-irc | CC-MAIN-2016-30 | refinedweb | 2,797 | 65.25 |
maxout¶
paddle.fluid.layers.
maxout(x, groups, name=None, axis=1)[source]
MaxOut Operator.
Assumed the input shape is (N, Ci, H, W). The output shape is (N, Co, H, W). Then \(Co = Ci / groups\) and the operator formula is as follows:
$$ y_{si+j} = max_{k} x_{gsi + sk + j} $$ $$ g = groups $$ $$ s = \frac{input.size}{num\_channels} $$ $$ 0 \le i < \frac{num\_channels}{groups} $$ $$ 0 \le j < s $$ $$ 0 \le k < groups $$
Please refer to Paper: - Maxout Networks: - Multi-digit Number Recognition from Street View Imagery using Deep Convolutional Neural Networks:
- Parameters to Name. Usually name is no need to set and None by default.
- Returns
A 4-D Tensor with same data type and data format with input Tensor.
- Return type
Variable
- Raises
ValueError– If axis is not 1, -1 or 3.
ValueError– If the number of input channels can not be divisible by groups.
Examples
import paddle.fluid as fluid input = fluid.data( name='data', shape=[None, 256, 32, 32], dtype='float32') out = fluid.layers.maxout(input, groups=2) | https://www.paddlepaddle.org.cn/documentation/docs/en/api/layers/maxout.html | CC-MAIN-2020-24 | refinedweb | 173 | 50.73 |
10 June 2010 05:22 [Source: ICIS news]
SINGAPORE (ICIS news)--The Asian glycol ethers prices slid $50-80/tonne (€42-66/tonne) this week with further downside expected in the near term due to falling feedstock normal butanol values and the recent downturn in upstream olefins markets, traders and producers said late on Wednesday.
Butyl glycol (BG) prices in ?xml:namespace>
PM solvent and acetate values plummeted $70-80/tonne from the previous week and were assessed in the $1,590-1,640/tonne CFR China/SE Asia band, according to ICIS pricing.
The recent slide in feedstock n-butanol prices, which were assessed at $1,430-1,520/tonne CFR NE Asia last Friday, was one of the key driving factors to weaker BG prices, traders and producers said.
N-butanol values have plunged from $1,650-1,700/tonne CFR NE Asia seen a month back to the current early $1,400/tonne CFR region, according to ICIS pricing.
BG offers in bulk for June or early July shipments were reported at $1,550/tonne CFR China/SE Asia this week, while buying sentiment looked bearish with indications at $1,500/tonne CFR or below.
Market participants said they expected that BG prices could slide further in the coming few weeks due to weaker feedstock values and also limited demand from regional buyers who were on the sidelines.
Stocks in
“Cost-wise there is room to go below $1,500/tonne as [feedstock] ethylene oxide [EO] and n-butanol prices have dropped a lot,” said a southeast Asia-based buyer.
But some suppliers were also caught with higher costs as they were importing from other regions like
A similar trend was seen in the PM solvent and acetate markets with buy-sell indications heard from $1,550/tonne CFR China/SE Asia onwards, with some containers reported sold at $1,640/tonne CFR SE Asia.
The recent downturn in upstream propylene prices, and its subsequent impact on feedstock propylene oxide values was the key reason for the downturn in these markets, said players.
PO prices in
($1 = CNY6.83 / $1 = € | http://www.icis.com/Articles/2010/06/10/9366584/asias-glycol-ethers-plunge-50-80tonne-on-weaker-feedstock.html | CC-MAIN-2014-42 | refinedweb | 353 | 51.31 |
On 08/02/16 05:06, Mateusz Guzik wrote: > On Mon, Aug 01, 2016 at 09:49:03PM -0400, Michael Butler wrote: >> In the non-SMP case, ADAPTIVE_MUTEXES is not defined and a subsequent >> reference to mtx_delay causes compilation of kern_mutex.c to fail >> because KDTRACE_HOOKS may be, >> > > Indeed, fixed in r303655. > > Thanks for reporting. >
I've noticed another failure in the same file, caused by r303643. It's failing to compile here due to errors about SYSINIT(9), it looks like #include <sys/kernel.h> is missing. I have made a local patch which compiles and afdter a reboot seems to work fine: Index: head/sys/kern/kern_sx.c =================================================================== --- head/sys/kern/kern_sx.c (revision 303658) +++ head/sys/kern/kern_sx.c (working copy) @@ -58,6 +58,7 @@ #if defined(SMP) && !defined(NO_ADAPTIVE_SX) #include <machine/cpu.h> +#include <sys/kernel.h> #endif #ifdef DDB -- Guido Falsi <m...@madpilot.net> _______________________________________________ freebsd-current@freebsd.org mailing list To unsubscribe, send any mail to "freebsd-current-unsubscr...@freebsd.org" | https://www.mail-archive.com/freebsd-current@freebsd.org/msg167465.html | CC-MAIN-2018-47 | refinedweb | 167 | 62.34 |
How to disable system beep in windows
You can find and configure it under Device Manager|View|Show Hidden Devices|Non Plug and
Play|Beep|Action|Properties|Driver, then set the "Startup Type:" to "Disabled"
You can find and configure it under Device Manager|View|Show Hidden Devices|Non Plug and
Play|Beep|Action|Properties|Driver, then set the "Startup Type:" to "Disabled"
One of the neat things about *nix is the ability to work with many different shells. As with anything else in the *nix world there is bound to be heated debates over which shell is better. Whether you want to use good ol' sh or (my personal favorite) bash you can change your default shell using the commands below.
The traffic within Amazon EC2 and S3 is free, so you can have setups as funky as you wish. Remeber what it takes to build Flickr or Livejournal datacenter? Now you can do similar setups from home (unbelievable) and just let Amazon take care of the networking and hardware. This is so much more ‘WebOS’ than Google’s walled garden.
I applaud Amazon..
Download ipython
or run ipython
for windows
Download Bitbucket
download 7zip and unzip it
use it to unzip bitbucket
and
right click on mycomputer and goto advanced tab, add c:\python\ or whichever dir python is python24 or python25 to path
and then open a new cmd window goto where bitbucket is unzipped
run python setup.py install
now you ve installed bitbucket, same method of installation for most of python packages
now open ipython
run
import bitbucket
bitbucket.c
press tab
it autocompletes
type it out as
a= bitbucket.connect('Access', 'Secret')
a is a bitbucket object to interact with S3
try out a. press tab
Labels: amazon s3 ipython bitbucket 7zip
it’s a ~10 times faster yum metadata parser that is reported to also use a lot less memory. This might make fedora suddenly useable on a whole bunch of my machines. I look forward to trying it out.
Should be really easy:Should be really easy:
Labels: yum fedora
AJAX without XML Compares using XML, JavaScript Objects, and JSON
For the technically minded readers out there who want to get a look into the technical issues behind running a wildly popular SNS, here’s a link to a presentation given by the CTO at the MySQL Users Conference this year. I believe Batara Kesuma gave this presentation in Japan as well, as the content of the presentation is familiar and was covered by some of the local IT press. (I doubt they attended the conference in Santa Clara)
J.
There's a truly frightening amount of new options in the PostgreSQL.conf file. Even once-familiar options from the last 5 versions have changed names and parameter formats. It is intended to give you, the database administrator, more control, but can take some getting used to.
What follows are the settings that most DBAs will want to change, focused more on performance than anything else. There are quite a few "specialty" settings which most users won't touch, but those that use them will find indispensable. For those, you'll have to wait for the book.
Remember: PostgreSQL.conf settings must be uncommented to take effect, but re-commenting them does not necessarily restore the default values!
listen_addresses: Replaces both the tcp_ip and virtual_hosts settings from 7.4. Defaults to localhost in most installations, allowing only connections on the console. Many DBAs will want to set this to "*", meaning all available interfaces, after setting proper permissions in the pg_hba.conf file, in order to make PostgreSQL accessable to the network. As an improvment over previous versions, the "localhost" default does permit connections on the "loopback" interface, 127.0.0.1, enabling many server browser-based utilities.
max_connections: exactly like previous versions, this needs to be set to the actual number of simultaneous connections you expect to need. High settings will require more shared memory (shared_buffers). As the per-connection overhead, both from PostgreSQL and the host OS, can be quite high, it's important to use connection pooling if you need to service a large number of users. For example, 150 active connections on a medium-end single-processor 32-bit Linux server will consume significant system resources, and 600 is about the limit on that hardware. Of course, beefier hardware will allow more connections.
work_mem: used to be called sort_mem, but has been re-named since it now covers sorts, aggregates, and a few other operations. This is non-shared memory, which is allocated per-operation (one to several times per query); the setting is here to put a ceiling on the amount of RAM any single operation can grab before being forced to disk. This should be set according to a calculation based on dividing the available RAM (after applications and shared_buffers) by the expected maximum concurrent queries times the average number of memory-using operations per query.
Consideration should also be paid to the amount of work_mem needed by each query; processing large data sets requires more. Web database applications generally set this quite low, as the number of connections is high but queries are simple; 512K to 2048K generally suffices. Contrawise, decision support applications with their 160-line queries and 10 million-row aggregates often need quite a lot, as much as 500MB on a high-memory server. For mixed-use databases, this parameter can be set per connection, at query time, in order to give more RAM to specific queries.
maintenance_work_mem: formerly called vacuum_mem, this is the quantity of RAM PostgreSQL uses for VACUUM, ANALYZE, CREATE INDEX, and adding foriegn keys. You should raise it the larger your database tables are, and the more RAM you have to spare, in order to make these operations as fast as possible. A setting of 50% to 75% of the on-disk size of your largest table or index is a good rule, or 32MB to 256MB where this can't be determined.
checkpoint_segments: defines the on-disk cache size of the transaction log for write operations. You can ignore this in mostly-read web database, but for transaction processing databases or reporting databases involving large data loads, raising it is performance-critical. Depening on the volume of data, raise it to between 12 and 256 segments, starting conservatively and raising it if you start to see warning messages in the log. The space required on disk is equal to (checkpoint_segments * 2 + 1) * 16MB, so make sure you have enough disk space (32 means over 1GB).
max_fsm_pages: sizes the register which tracks partially empty data pages for population with new data; if set right, makes VACUUM faster and removes the need for VACUUM FULL or REINDEX. Should be slightly more than the total number of data pages which will be touched by updates and deletes between vacuums. The two ways to determine this number are to run VACUUM VERBOSE ANALYZE, or if using autovacuum (see below) set this according to the -V setting as a percentage of the total data pages used by your database. fsm_pages require very little memory, so it's better to be generous here.
vacuum_cost_delay: If you have large tables and a significant amount of concurrent write activity, you may want to make use of a new feature which lowers the I/O burden of VACUUMs at the cost of making them take longer. As this is a very new feature, it's a complex of 5 dependant settings for which we have only a few performance tests. Increasing vacuum_cost_delay to a non-zero value turns the feature on; use a reasonable delay, somewhere between 50 and 200ms. For fine tuning, increasing vacuum_cost_page_hit and decreasing vacuum_cost_page_limit will soften the impact of vacuums and make them take longer; in Jan Wieck's tests on a transaction processing test, a delay of 200, page_hit of 6 and limit of 100 decreased the impact of vacuum by more than 80% while tripling the execution time.
These settings allow the query planner to make accurate estimates of operation costs, and thus pick the best possible query plan. There are two global settings worth bothering with:
effective_cache_size: tells the query planner the largest possible database object that could be expected to be cached. Generally should be set to about 2/3 of RAM, if on a dedicated server. On a mixed-use server, you'll have to estimate how much of the RAM and OS disk cache other applications will be using and subtract that.
random_page_cost: a variable which estimates the average cost of doing seeks for index-fetched data pages. On faster machines, with faster disks arrays, this should be lowered, to 3.0, 2.5 or even 2.0. However, if the active portion of your database is many times larger than RAM, you will want to raise the factor back towards the default of 4.0. Alternatively, you can base adjustments on query performance. If the planner seems to be unfairly favoring sequential scans over index scans, lower it; if it's using slow indexes when it shouldn't, raise it. Make sure you test a variety of queries. Do not lower it below 2.0; if that seems necessary, you need to adjust in other areas, like planner statistics.
log_destination: this replaces the unintuitive syslog setting in prior versions. Your choices are to use the OS's administrative log (syslog or eventlog) or to use a seperate PostgreSQL log (stderr). The former is better for system monitoring; the latter, better for database troubleshooting and tuning.
redirect_stderr: If you decide to go with a seperate PostgreSQL log, this setting allows you to log to a file using a native PostgreSQL utility instead of command-line redirection, allowing automated log rotation. Set it to True, and then set log_directory to tell it where to put the logs. The default settings for log_filename, log_rotation_size, and log_rotation_age are good for most people.
As you tumble toward production on 8.0, you're going to want to set up a maintenance plan which includes VACUUMs and ANALYZEs. If your database involves a fairly steady flow of data writes, but does not require massive data loads and deletions or frequent restarts, this should mean setting up pg_autovacuum. It's better than time-scheduled vacuums because:
Setting up autovacuum requires an easy build of the module in the contrib/pg_autovacuum directory of your PostgreSQL source (Windows users should find autovaccuum included in the PGInstaller package). You turn on the stats configuration settings detailed in the README. Then you start autovacuum after PostgreSQL is started as a seperate process; it will shut down automatically when PostgreSQL shuts down.
The default settings for autovacuum are very conservative, though, and are more suitable for a very small database. I generally use something aggressive like:
-D -v 400 -V 0.4 -a 100 -A 0.3
This vacuums tables after 400 rows + 40% of the table has been updated or deleted, and analyzes after 100 rows + 30% of the table has been inserted, updated or deleted. The above configuration also lets me set my max_fsm_pages to 50% of the data pages in the database with confidence that that number won't be overrun, causing database bloat. We are currently testing various settings at OSDL and will have more hard figures on the above soon.
Note that you can also use autovacuum to set the Vacuum Delay options, instead of setting them in PostgreSQL.conf. Vacuum Delay can be vitally important for systems with very large tables or indexes; otherwise an untimely autovacuum call can halt an important operation.
There are, unfortunately, a couple of serious limitations to 8.0's autovacuum which will hopefully be eliminated in future versions:.
The first step to learning how to tune your PostgreSQL database is to understand the life cycle of a query. Here are the steps of a query:.
There are several postmaster options that can be set that drastically affect performance, below is a list of the most commonly used and how they effect performance:
Note that many of these options consume shared memory and it will probably be necessary to increase the amount of shared memory allowed on your system to get the most out of these options.
Obviously the type and quality of the hardware you use for your database server drastically impacts the performance of your database. Here are a few tips to use when purchasing hardware for your database server:
SET STATISTICS ;.
Here is a short list of other items that may be of help.
What was your create statement?
You can always put a line in like:
web.debug(web.delete('todotable',int(todo_id), _test=True))
and run the resulting query to see what's wrong. The short version
seems to be that you didn't name your column 'id'.
Hum, I don't have any problems, and I don't do anything special ...
- I use PostgreSQL, so I created my database using UTF-8 encoding.
- my Python modules start with "# -*- coding: utf-8 -*-".
- all my modules and my templates are utf-8 encoded (I use Vim, so I
use ":set encoding=utf-8", but it should work with any good
text-editor).
The only 'encoding trick' I use is when I want to print an exception,
catched from a bad database query. I need to do something like this :
===============
except Exception, detail:
print "blablabla : %s" % str(detail).decode('latin1')
return
===============
... since the exception message (which is in french) seems to be latin1
encoded.
That's all :)
Jonathan
ps : it should be the same with sqlite
sudo yum remove kdemultimedia
sudo yum install kdemultimedia-kmix
Unicode is a complex solution to a complex problem of meeting a simple need. The need is to permit software to handle the writing systems of (nearly) all the human languages of the world. The Unicode standard does this remarkably well, and most importantly, does it in such a way that you, the programmer, don't have to worry much about it.
What you do have to understand is that Unicode strings are multi-byte (binary) strings and therefore have some special requirements that ASCII strings do not. The good news is that you're using Python, which has a sensible approach to handling Unicode strings. Let's look at one:
Python tries to treat Unicode strings as much like ASCII strings as possible. For the most part, if you have a Unicode string in Python, you can work with it exactly like you would an ASCII string. You can even mingle them. For example, if you concatenate the above variables, you'll get a Unicode string that looks like this:
Since the one string is Unicode, Python automatically translates the other to Unicode in the process of concatenation and returns a Unicode result. (Be sure to read section 3.1.3 of the Python tutorial for more examples and detail.) The great consequence here is that, internally, your code doesn't have to worry much about what's Unicode: it just works.
So far, we've looked at Unicode strings as live objects in Python. They are straightforward enough. The trick is actually getting the Unicode string in the first place, or sending it somewhere else (to storage, for instance) once you're done with it.
Unicode in its native form will not pass through many common interfaces, such as HTTP, because those interfaces are only designed to work with 7- or 8-bit ASCII. Therefore, Unicode data is generally stored or transmitted through network systems in encoded form, as a string of ASCII characters. There are many possible ways to encode thusly. (The various encodings are documented in depth elsewhere.)
Encodings are a significant source of confusion for newcomers to Unicode. The common mistake is to think that an encoded string (of UTF-8, for instance) is the same thing as Unicode, when it's actually one of many possible ways to encode Unicode in ASCII form. There is only one Unicode. (You can play around with the Unicode database through Python's Unicodedata module.) There are many encodings, all of which point back to the one Unicode. Different encodings are more or less useful depending on your application.
In the web development context, there is only one encoding that will likely be of interest to you: UTF-8. For contrast, however, we will also look at UTF-16, another encoding that is particularly affiliated with XML. UTF-8 is the most common encoding in the web environment because it looks a lot like the ASCII equivalent of the text (at least until you start encountering extended characters or any of the thousands of glyphs that aren't part of ASCII). Consequently, UTF-8 is perceived as friendlier than UTF-16 or other encodings. More importantly, UTF-8 is the only Unicode encoding supported by most web browsers, although most web browsers support a large number of legacy non-Unicode encodings. On the other hand, UTF-16 looks like ASCII-encoded binary data. (Which it is.) Let's look at these two encodings.
The important thing to note is that the result of calling the encode method is an ASCII string. We've taken a Unicode string and encoded it into ASCII that can be stored or transmitted through any mechanism that handles ASCII, like the Web.
For comparison, let's look at the encoded versions of the following string:
In UTF-8 (note the ASCII equivalents showing through):
In UTF-16:
Now, let's decode these encoded strings in the python command line:
When we decode the string as foo and look at it, we get a Unicode string with Unicode escape characters for non-ASCII characters. The Python console (at least the one I'm using) doesn't implement a Unicode renderer and so it has to display the escape codes for the non-ASCII glyphs. However, if this same original string had been decoded by a web browser or text editor that did implement a Unicode renderer, you'd see all the correct glyphs (provided the necessary fonts were available!)
So, in the process of looking at these examples, we've introduced the one method and one function Python provides for encoding and decoding with Unicode strings:
In Python 2.2 and later, there's also a symmetric method for decoding (available only for 8-bit strings):
One of the nifty things about Python's encoding and decoding functions is that it's really easy to convert between encodings. For example, if we start with the following UTF-16, we can easily convert it to UTF-8 by decoding the UTF-16 and re-encoding it as UTF-8.
Now, let's take a step back and hypothesize a web application that has the following fundamental components:
You want this application to handle multi-lingual text, so you're going to take advantage of Unicode. The first thing you will probably want to do is set up a sitecustomize.py file in the Lib directory of your python installation and designate a Unicode encoding (probably UTF-8) as the default encoding for Python.
Important: as of Python 2.2, as far as I can tell, you can only call the setdefaultencoding method from within sitecustomize.py. You cannot perform this step from within your application! I don't understand why Guido set it up this way, but I'm sure he had his reasons.
This setting has a profound effect on python execution because your programs will all automatically encode Unicode strings to this encoding whenever:
You can, of course, bypass default encoding by manually encoding the string first with the .encode function, just as in the earlier examples.
If you don't set the default encoding to UTF-8, you will have to be rigorous about manually encoding Unicode data at appropriate times throughout your applications.
Note that the default encoding has little to do with decoding. (It merely serves as the default if you use the unicode function or decode method without specifying a codec.) You still must manually decode all encoded Unicode strings before you can use them. For example, if your servlet receives UTF-8 from a web browser POST, Apache will deliver that information as an ASCII string full of escape sequences, and your code will have to decode it as above with the unicode() function.
As of this writing, Webware does not meddle with decoding: it simply passes the POST through in the request object. If you are using dAlchemy's FormKit to handle web forms for your application, you can have FormKit automatically handle decoding. Otherwise, you need to find an appropriate place in your code to ensure that all incoming encoded Unicode gets decoded into Python Unicode objects before they get used for anything.
This brings up an important point that will haunt you as you start working with Unicode. It can be difficult to debug Unicode problems because one's development tools usually do not themselves implement Unicode rendering, or they only do so partially (which can be even worse!) You may not be able to trust what you see. For example, just because it looks "wrong" on the console doesn't mean it will look "wrong" in a web browser, properly decoded.
Now, when we try to print foo (above) in the console, which coerces the Unicode through the default encoding (UTF-8), we get a different kind of jibberish:
Here, the escape codes in the UTF-8 are being incorrectly interpreted by the console as extended ASCII escape codes. The result is garbage. (Your results may vary depending on the console you're using.) Knowing that my Python console does support extended ASCII (basically Latin-1), I could try encoding it as Latin-1 and printing the result:
The encoding attempt fails with an exception because there are no Cyrillic characters in Latin-1! Basically, I'm out of luck.
On the other hand, because in another example from above I'm only using characters that appear in extended ASCII, I can print the following string in the PythonWin console:
But if I try the exact same thing in a "DOS box" console, which evidently uses a different character set, I get crud:
In order for your Unicode web pages you look right, you have to make sure that any information you serve to web browsers goes along with the instruction that they treat it as encoded Unicode (UTF-8 in most cases). There are a couple ways to do this. The best is to configure your web server to specify an encoding in the header it sends along with your page. With Apache, you do this by adding an AddDefaultCharset line to your httpd.conf (see
You can also embed tags in your pages that are intended to tip off the browser to the nature of the data. Such META tags are theoretically of a lower precedence than the web server's header, but they might prove useful for some browsers or situations.
You can easily verify whether your encoding directives are working by hitting your pages with a browser and then looking in the drop-down menus of the browser for the encoding option. If the correct encoding is selected (automatically) by your browser, then your header instructions are set properly.
If the browser is expecting the right encoding and your Python's default encoding is set to match, you can confidently write your Unicode string objects as output. For instance, with Webware, you simply use self.write() as normal, and whether your Python strings are ASCII or Unicode, the browser gets UTF-8 and correctly displays the results.
Convention dictates that a well-behaved browser will also return form input in whatever encoding you've specified for the page. That means that if you send a user a form on a UTF-8 page, whatever they type into the boxes will be returned to you in UTF-8. If it doesn't, you're in for an interesting ride, because most web browsers default to ISO-8859-1 (Latin-1) encoding, which is not actually a Unicode encoding, and is in any case incompatible with UTF-8. If you try to decode Latin-1 as UTF-8, you will raise an exception. For example:
Luckily, you can use Python's unicode() and .encode methods to translate to and from Latin-1, and you can use Python's try/except structure to prevent crashes. What you have to understand is that it's all left up to you, and that includes trapping any invalid data that tries to enter your program.
The last detail is the database. Every database has its unique handling of Unicode (or lack thereof.)
In theory, you can always store Unicode in its ASCII-encoded form in any relational database. The downside is that you're storing ASCII gobbledygook, so you will have an awkward time taking advantage of the powerful filtration features of the SQL language. If all you want to do is stash and retrieve data in bulk, this may not be a problem. However, if you ask the database more sophisticated questions, such as for a list of all the names that include "Björn," the database won't find any, unless you ask it to match "Bj\xc3\xb6rn" instead. You can probably work around this issue, but most modern relational databases are now supporting the storage and handling of Unicode transparently.
It happens that PostGreSQL (as of this writing) only supports UTF-8 natively in and out of the database, so that is what I use with it. Microsoft SQL Server – like everything else Microsoft makes – uses an elusive system called MBCS (Multi-byte Character System) which is built (exclusively) into Windows. Other RDBMS will have their own preferences. In my experience, the database itself isn't really much of an issue when it comes to Unicode. The issue is the middleware your application uses to communicate with that database.
With PostGreSQL, I use pyPgSQL as the database interface for my web applications. PyPgSQL does a lot for me with regard to Unicode. When properly configured, I can confidently rely on it to handle any Unicode encoding and decoding between my application and the database. That means I can INSERT and UPDATE data in the database with python Unicode strings and it just works. I can also SELECT from the database and I get back Unicode objects that I don't have to decode myself.
With Microsoft SQL Server, I use ADO as my database interface. ADO performs similarly for SQL Server as pyPgSQL does for PostGreSQL, although ADO is only available for python applications running on win32. | http://pylab.blogspot.com/2006_09_01_archive.html | CC-MAIN-2017-34 | refinedweb | 4,465 | 59.74 |
Hi allIf I want to introduce a new kind of addresses in apr, what is there to take care of ?Assume, I would introduce a 64bit address instead of the traditional IPv4.Besides putting it in the union, I would also have to change the salen (to the complete length of the struct) and the ipaddr_len to the length of my address type - correct ? In addition to that the inet_ntop would have to be chosen correctly. Now where I'm wondering is about the rest - how to adopt the existing code into using a different addressing scheme. If I can open the functionality by calling the traditional socket just with a different family, how would I have to change the code ?Regards, Peter The referenced struct:struct apr_sockaddr_t { apr_pool_t *pool; char *hostname; char *servname; apr_port_t port; apr_int32_t family; /** How big is the sockaddr we're using? */ apr_socklen_t salen; /** How big is the ip address structure we're using? */ int ipaddr_len; /** How big should the address buffer be? 16 for v4 or 46 for v6 * used in inet_ntop... */ int addr_str_len; /** This points to the IP address structure within the appropriate * sockaddr structure. */ void *ipaddr_ptr; /** If multiple addresses were found by apr_sockaddr_info_get(), this * points to a representation of the next address. */ apr_sockaddr_t *next; /** Union of either IPv4 or IPv6 sockaddr. */ union { /** IPv4 sockaddr structure */ struct sockaddr_in sin;#if APR_HAVE_IPV6 /** IPv6 sockaddr structure */ struct sockaddr_in6 sin6;#endif#if APR_HAVE_SA_STORAGE /** Placeholder to ensure that the size of this union is not * dependent on whether APR_HAVE_IPV6 is defined. */ struct sockaddr_storage sas;#endif } sa;}; | http://mail-archives.apache.org/mod_mbox/apr-dev/200811.mbox/raw/%3Cee73e03b0811022140i552f9c31lec871f43641179ac@mail.gmail.com%3E/2 | CC-MAIN-2016-22 | refinedweb | 258 | 63.09 |
After applying the supplied function to each item of a given iterable, the map() function returns a map object which is an iterator of the results (list, tuple, etc.)
Historical development of the maps function
Computations in functional programming are performed by mixing functions that receive parameters and return a substantial value (or values). These functions do not change the program’s state or modify its input parameters. They convey the outcome of a calculation. Pure functions are the name given to these types of functions.
Theoretically, applications written functionally will be more accessible to:
- You to code and use each function separately.
- You can debug and test specific functions without looking at the rest of the application.
- You should be aware of this because you will not be dealing with state changes throughout the curriculum.
Functional programming often represents data with lists, arrays, and other iterables and a collection of functions that operate on and alter it. There are at least three regularly used ways for processing data in a functional style:
- Mapping is applying a transformation function to an iterable to create a new one. The transformation function is called on each item in the original iterable to produce items in the new one.
- Filtering is the process of generating a new iterable by applying a predicate or a Boolean-valued function on an iterable. It is possible to Filter off any items in the original iterable that cause the predicate function to return false elements in the new iterable.
- Applying a reduction function to an iterable to obtain a single cumulative result is known as reducing.
However, the Python community demanded some functional programming features in 1993. They wanted the following:
- Anonymous functions
- A map() function – is a function that returns a map.
- filter() function- is a function that allows you to filter data.
- reduce() function- is a function that reduces the size of an object.
Thanks to a community member’s input, several functional features are introduced to the language. Map(), filter(), and reduce() are now essential components of Python’s functional programming paradigm.
This tutorial will cover one of these functional aspects, the built-in function map(). You’ll also learn to use list comprehensions and generator expressions to achieve the same map() functionality in a Pythonic and legible manner.
Introduction to Python’s map()
You can find yourself in a position where you need to conduct the same action on all of the items in an input iterable to create a new iterable. Using Python for the loop is the quickest and most popular solution to this problem. You may, however, solve this problem without needing an explicit loop by using map().
You’ll learn how the map() works and how to use it to process and change iterables without using a loop in the parts that follow.
map() cycles through the items of an input iterable (or iterables) and returns an iterator that results from applying a transformation function on each item. map() accepts a function object and an iterable (or several iterables) as parameters and returns an iterator that yields transformed objects on-demand, according to the documentation. In a loop, map() applies the function to each item in the iterable, returning a new iterator with modified objects on demand. Any Python function taking the same number of arguments as the number of iterables you send to the map() qualifies. Note that map()’s first argument is a function object, which implies you must pass the function without running it. That is, without the need for a parentheses pair.
The transformation function is the first input to map(). Put another way, it’s the function that turns each original item into a new (transformed) one. Even though the Python documentation refers to this argument function, it can be any Python callable. Built-in functions, classes, methods, lambda, and user-defined functions are all included.
map() is usually a mapping function because it maps every input iterables’ item to a new item in an output iterable; map() does this by applying a transformation function to each item in the input iterable.
The syntax is as follows:
map(fun, iter)
Python’s map() Parameters
fun: It’s a function that a map uses to pass each element of an iterable to. It’s iterable that has to be mapped. The map() function accepts one or more iterables.
Returns: After applying the supplied function to each item of a given iterable, returns a list of the results (list, tuple, etc.) The map() (map object) returned value can then be provided to functions like list() (to build a list) and set() (to construct a set).
Example 1: program demonstrating how the map() function works
# Return double of n def multiplication(n): return n * n # We double all numbers using map() num_vals = (3, 4, 5, 6) result = map(multiplication, num_vals) print(list(result))
Example 2: Using lambda expression to achieve the results in example 1
We can alternatively engage lambda expressions to achieve the above result with the map.
# Mulitply all numbers using map and lambda num_vals = (3, 4, 5, 6) result = map(lambda x: x * x, num_vals) print(list(result))
Example 3: Using lambda and map
# Add two lists using map and lambda num_vals_1 = [3, 4, 5] num_vals_2 = [6, 7, 8] result = map(lambda x, y: x * y, num_vals_1, num_vals_2) print(list(result))
Example 4: Using the map on lists
# List of strings laptops = ['HP', 'Dell', 'Apple', 'IBM'] # map() can listify the list of strings individually result = list(map(list, laptops)) print(result)
Example 5: Calculating every word’s length in the tuple:
def funcCalculatingStringLength(n): return len(n) result = map(funcCalculatingStringLength, ('Apple', 'HP', 'Chromebook')) print(result)
Using map() with various function types
With the map, you can use any Python callable (). The callable must take an argument and return a tangible and meaningful value as the only requirement. Classes, instances that implement a specific method called call(), instance methods, class methods, static methods, and functions, for example, can all be used.
The map has several built-in functions that you can use. Think about the following scenarios:
num_vals = [-4, -3, 0, 3, 4] abs_values = list(map(abs, num_vals)) abs_values list(map(float, num_vals)) word_lengths = ["Codeunderscored", "is","the" ,"Real", "Deal"] list(map(len, word_lengths))
Any built-in function is used with map() as long as it takes an argument and returns a value.
When it comes to utilizing map(), employing a lambda function as the first argument is typical. When passing expression-based functions to map(), lambda functions are helpful. For example, using a lambda function, you may re-implement the example of the square values as follows:
num_vals = [1, 2, 3, 4, 5] squared_vals = map(lambda num: num ** 2, num_vals) list(squared_vals)
Lambda functions are particularly beneficial for using map() (). They can be used as the first argument in a mapping (). You can quickly process and change your iterables using lambda functions with map().
Multiple input iterables processing using a map ()
If you send several iterables to map(), the transformation function must accept the same number of parameters. Therefore, each iteration of the map() passes one value from each iterable to the function as a parameter. The iteration ends when the shortest iterable is reached.
Take a look at the following example, which makes use of pow():
list_one = [6, 7, 8] list_two = [9, 10, 11, 12] list(map(pow, list_one, list_two))
pow() returns x to the power of y given two arguments, x, and y. Using several arithmetic operations, you can merge two or more iterables of numeric values with this technique. The final iterable is limited to the length of the shortest iterable, which in this case is first it. Here are a few examples of how lambda functions are used to do various math operations on multiple input iterables:
list(map(lambda a, b: a - b, [12, 14, 16], [11, 13, 15])) list(map(lambda p, q, r: p + q + r, [12, 14], [11, 13], [17, 18]))
To combine two iterables of three elements, you perform a subtraction operation in the first example. The values of three iterables are added together in the second case.
Transforming String Iterables using Python’s map()
When working with iterables of string objects, you might want to consider using a transformation function to transform all items. Python’s map() function can come in handy in some instances. The examples in the following sections will show you how to use map() to alter iterables of string objects.
Using the str Methods
Using some of the methods of the class str to change a given string into a new string is a typical technique for string manipulation. If you’re working with iterables of strings and need to apply the same transformation to each one, map() and related string methods can help:
laptops_companies = ["Microsoft", "Google", "Apple", "Amazon"] list(map(str.capitalize, laptops_companies)) list(map(str.upper, laptops_companies)) list(map(str.lower, laptops_companies))
You may use map() and string methods to make a few modifications on each item in string it. Most of the time, you’d use methods like str.capitalize(), str.lower(), str.swapcase(), str.title(), and str.upper() that don’t require any additional arguments .
You can also utilize methods that take optional parameters with default values, such str.strip(), which takes an optional argument called char and removes whitespace by default:
laptops_companies = ["Microsoft", "Google", "Apple", "Amazon"] list(map(str.strip, laptops_companies ))
A lambda function gives arguments rather than relying on the default value. When you use str.strip() in this way, you depend on the char’s default value. In this scenario, map() removes all whitespace from the with spaces elements. For example, when processing text files, this technique can come in helpful when you need to delete trailing spaces (or other characters) from lines. Keep in mind that removing the newline character with the str.strip() without a custom char will also delete the newline character on the off chance that this is the case.
Using map() in conjunction with other functional tools
So far, you’ve learned how to use map() to perform various iterable-related tasks. You can conduct more complex changes on your iterables if you use map() in conjunction with other functional tools like filter() and reduce(). That’s what the following sections are going to be about.
filter() and map()
You may need to process an input iterable and return another iterable due to removing undesired values from the input iterable. In that instance, Python’s filter() function would be a decent choice. The built-in function filter() accepts two positional arguments:
- function is a predicate or a Boolean-valued function, which returns True or False depending on the input data.
- Any Python iterable will be used as iterable.
Filter() utilizes the identity function if you pass None to function. It means that filter() will examine each item in iterable for its truth value and filter out any wrong things. The input iterables items’ for which filter() returns True is returned by filter().
Consider the following scenario: you need to calculate the square root of all the items in a list. You can use map() and filter() to accomplish this. Because the square root isn’t specified for negative numbers, you’ll get an error because your list potentially contains negative values:
import math math.sqrt(-25)
If the argument is a negative number, math.sqrt() throws a ValueError. To avoid this problem, use filter() to remove any negative numbers and then find the square root of the positive ones that remain. Consider the following scenario:
import math def chek_if_positive(val): return val >= 0 def sanitized_sqrt(nums): vals_cleaned = map(math.sqrt, filter(chek_if_positive, nums)) return list(vals_cleaned) sanitized_sqrt([64, 16, 49, -25, 4])
chek_if_positive() takes a number as an argument and returns True if the number is greater than or equal to zero. To remove all negative values from numbers, send chek_if_positive() to filter(). As a result, map() will only process positive values, and math.sqrt() will not throw a ValueError.
reduce() and map()
Reduce() in Python is a function found in the functools module of the Python standard library. reduce() is a crucial Python functionality that applies a function to an iterable and reduce it to a single cumulative value. Reduction or folding are terms used to describe this type of action. reduce() requires the following two arguments:
- Any Python callable that takes two arguments and returns a value qualifies as a function.
- Any Python iterable can be used as iterable.
reduce() will apply the function to all of the elements in the iterable and compute a final value cumulatively.
Here’s an example of how to use map() and reduce() to compute the total size of all the files in your home directory:
import functools as fn import operator import os import os.path code_files = os.listdir(os.path.expanduser("~")) fn.reduce(operator.add, map(os.path.getsize, code_files))
To acquire the path to your home directory, you use the os.path.expanduser(“~”) in this example. Then, on that path, you call os.listdir() to receive a list of all the files that live there.
os.path is used in the map() operation.
To determine the size of each file, use getsize(). To find the total size of all files, use the add() function. Finally, you utilize operator with reduce(). The final result is the home size’s directory in bytes.
Although reduce() can address the problem in this section, Python also has other tools that can help you develop a more Pythonic and efficient solution. To calculate the total size of the files in your home directory, for example, you can use the built-in function sum():
import os import os.path underscored_files = os.listdir(os.path.expanduser("~")) sum(map(os.path.getsize, underscored_files))
This example is significantly more readable and efficient than the previous one. You can look for further information on Python’s reduce(): From Functional to Pythonic Style, how to use reduce(), and which alternative tools you may use to replace it in a Pythonic fashion.
Processing Iterables(Tuple-Based) with starmap()
Python’s itertools have a function called starmap(). starmap() creates an iterator that applies a function to the arguments in a tuple iterable and returns the results. It comes in handy when working with iterables that have previously been grouped into tuples.
The primary difference between map() and starmap() is that the latter uses the unpacking operator () to unpack each tuple of arguments into many positional arguments before calling its transformation function. As a result, instead of function(arg1, arg2,… argN), the transformation function is called function(args).
According to the official documentation, starmap() is identical to the following Python function:
def starmap(function, iterable): for args in iterable: yield function(*args)
This function’s for loop iterates through the items in iterable, returning altered objects as a result. In the call to function(*args), the unpacking operator is used to unpack the tuples into many positional arguments. Examples of starmap():
from itertools import starmap list(starmap(pow, [(7, 12), (9, 8)]))
Conclusion
The map() function performs a specified function for each item in an iterable. The item is passed as a parameter to the function. The map() function in Python allows you to process and transform all elements in an iterable without using an explicit for loop, a technique known as mapping. When you need to apply a transformation function to each item in an iterable and change it into a new iterable, map() comes in handy. In Python, map() is one tool that supports functional programming.
You’ve learned how the map() works and how to use it to handle iterables in this tutorial. You also learned about several Pythonic utilities that you can use in your code to replace map(). | https://www.codeunderscored.com/maps-function-in-python-with-examples/ | CC-MAIN-2022-21 | refinedweb | 2,641 | 53.1 |
A values and it is quick to retrieve values. Some dictionaries are faster at adding new values, and others are optimized for retrieval. One example of a dictionary type is the hashtable.
A hashtable is a dictionary optimized for fast retrieval. The principal methods and properties of Hashtable are summarized in Table 9-6.
In a Hashtable, each value is stored in a "bucket." The bucket is numbered, much like an offset into an array.
Because the key may not be an integer, it must be possible to translate the key (e.g., "Massachusetts") into a bucket number. Each key must provide a GetHashCode( ) method that will accomplish this magic.
Remember that everything in C# derives from object. The object class provides a virtual method GetHashCode( ), which the derived types are free to inherit as is or to override.
A trivial implementation of a GetHashCode( ) function for a string might simply add up the Unicode values of each character in the string and then use the modulus operator to return a value between 0 and the number of buckets in the Hashtable. It is not necessary to write such a method for the string type, however, as the CLR provides one for you.
When you insert the values (the state capitals) into the Hashtable, the Hashtable calls GetHashCode( ) on each key provided. This method returns an int, which identifies the bucket into which the state capital is placed.
It is possible, of course, for more than one key to return the same bucket number. This is called a collision. There are a number of ways to handle a collision. The most common solution, and the one adopted by the CLR, is simply to have each bucket maintain an ordered list of values.
When you retrieve a value from the Hashtable, you provide a key. Once again the Hashtable calls GetHashCode( ) on the key and uses the returned int to find the appropriate bucket. If there is only one value, it is returned. If there is more than one value, a binary search of the bucket's contents is performed. Because there are few values, this search is typically very fast.
The key in a Hashtable can be a primitive type, or it can be an instance of a user-defined type (an object). Objects used as keys for a Hashtable must implement GetHashCode( ) as well as Equals. In most cases, you can simply use the inherited implementation from Object.
Hash tables are dictionaries because they implement the IDictionary interface. IDictionary provides a public property Item. The Item property retrieves a value with the specified key. In C#, the declaration for the Item property is:
object this[object key] {get; set;}
The Item property is implemented in C# with the index operator ([]). Thus, you access items in any Dictionary object using the offset syntax, as you would with an array.
Example 9-17 demonstrates adding items to a Hashtable and then retrieving them with the Item property.
namespace Programming_CSharp { using System; using System.Collections; public class Tester { static void Main( ) { // Create and initialize a new Hashtable. Hashtable hashTable = new Hashtable( ); hashTable.Add("000440312", "Jesse Liberty"); hashTable.Add("000123933", "Stacey Liberty"); hashTable.Add("000145938", "John Galt"); hashTable.Add("000773394", "Ayn Rand"); // access a particular item Console.WriteLine("myHashTable[\"000145938\"]: {0}", hashTable["000145938"]); } } } Output: hashTable["000145938"]: John Galt
Example 9-17 begins by instantiating a new Hashtable. We use the simplest constructor accepting the default initial capacity and load factor (see the sidebar, Load Factor), the default hash code provider, and the default comparer.
We then add four key/value pairs. In this example, the social security number is tied to the person's full name. (Note that the social security numbers here are intentionally bogus.)
Once the items are added, we access the third item using its key.
Dictionary collections provide two additional properties: Keys and Values. Keys retrieves an ICollection object with all the keys in the Hashtable, as Values retrieves an ICollection object with all the values. Example 9-18"); // get the keys from the hashTable ICollection keys = hashTable.Keys; // get the values ICollection values = hashTable.Values; // iterate over the keys ICollection foreach(string key in keys) { Console.WriteLine("{0} ", key); } // iterate over the values collection foreach (string val in values) { Console.WriteLine("{0} ", val); } } } } Output: 000440312 000123933 000773394 000145938 George Washington Abraham Lincoln Ayn Rand John Galt
Although the order of the Keys collection is not guaranteed, it is guaranteed to be the same order as returned in the Values collection.
IDictionary objects also support the foreach construct by implementing the GetEnumerator method, which returns an IDictionaryEnumerator.
The IDictionaryEnumerator is used to enumerate through any IDictionary object. It provides properties to access both the key and value for each item in the dictionary. Example 9-19"); // Display the properties and values of the Hashtable. Console.WriteLine( "hashTable" ); Console.WriteLine( " Count: {0}", hashTable.Count ); Console.WriteLine( " Keys and Values:" ); PrintKeysAndValues( hashTable ); } public static void PrintKeysAndValues( Hashtable table ) { IDictionaryEnumerator enumerator = table.GetEnumerator( ); while ( enumerator.MoveNext( ) ) Console.WriteLine( "\t{0}:\t{1}", enumerator.Key, enumerator.Value ); Console.WriteLine( ); } } } Output: hashTable Count: 4 Keys and Values: 000440312: George Washington 000123933: Abraham Lincoln 000773394: Ayn Rand 000145938: John Galt | http://etutorials.org/Programming/Programming+C.Sharp/Part+I+The+C+Language/Chapter+9.+Arrays+Indexers+and+Collections/9.8+Dictionaries/ | CC-MAIN-2016-50 | refinedweb | 865 | 58.48 |
> whois.zip > MITCPYRT.H
/* *---------------------------------------------------------------- * * $Source: G:/TMAIL\RCS\MITCPYRT.H $ * $Revision: 2.1 $ * $Date: 1993/06/17 03:33:35 $ * $State: Exp $ * $Author: pbh $ Max E. Metral * $Locker: pbh $ * * $Log: MITCPYRT.H $ * Revision 2.1 1993/06/17 03:33:35 pbh * Unknown changes. * * Revision 2.0 93/05/12 10:44:00 pbh * beta 0.13 check in * MEWEL support added * * Revision 0.3 92/08/27 10:39:48 pbh * 0.2a alpha check in * * Revision 0.2 92/06/26 12:12:49 pbh * alpha 0.1a * * Revision 0.1 92/05/21 14:29:01 pbh * Initial check in, working pre alpha, * pbh's first pass at memetral's code * * * * *---------------------------------------------------------------- */ #ifndef MIT_COPYRIGHT #define MIT_COPYRIGHT /* * This software is being provided to you, the LICENSEE, by the * Massachusetts Institute of Technology (M.I.T.) under the following * license. By obtaining, using and/or copying this software, you agree * that you have read, understood, and will comply with these terms and * conditions: * * Permission 1994. */ #endif /*MIT_COPYRIGHT*/ | http://read.pudn.com/downloads/sourcecode/internet/1658/WHOIS/MITCPYRT.H__.htm | crawl-002 | refinedweb | 167 | 71.71 |
Hi,
We are running complex Java Sikuli code inside a Docker container, with the following specification :
Sikuli 1.1.2
Java : openjdk version "1.8.0_171"
System : Ubuntu 16.04.4 LTS
Xfce Desktop 4.12
Docker 18.09.0
Sikuli WaitScanRate: 5
Sikilu MinSimilarity: 0.7
-Dsikuli.Debug=3
We are testing an Angular web application running in Chromium Version 64.0.3282.167
While searching for images, we avoid searching in the whole screen and we give enough wait time for Sikuli to do its work. We use only region.exits methods (not wait or found). We have implemnted almost all the Sikuli best practices we found in the documentation and elsewhere...
Each test suite is run inside a fresh and distinct Docker container, and we launch up to five suites (hence maximum of five containers, but usually one or two container) on the same Docker machine, on the same time. No CPU or memory issues were observed.
Randomly and in different code methods, we get image not found :-(
Evry time an image is not found we take a whole screenshot for debugging. When we re-run the same image search, in the same env (In the same Docker), on machine developper, using the taken debug screenshot, and the same searched image, we are not able to reproduce the unfound image search :-( And we do not have enough details with "-Dsikuli.Debug=3"
How to get more details and logs about what Sikuli is doing and what change from execution to another?
Is there any technique to let Sikuli save its own screenshot where it searched after an image search failure?
Could any lack of graphical ressources impact Sikuli work? How to debug that?
Could any lack of physical resources (CPU/Memory) impact Sikuli image search? (we are not able to reproduce the bug....)
We spent days of investigation to make this code stable :-( and any suggestions or advices are more than welcome.
Many Thanks.
Question information
- Language:
- English Edit question
- Status:
- Answered
- For:
- Sikuli Edit question
- Assignee:
- No assignee Edit question
- Last query:
- 2019-10-22
- Last reply:
- 2019-10-23
No. For more debug, I take a screenshot of the page where Sikuli was not able to find the Image I search... just for debugging purpose....
example to capture screenshot when the image do not appear
import datetime
import shutil
SCREENSHOT = getBundlePath(
#------
def capture_screen():
tmppath = SCREEN.
capturefile = datetime.
savepath = os.path.
shutil.
#------
if not exists(
capture_
Thx for your help. But taking the screenshot is not my problem :-(
My issue is :
Randomly and in different code methods, we get image not found :-( I'm looking for ways to debug what's happening and to make my code more stable. I laready followed all best practices of Sikuli, and did not get yet the reason of this instability.
Thx again
--- I laready followed all best practices of Sikuli, and did not get yet the reason of this instability.
Then the last possible reason is, that in those situations the image does not appear within the given wait time (3 seconds in the standard).
Settings.
As a general advice: switch to the latest SikuliX version 2.0.0 (latest stable on MavenCentral)
This is the release version of the latest 1.1.4-SNAPSHOT.
BTW:
Sikuli WaitScanRate: 5
... might be a misunderstanding and does not maske sense in your situation anyways:
you are saying, that SikuliX should do 5 retries in 1 second, which usually in the average is not possible (only in cases where you consequently work with rather small regions). Raising the waitscanrate costs much cpu and storage. On slower to normal machines the standard of 3 is hard enough.
Congratulations for v 2 version :-) Yes Il will do that switch.
However for Settings.
Ok for Sikuli WaitScanRate, I let it to its default value.
Many Thanks.
--- With a timeout >= 3 seconds already. Any risk with that ?
No risk with that.
You can always use individual timeouts with wait/exists.
You can be generous with the value: as high as it may be, if the image appears earlier, the find is a success and terminated.
Do you mean there is an image in the screenshot you take when an image is not found? | https://answers.launchpad.net/sikuli/+question/685325 | CC-MAIN-2020-10 | refinedweb | 707 | 65.01 |
>>>>> "Albert" == Albert Cervera Areny <albertca@jazzfree.com> writes: Albert> Hi, I'm trying to compile a proogram I've developed using the Albert> standard string and vector classes and it does compile without Albert> problems using g++-2.95 but if I just change the links for g++ Albert> and gcc from 2.95 to 3.2 I get some errors as if string and Albert> vector classes where not defined. Should I add something to be Albert> able to compile it? It is most likely because you are not using the std:: namespace. GCC 2.95 was more lenient in enforcing the std:: namespace, but GCC 3.x adheres to the standard much more closely, so you either need to add std:: in front of string and vector, or add a "using namespace std;" line at the top of your files. --psbN7Yd3eYj.pgp
Description: PGP signature | https://lists.debian.org/debian-user/2003/04/msg02409.html | CC-MAIN-2017-39 | refinedweb | 147 | 74.59 |
Scripting Games 2012 comments: #16 reading environmental variables
Windows maintains a set of environmental variables. Some, but not all, can be seen via the env: PowerShell drive
Get-ChildItem -Path env:
You can also use WMI to see some of the variables
Get-WmiObject -Class Win32_Environment | ft Name, VariableValue –a
Now how do you read them in your scripts?
I noticed a lot of people doing this
$name = (Get-Item env:\Computername).Value
It works but its a bit long winded. A better method is this
$name = $env:COMPUTERNAME
$env: is the environment provider surfaced as a namespace
You can also use this technique with other providers e.g.
PS> $variable:MaximumAliasCount
4096
It doesn’t work with all providers e.g. the registry.
Comment on this Post | http://itknowledgeexchange.techtarget.com/powershell/scripting-games-2012-comments-16-reading-environmental-variables/ | CC-MAIN-2017-47 | refinedweb | 128 | 62.78 |
‘Hello World’ with WSGI
August 31st, 2006. Install wsgiref
wsgiref is the wsgi reference implementation that is now part of python 2.5 standard library. If you are running python version less than 2.5 you will want to do:
$ sudo easy_install wsgiref
2. Get a web server
We’ll use the wsgiref simple server as detailed in the docs (if you want to use a ‘proper’ webserver see the section below on making your wsgi app available via fastcgi). Create a python module, simpletest.py say, and insert:
from wsgiref.simple_server import make_server, demo_app httpd = make_server('', 8000, demo_app) print "Serving HTTP on port 8000..." # Respond to requests until process is killed httpd.serve_forever() # Alternative: serve one request, then exit ##httpd.handle_request()
3. Run it
Start the server:
$ python simpletest.py
Then visit
Bingo! We’ve got our first working wsgi app (demo_app should output ‘Hello world!’ followed by a list of variable values).
4. Make our own Hello World app
We haven’t yet written anything ourselves — we’re just using the demo_app bundled with wsgiref. So change simpletest.py to be:
def simple_app(environ, start_response): """Simplest possible application object""" status = '200 OK' response_headers = [('Content-type','text/plain')] start_response(status, response_headers) return ['My Own Hello World!\n'] from wsgiref.simple_server import make_server, demo_app httpd = make_server('', 8000, simple_app) print "Serving HTTP on port 8000..." # Respond to requests until process is killed httpd.serve_forever()
Run this and visit and you should see a blank page containing ‘My Own Hello World!’.
5. Using a Class
Finally for completeness here’s the same application but done as a class:
class SimpleApp: """Produce the same output, but using a class """ def __init__(self, environ, start_response): self.environ = environ self.start = start_response def __iter__(self): status = '200 OK' response_headers = [('Content-type','text/plain')] self.start(status, response_headers) yield 'My Own Hello world!\n' from wsgiref.simple_server import make_server, demo_app # httpd = make_server('', 8000, simple_app) # the same but using a class httpd = make_server('', 8000, SimpleApp) print "Serving HTTP on port 8000..." # Respond to requests until process is killed httpd.serve_forever()
Serving an WSGI App via FastCGI
This section explains how to serve your WSGI app via FastCGI (other methods using scgi or even cgi take an almost identical approach).
1. Install a fastcgi interface to wsgi:
Use flup which provides a fastcgi and scgi interface to wsgi:
$ sudo easy_install flup
2. Install a simple standalone fastcgi implementation:
- Download
- Install this somewhere you can import it as import fcgi
3. Attach your wsgi application to this fcgi server
Create a python file (server.fcgi) and paste in the following:
#!/usr/bin/env python from myapplication import app # Assume app is your WSGI application object from fcgi import WSGIServer WSGIServer(app).run()
Now you can just point your webserver at this file (make sure you’ve configured it to handle .fcgi files using fastcgi) and your app is available via fastcgi. | http://www.rufuspollock.org/2006/08/31/a-very-simple-introduction-to-wsgi/ | crawl-002 | refinedweb | 482 | 59.3 |
For some reason I'm having a brain cramp. I'm guessing this is really bad due to the functions within the vector class, but figured I might as well query "the mob".
Bad, okay, or just safer to swap out the STL vector with an int*?Bad, okay, or just safer to swap out the STL vector with an int*?Code:
#include <vector>
struct foo
{
std::vector<int> fooVec;
};
foo var1;
memset(&var1, 0, sizeof(foo));
(Yes in my example zeroing the memory is pointless, but imagine it having like 15+ variables in addition to the vector that I wish to quickly initialize to Zero) | http://cboard.cprogramming.com/cplusplus-programming/96778-simple-question-memset-vector-int-0-sizeof-vector-int-printable-thread.html | CC-MAIN-2016-18 | refinedweb | 106 | 68.5 |
Hi Lindie,
No answer yet. Thanks for checking in with me.
I just tried this as well. I almost have what I want. I am just missing how to combine the extensions and their file size.
I am getting results like this:
Extension .ico | Filesize 19790 bytesExtension .ico | Filesize 19790 bytesExtension .pyd | Filesize 132096 bytesExtension .dll | Filesize 73216 bytesExtension .pyd | Filesize 9728 bytes
I would like this to be more of a summary where the extension are listed only once with the file sizes summed. My script is below.
import osimport glob
os.chdir(r'C:\Python33\DLLs')
for filename in glob.glob ('*'): filesize = os.path.getsize(filename) extension = os.path.splitext(filename)[1] print("Extension",extension,"|","Filesize", filesize,"bytes")
Hello, Thank you, XXXXX XXXXX continue to look for a professional to assist you. Please let me know if I can be of any further assistance while you wait. Best,
Hello,
Thank you for responding. I also need to include a count of the particular extension. This is a pretty long post and perhaps this requirment was lost. It is in the first post of mine. Do you think you would be able to include a count for the extension as shown below - just a number at the end of the row?
Extension .dll | Total space used (NNN) NNN-NNNNbytes | 2
Awesome. Thank you. I am not in a super rush and I can wait. No problem. I am new to programming and just can't figure out how to get the the file size, extension list, and count all together.
Thanks.
I did not think of min/max/average. I you could do that it would be great! That would be all I needed. | http://www.justanswer.com/computer-programming/7t4cl-give-directory-report-extensions.html | CC-MAIN-2016-22 | refinedweb | 285 | 79.16 |
I wrote this little code after learning the first 2 lessons of the C++ series to test myself, but I need a little help understanding a few features of C++.
It runs just how I want it to, but I still have 3 questions.It runs just how I want it to, but I still have 3 questions.Code:#include <iostream> using namespace std; int main() { int answer; cout<<"What is 3 times 4?\n"; cin>> answer; cin.ignore(); if ( answer < 8 || answer > 16 ) { cout<<"You answered: " << answer << "\n That is way off, try again"; } else if ( answer >= 8 && answer <= 16 && answer != 12) { cout<< "You answered: " << answer << "\n That is close, try again."; } else if ( answer == 12 ) { cout<< "You answered: " << answer << "\n That is correct!"; } cin.get(); }
1. About all those curly braces, why do the If, else, and else if statement need to have the following output in curly braces? An in depth explanation of these curly braces would be highly appreciated.
2. What is up with the int main(); thing? From what I understand your stating that there is an integer variable named main(). So, what does this int main() thing really mean/what does it do?
3. the cin.ignore(); feature is also tough to follow. I know that it gets rid of the enter after the inputted numbers, but in what instances is this necessary? Anytime i have a value inputted, followed by a unwanted enter?
I know this is all lengthy and very wordy, but any help is again, highly appreciated.
Thanks for your time and help. | http://cboard.cprogramming.com/cplusplus-programming/127695-please-help-basic-cplusplus-syntax-cin-ignore-feature.html | CC-MAIN-2015-27 | refinedweb | 262 | 82.65 |
Results 1 to 4 of 4
hi,everyone i got a troublesome issue.My camera's chip is OV511 and i use some ioctl() Functions to print the following informations about it.such as: type=513,channels=1,maxwidth=640,maxheight:480,min width:64,minheight: 48 brightness=40448,contrast=0,hue=32768 colour=49152,whiteness=26880,depth=12 palette=10 ...
- Join Date
- Apr 2007
- 3
problem with USB camera based on Video4Linux
type=513,channels=1,maxwidth=640,maxheight:480,min width:64,minheight: 48
brightness=40448,contrast=0,hue=32768
colour=49152,whiteness=26880,depth=12
palette=10 /* YUV420 format*/
chromakey=0,clipcount=0,width: 320,height: 240
VID_TYPE_SUBCAPTURE 512 /* Can capture subareas of the image */
i grabbed one frame image successfully by using mmap case and throw it to frambuffer which i'd mmapped.Then this image display on screen immediately but just in black and white colour.All the parameters when calling ioctl() Functions such ioctl(vd->fd, VIDIOCGCAP, &(vd->capability),ioctl(vd->fd, VIDIOCGPICT, &(vd->picture),etc haven't been changed while using the default only.I guess the datas throwed to frambuffer should be conveted into RGB format.if it's true how can i do that?Anyone has some useful Function so i can do it easily? I need you help and i'm appreciate.Thanks a lot.
At first, if I understand your post aright, then you should probabably have better posted this in an application section as the hardware of this peripheral seems to work perfectly.
Where are you using those ioctl functions? I presume in a self-written or modified application program or do those ioctls pop up in a GUI program?
Don't you get that frame as a file onto the disks of your computer where you could apply all sorts of programs to that file containing the frame?Bus Error: Passengers dumped. Hech gap yo'q.
- Join Date
- Apr 2007
- 3
Thank you for reply above all,i show those head files in my program.
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <stdlib.h>
#include <linux/types.h>
#include <linux/videodev.h>
#include <linux/mman.h>
#include <linux/fb.h>
all ioctl() functions i used is just from above files.After I grabbed one frame image,i can save it into a iamge file or display it directly on screen by frambuffer.But it's just black and white only anyway,that's the point.
Well i'll adopt your suggestion to post in Application section,and thank you again.
I guess when you copy the image file from the camera file system onto your computer's file system and it is already b/w then maybe it's a configuration of the camera itself.
Because a simple file copy is highly unlikely than it changes the contents of the image file.
But if you "grab" as you say the file already with your program then it might copy AND modify it, not at the same time, but in one go, as a sequential process. In that case one of your arguments in one of those ioctl calls might not be the right one.Bus Error: Passengers dumped. Hech gap yo'q. | http://www.linuxforums.org/forum/hardware-peripherals/91636-problem-usb-camera-based-video4linux.html | CC-MAIN-2014-41 | refinedweb | 550 | 68.06 |
This is your resource to discuss support topics with your peers, and learn from each other.
11-19-2012 05:24 PM
Is there any documentation on Webview fuctions allowing control back, foward refresh. if someone could point me in the right direction that would be great thanks in advance.
Solved! Go to Solution.
11-19-2012 07:37 PM
Quick example I threw together... its pretty self explanatory. I will do a full write up on my blog soon:
import bb.cascades 1.0 Page { content: Container { background: Color.create(0.25, 0.25, 0.25) TextArea { text: "Text up here" editable: false textStyle { // size: 30 fontWeight: FontWeight.Bold color: Color.White } } Container { layout: StackLayout { orientation: LayoutOrientation.LeftToRight } Button { id: button1 text: "Back" onClicked: { myWebView.goBack(); } } Button { id: button2 text: "Forward" onClicked: { myWebView.goForward(); } } Button { id: button3 text: "Refresh" onClicked: { myWebView.reload(); } } } Container { ScrollView { scrollViewProperties.pinchToZoomEnabled: true scrollViewProperties.scrollMode: ScrollMode.Both WebView { id: myWebView url: "" } } } } }
Reference from documentation (
Navigates to the previous page in the navigation history.
If there's no previous page, this method does nothing.
Navigates to the next page in the navigation history.
If there's no next page, this method does nothing.
Stops any loading in progress.
If no loading is in progress, this method does nothing.
Reloads the current page.
11-19-2012 08:02 PM
Oh my god your my hero lol thanks great example
11-23-2012 05:19 AM - edited 11-23-2012 05:53 AM
I was looking for these other WebView functions are the avaible yet
Thanks
11-23-2012 05:51 AM
11-23-2012 05:53 AM
ok besides the normal dev cascades site is there other doc i can find them.
11-23-2012 08:24 AM
11-23-2012 08:18 PM | http://supportforums.blackberry.com/t5/Native-Development/WebView-Functions/m-p/2001541/highlight/true | CC-MAIN-2015-48 | refinedweb | 297 | 61.22 |
On Fri, 2004-04-30 at 19:10, Enrico Zini wrote: > Tags are grouped by "facet" or "namespace"[1] > [1] we still haven't agreed on the term, and we have more > important things to do :) You've come to the right place. At debian-devel we are always willing to argue over the meanings of words. First notice that a facet is something that an object has, whereas a namespace is something that a name of an object occupies. Thus, your two terms aren't really alternatives to each other, but complements of each other. One is a material-mode term, the other a formal-mode term. I suggest you say that the tags belong to tagspaces; that each tagspace corresponds to a particular facet of packages; and that each tag signifies a particular property that packages have. -- Thomas | https://lists.debian.org/debian-devel/2004/04/msg02862.html | CC-MAIN-2016-40 | refinedweb | 140 | 66.37 |
String getLastCharacter(String arg) { if (arg == null) { return null; } if (arg.equals("")) { return ""; } int len = arg.length(); return arg.substring(len - 1, len); }There are two guard clauses in this code to deal with the JavaProblems mentioned above. When dealing with Java Strings, it is common for the guard clauses to take up more space than the meat of the code. Also, both guard clauses deal with the same concept: what if the string is too short to have a last character? In one case, the string is null (and thus has zero characters). In the second case, it exists, and has zero characters. Refactored Example:
public class Is { static boolean nullOrEmpty(String arg) { if (arg == null) { return true; } if (arg.length() == 0) { return true; } return false; } } String getLastCharacter(String arg) { if (Is.nullOrEmpty(arg)) { return arg; } int len = arg.length(); return arg.substring(len - 1, len); }By using a class name of "Is" instead of "StringUtils?", the calling code is much more readable. For example, "if (Is.nullOrEmpty(arg))" versus "if (StringUtils?.isNullOrEmpty(arg))". Because the class name is "Is", the externally called method names do not need to start with "is". Because the externally called method name is static, there is no need for a variable declaration in the client code. This avoids code bloat. Resulting Context: Known Uses: 1. HexCalc? (a BaseSix ScientificCalculator? under development) -- JasperPaulsen RelatedPattern?s: UncleBobsNamingConventions, HelperPattern
static boolean nullOrEmpty(String arg) { return arg==null || arg.isEmpty(); }If this successfully compiles, and the people reading/writing the code find this easier to read, yes. Otherwise, no. In an actual program, the method definition appears just once, whereas it is used many times. A modest amount of CodeBloat is acceptable if it occurs OnceAndOnlyOnce. | http://c2.com/cgi/wiki?IsDot | CC-MAIN-2014-52 | refinedweb | 289 | 61.02 |
Lettuce features a number of built-in steps for Django to simplify the creation of fixtures.
Lettuce can automatically introspect your available Django models to create fixture data, e.g.
Background: Given I have options in the database: | name | value | | Lettuce | Rocks |
This will find a model whose verbose name is options. It will then create objects for that model with the parameters specified in the table (i.e. name=Lettuce, value=Rocks).
You can also specify relational information. Assuming a model Profile with foreign key user and field avatar:
Background: Given user with username "harvey" has profile in the database: | avatar | | cat.jpg |
To create many-to-many relationships, assuming User has and belongs to many Group objects:
Background: Given user with username "harvey" is linked to groups in the database: | name | | Cats |
For many-to-many relationship to be created, both models must exist prior to linking.
Most common data can be parsed, i.e. true/false, digits, strings and dates in the form 2013-10-30.
For more complex models that have to process or parse data you can write your own creating steps using the creates_models decorator.
from lettuce.django.steps.models import (creates_models, reset_sequence, hashes_data) @creates_models(Note) def create_note(step): data = hashes_data(step) for row in data: # convert the author into a user object row['author'] = get_user(row['author']) Note.objects.create(**row) reset_sequence(Note)
Two steps exist to test models.
Then features should be present in the database: | name | value | | Lettuce | Rocks | And there should be 1 feature in the database
You can also test non-database model attributes by prefixing an @ to the attribute name. Non-database attributes are tested after the records are selected from the database.
Then features should be present in the database: | name | value | @language | | Lettuce | Rocks | Python |
There are 6 steps that allow you to do a reasonably comprehensive test of sending email, as long as you use Django’s default django.core.mail functionality.
Check the number of emails sent:
Then I have sent 1 email
A more readable step also exists for checking no mail was sent:
Then I have not sent any emails
Check if the body of an email contains the following multiline string:
Then I have sent an email with the following in the body: """ Lettuce is a BDD tool for python, 100% inspired on cucumber. """
Check if part of an email (subject, body, from_email, to, bcc, cc) contains the given text somewhere:
Then I have sent an email with "Lettuce" in the body
You should always test failure cases for your features. As such, there’s a step to make sure that sending email fails as expected. This will cause SMTPException to always be raised:
Given sending email does not work
At some point in your tests, you will likely want to clear your outbox of all previous changes. To clear your emails, and reset any brokenness caused by a previous sending email does not work step, you can use:
Given I clear my email outbox
It is likely that you want this to run after every test to clean up. To do this, simply add the following to your terrain.py:
from lettuce import after, before from lettuce.django.steps.mail import mail_clear
@before.each_background def reset_email(lettuce_object):
mail_clear(lettuce_object) | http://lettuce.it/reference/django.html | CC-MAIN-2019-13 | refinedweb | 549 | 58.42 |
Support Vector Machine: Introduction
This article was published as a part of the Data Science Blogathon
Introduction
In this article, we will be discussing Support Vector Machines. Before we proceed, I hope you already have some prior knowledge about Linear Regression and Logistic Regression. If you want to learn Logistic Regression, you can click here. You can also check its implementation here. By the end of this article., you will get to know the basics involved in the Support Vector Machine.
Table of Contents:
1. What is a Support Vector Machine?
2. What is the decision rule in SVM?
3. Determining the width of the margin boundaries
4. Maximising the width
5. Determining the hyperplane in N-dimensions
What is a Support Vector Machine?
Support Vector Machine is a supervised learning algorithm that is used for both classification and regression analysis. However, data scientists prefer to use this technique primarily for classification purposes. Now, I think we should now understand what is so special about SVM. In SVM, the data points can be classified in the N-dimensional space. For example, consider the following points plotted on a 2- Dimensional plane:
Image Source:
If you look at the left part in the image, you will notice that the plotted data points are not separable with linear equations in the given plane. Does it mean that it cannot be classified? Well, “NO”! Of course, they can be classified. Just look at the image on the right side. Did you find something interesting? You guessed it right, it is separable in 3- Dimensional plane. Now the data points are separated not with the linear classifier in 2- dimensional plane, but with the hyperplane in 3- dimension.
That is why SVMs are so special!
What are the margin boundaries in SVM?
Now we would like our model to predict the new data very well. Of course, it can predict the training data, but at the very least, we would like our model to predict the validation set data very accurately.
Now, imagine the points plotted in a 2-D space. Consider a decision boundary that separates the data into 2 classes very well. Now, the points which are far from the decision boundary can be easily classified into one of the groups. Now, think of those points which are very near to the decision boundary. Don’t worry! We’ll discuss it in more detail. Consider the two points A and B as shown in the figure given below:
Now we can clearly see that point B belongs to the class of green dots as it is far from the decision line. But what about A? Which class it belongs to? You might be thinking that it also belongs to the class of green dots. But that’s not true. What if the decision boundary changes? See the figure given below:
Now if we consider the grey line as our decision boundary, point A is classified as a blue point. On the other hand, it is classified as a green point if we consider the thick red line to be our decision boundary. Now we are in trouble! To avoid this ambiguity, I would like to bring the concept of “margin boundaries”.
A margin boundary is a hyperplane that maximizes the margin between two classes. The decision boundary lies at the middle of the two margin boundaries. The two margin boundaries move till they encounter the first point of a class. To avoid error, we always maximize the width of the margin boundary. It will be discussed in further sections.
What is the decision rule in SVM?
Let us consider the vector w to be perpendicular to the decision boundary (note that margin boundaries and decision boundaries are parallel lines.) We have an unknown vector (point) u and we have to check whether it lies on the green side or the blue side of the line. This can be done using a decision rule that can help us to find its correct class. It is as follows:
Please note that ‘b’ in the above two equations is constant.
But we see that these two equations won’t be helpful till we have a single equation. To convert the above two equations into a single equation, we again define a variable y which can take a value of either 1 or -1. This variable y is multiplied to both the equations given above, here’s the result:
Now you see that the two equations turned out be the same when multiplied by y.
Now our final decision rule turns out to be as follows:
Determine the width of margin boundaries
As we already discussed at the beginning of the article that we want to maximize our width to avoid an error. So the time has come to find out the way for calculating the width of margin boundaries.
From the above equation, it is clear that the width is equal to the reciprocal of the magnitude of the vector w. Since 2 is a constant in the numerator, we can ignore it and keep the numerator equal to 1
Maximizing the Width
Now, we have already found the width in the above section. As I have already mentioned the width has to be maximized till we encounter any point of each of the classes. Now, to maximize the width, we actually have to minimize the following equation:
Now we will minimize it using the Lagrangian method which is as follows:
Once we differentiate it w.r.t. b and w, we get,
Now we have reached the end of this article but not completely. What we see in the above picture is that L has turned out to be a dot product of x_i and x_j.
NOw let’s implement SVM.
Implementation of SVM using sci-kit learn
Now its time for us to look for the implementation of support vector machines using the sklearn library:
First, we’ll introduce the dataset in our IDE:
Python Code:
Now you will notice that there are a lot of categorical columns which has to be converted to binary column:
loan["total_income"]=loan["ApplicantIncome"]+loan["CoapplicantIncome"]
#converting different categorical/object values to float/int here it is float value loan['Gender_bool']=loan['Gender'].map({'Female':0,'Male':1}) loan['Married_bool']=loan['Married'].map({'No':0,'Yes':1}) loan['Education_bool']=loan['Education'].map({'Not Graduate':0,'Graduate':1}) loan['Self_Employed_bool']=loan['Self_Employed'].map({'No':0,'Yes':1}) loan['Property_Area_bool']=loan['Property_Area'].map({'Rural':0,'Urban':2, 'Semiurban':1}) loan['Status_New']=loan['Loan_Status'].map({'N':0,'Y':1})
Now our task is to fill the missing values in the columns such that all categorical values are filled with mode and continuous values with the median:
#filling missing values with mode for caTEGORICAL TYPE of columns loan['Married_bool']=loan['Married_bool'].fillna(loan['Married_bool'].mode()[0]) loan['Self_Employed_bool']=loan['Self_Employed_bool'].fillna(loan['Self_Employed_bool'].mode()[0]) loan['Gender_bool']=loan['Gender_bool'].fillna(loan['Gender_bool'].mode()[0]) loan['Dependents']=loan['Dependents'].fillna(loan['Dependents'].mode()[0]) loan["Dependents"].replace("3+", 3,inplace=True) loan["CoapplicantIncome"]=loan["CoapplicantIncome"].fillna(loan["CoapplicantIncome"].median()) loan["LoanAmount"]=loan["LoanAmount"].fillna(loan["LoanAmount"].median()) loan["Loan_Amount_Term"]=loan["Loan_Amount_Term"].fillna(loan["Loan_Amount_Term"].median()) loan["Credit_History"]=loan["Credit_History"].fillna(loan["Credit_History"].mode()[0])
Now we will drop the categorical columns which are not in binary form:
loan.drop("Gender", inplace=True, axis=1) loan.drop("Married", inplace=True, axis=1) loan.drop("Education", inplace=True, axis=1) loan.drop("Self_Employed", inplace=True, axis=1) loan.drop("Property_Area", inplace=True, axis=1) loan.drop("Loan_Status", inplace=True, axis=1) loan.drop("ApplicantIncome", inplace=True, axis=1) loan.drop("CoapplicantIncome", inplace=True, axis=1)
Now your data set looks like this:
.png)
Now our next task is to divide the data set into training set and validation set in the ratio of 80:20:
X_train, X_test, y_train, y_test = train_test_split(x_feat,y_label, test_size = 0.2) Now it's time to train our model with SVM: clf = svm.SVC(kernel='linear') clf.fit(X_train, y_train) y_pred = clf.predict(X_test)
Here in the above code, we used a linear kernel. For the above code the accuracy, precision and recall turn out to be as follows:
from sklearn import metrics print("Accuracy:",metrics.accuracy_score(y_test, y_pred)) print("Precision:",metrics.precision_score(y_test, y_pred))
Output is:
Accuracy: 0.7642276422764228 Precision: 0.7363636363636363 Recall: 1.0
Now we can see that we have achieved an accuracy of 76.42% which is quite good and better than what we got when we trained the same dataset with logistic regression. You can check out the implementation of logistic regression on the same dataset here.
This was the end of my article. I hope you enjoyed reading out this article. This was all about the introduction toward support vector machines.
About the Author:
Hi! I am Sarvagya Agrawal. I am pursuing B.Tech. from the Netaji Subhas University Of Technology. ML is my passion and feels proud to contribute to the community of ML learners through this platform. Feel free to contact me by visiting my website: sarvagyaagrawal.github.io
The media shown in this article are not owned by Analytics Vidhya and are used at the Author’s discretion.
Leave a Reply Your email address will not be published. Required fields are marked * | https://www.analyticsvidhya.com/blog/2021/06/support-vector-machine-introduction/ | CC-MAIN-2022-27 | refinedweb | 1,551 | 57.98 |
Run embedded scripts with DHTML Object Model
Sorry - did search Google high and low, but could not find an answer to this question:
I have a web page with a <script></script> block containing a function Test().
I open this page with a WebBrowser control (Internet Explorer Automation) in my Application.
How do I execute Test()?
DarenThomas
Tuesday, January 20, 2004
If I remember correctly, you can just get hold of the document object from the WebBrowser control, QueryInterface it for IDispatch and then call your function using IDispatch::Invoke.
R1ch
Tuesday, January 20, 2004
hm... does this translate (in VB) into following code:
CallByName WebBrowser, "Test", VbMethod, Empty
(doesn't work, tried WebBrowser.Document as well...)
I am beginning to wonder if it's supported at all...
DarenThomas
Tuesday, January 20, 2004
Following python (in interactive shell) doesn't work either - I believe it does the same as above since I haven't run makepy for Microsoft Internet controls etc.
from win32com.client import *
ie = Dispatch("InternetExplorer.Application")
ie.Navigate("C:\Projects\Test\Test.html")
doc = ie.Document
doc.Test()
I'd have thought so, but I don't know VB I'm afraid.
You could also try the slightly more hacky method of:
WebBrowser.navigate "javascript:test()"
It looks like IDispatch is the right way to do it from C++
I don't know why CallByName doesn't work tho.
Once I wrote code which was calling JS functions in IE control. Unfortunately I can't find it now (never made it to production), but I remember that it was important for some reason to declare the IE object with events:
Private WithEvents oIE As InternetExplorer ' IE control
After that I was able to do
oIE.Test
which apparently used the IDispatch interface.
Alexander Chalucov ()
Tuesday, January 20, 2004
In javascript, you can call it from a gui with:
function test() {
}
</SCRIPT>.
Google for: javascript tutorial DOM function
Barry Sperling
Tuesday, January 20, 2004
the test method isn't on the browser object, but rather on the MSHTML object contained by the browser object.
so WebBrowser.Document.Test() should work, while WebBrowser.Test() should not.
mb
Tuesday, January 20, 2004
You can run things by using CallByName with the Window object. Here is a function from a class that I built to mirror the MS Script Control, but for the WebBrowser:
Public Function Run(ProcedureName As String, ParamArray Parameters() As Variant) As Variant
On Error GoTo ProcErr
Select Case UBound(Parameters())
Case -1
Run = CallByName(Window, ProcedureName, VbMethod)
Case 0
Run = CallByName(Window, ProcedureName, VbMethod, Parameters(0))
....
Here is also another function of the same class that makes it easy to expose internal objects (even if they're private classes) to the WebBrowser's script:
Public Sub AddObject( _
Name As String, _
Object As Object, _
Optional lFlags As WSRTAppObjFlagEnum = wscrObjKeepAlive)
If FlagSet(lFlags, wscrObjKeepAlive) Then
' This object is for the currenlty loaded URL only add it to the script if we can keep it.
ObjKeep Name, Object, lFlags
End If
If StateFlag(wscrStateLoaded) Then
' This object is not going to be kept alive, just add it to the script.
ObjAddToScript Name, Object
End If
End Sub
Wayne
Tuesday, January 20, 2004
Sorry, that last function "AddObject" doesn't do anything but call the real method which is this:
Private Sub ObjAddToScript( _
Name As String, _
Object As Object _
)
On Error GoTo ProcErr
Dim sCode$
sCode = _
"var " & Name & " = null;" & vbCrLf & _
"function _InitObj(oObj){ " & Name & " = oObj; }"
' NOTE: Doesn't matter which language is used to add the object,
' it will be available to both languages.
Window.execScript sCode, LANG_JS_STRING
Me.Run "_InitObj", Object
...
(The ObjKeep method simply caches the object in a collection so when the browser navigates to a new page, it can re-add it using ObjAddToScript.)
For the record: I followed the above linke to the C++ way of doing things. Translation to VB:
Call webBrowser.Document.Script.Test()
does the trick...
DarenThomas
Wednesday, January 21, 2004
Can this be done dynamically. I am writing an IE scripting tool against webbrowser and need to either find out why <SELECT> onchange events are not being fired or manually firing the javascript. I got it to work using the wb.document.script.nameofscript. But I do not want to hard code the script name.
Thoughts!
S Mummert
Wednesday, February 18, 2004
Would call by name not do it at that level?
CallByName(WebBrowser.Document.Script, "nameoffunc", vbWhutever, strParams)
Alex Butler
Thursday, April 29, 2004
Recent Topics
Fog Creek Home | https://discuss.fogcreek.com/joelonsoftware3/105221.html | CC-MAIN-2018-51 | refinedweb | 756 | 61.26 |
RITZ KEINERT DEPARTMENT OF MATHEMATICS IOWA STATE UNIVERSITY AMES, IA 50011 KEINERT@IASTATE.EDU
Contents 1. Introduction 2. Finding a Routine 3. Calling Fortran from Fortran Case Study 1: DGAMMA Case Study 2: DGESVD 4. Calling Fortran from C Case Study 1: DGAMMA Case Study 2: DGESVD Case Study 3: XERROR/XERBLA Case Study 4: DQAGS Case Study 5: CGAMMA Case Study 6: DGESVD Revisited 5. Calling Fortran from C++ Case Study 1: DGAMMA 2 3 4 5 6 8 12 14 16 19 21 23 26 27
Copyright (C) 1996 by Fritz Keinert (keinert@iastate.edu). This document may be freely copied and distributed as long as this copyright notice is preserved. An electronic copy can be found through the authors home page.
Virtually all scientic computation makes use of standard (canned) subroutines at some point. In this report, I have written up some general guidelines and a number of examples for linking Fortran subroutines to main programs written in Fortran, C, or C++. There used to be another section on calling Fortran subroutines from Matlab. I have decided to put that into a separate document because this is considerably harder and probably of less interest to most people. All examples use the CMLIB and LAPACK libraries. The report is intended as a resource for research activities at Iowa State, and as a supplementary reference for numerical analysis courses. The case studies were developed and tested on the three dierent hardware platforms that currently make up Project Vincent. Project Vincent is a large network of centrally administrated workstations at Iowa State University, patterned after Project Athena at MIT. With minor changes, the examples should carry over to settings outside of Project Vincent. I have kept things simple rather than comprehensive. Some of my assertions may not even be quite correct, if I felt that a complete description would just confuse beginners. A lot of things are repeated in each relevant place, because I expect people to just read the parts that they are interested in. The programming examples follow my particular tastes, which may or may not suit you. Feel free to adapt the examples to your style. For example, I strongly believe in declaring all variables in Fortran, and in prototyping all external subroutines in C. It is not strictly necessary, but it lets the compiler spot errors. If you want to reproduce the case studies, you will need access to CMLIB and LAPACK. On Project Vincent, the following statements should be in your .startup.tty le: add math add lapack If you are outside Project Vincent, see section Finding a Routine for more information on getting the source code for CMLIB or LAPACK. The case studies for calling Fortran from C are DGAMMA (double precision gamma function) from FNLIB, which is part of CMLIB. This is the simplest case: a function with one input argument, one return value. DGESVD (double precision singular value decomposition) from LAPACK: Multiple arguments, some of them arrays. XERROR/XERBLA (error message routines from LINPACK/LAPACK): String arguments. DQAGS (double precision quadrature routine) from QUADPACK, which is part of CMLIB: Subroutine passed as argument. CGAMMA (complex gamma function). Same as DGAMMA, but for complex numbers. DGESVD Revisited. More details and suggestions for advanced programmers. The sections on calling Fortran from Fortran and from C++ use a subset of the same examples. Each section requires that you understand the material from previous sections, and that you are already familiar with Fortran, C, and C++. Some important concepts are repeated frequently, for the benet of readers who only read part of this document.
Calling Fortran Subroutines: Finding a Routine 2. Finding a Routine Suppose you need a subroutine for a specic numerical task. Where do you start looking? I would try the following things, in this order: Look on Project Vincent. Wozu in die Ferne schweifen, wenn das Gute liegt so nah. If you nd a suitable subroutine in some locker on Project Vincent, it will be already compiled and ready to go. The Math Department supports two main numerical packages: CMLIB and LAPACK. Documentation for both packages can be accessed dont have CMLIB, you can get the necessary pieces from Netlib (see below). LAPACK is a linear algebra package intended to be ecient on many types of computer architecture. It can also be found on Netlib. CMLIB and LAPACK are the only two libraries illustrated here, but there are others in the math and mathclasses lockers. The Computation Center or other departments support other packages, for example the NAG library. Check olc answers for leads, try which-locker if you know the name of a package, look around the lockers of likely departments, or ask in a local newsgroup. Look in standard software repositories. The main ones I know of are Netlib, accessible through GAMS (Guide to Available Mathematica Software), accessible through. Whatever you nd there needs to be compiled rst. Ask for help. Ask a colleague, or post a question in a relevant newsgroup like sci.math.num-analysis. The turnaround time can be a little longer here, or you may not get a response, but you might get information you could not nd on your own.. Linking Fortran to Fortran is very straightforward. You just have to tell the compiler where the libraries are located. Only one example for each library is provided.
Case Study 1: DGAMMA This case study illustrates linking with CMLIB. CMLIB is located in the math locker. We want to call the double precision gamma function DGAMMA from FNLIB, which is part of CMLIB, declared as double precision function dgamma(x) double precision x ... This function calculates the gamma function (x). The Fortran program prog1.f is program prog1 * double precision x, y, dgamma * print*, enter x: read*, x y = dgamma(x) print*, gamma(x) = ,y end To compile and run the program: % f77 -o prog1 prog1.f /home/math/lib/arch/libcm.a % prog1 enter x: 2.718 gamma(x) = 1.56711274176688 An alternative way to specify linking with CMLIB is % f77 -o prog1 prog1.f -L/home/math/lib/arch -lcm I usually use the rst form for linking with a single library, the second form for linking with two or more libraries in the same directory..
Case Study 2: DGESVD This case study illustrates linking with LAPACK. LAPACK is located in the lapack locker.= 1 2 3 4 5 6
The program is program prog2 * parameter (m=2, n=3, lwork=20) parameter (minmn=min(m,n)) integer info double precision A(m,n), S(minmn), U(m,m), VT(n,n), work(lwork) data A /1,4,2,5,3,6/ * call dgesvd(a,a,m,n,A,m,S,U,m,VT,n,work,lwork,info) if (info .ne. 0) print*, DGESVD returned INFO =, info call print_matrix(U =,U,m,m) call print_matrix(Sigma =,S,1,minmn) call print_matrix(VT =,VT,n,n) * end subroutine print_matrix(label,A,m,n) character*(*) label integer m, n, i, j double precision A(m,n) * print*, label do 10 i = 1, m write (*,(666f20.15)) (A(i,j),j=1,n) continue
10 *
end All of LAPACK is based on the BLAS (Basic Linear Algebra Subroutines). I have put the BLAS in a separate library so that you can use a dierent set of BLAS if you have one. You always need to link with both libraries. We compile and run the program with % f77 -o prog2 prog2.f -L/home/lapack/lib/arch -llapack -lblas % prog2
This section contains some general guidelines on calling Fortran subroutines from a C main program. It is assumed that you already know C pretty well, and that you know how to do compiling and linking of Fortran or C programs.. A Fortran routine is known to C by its name in lowercase, with underscore appended Example: The Fortran subroutine DGAMMA is known to the C compiler as dgamma . All parameters need to be passed by reference Fortran passes all arguments by reference. This means that the addresses of the actual arguments are passed to the subroutine. Anything the subroutine does to its arguments will aect the values of these variables in the main program. C passes all arguments by value. This means that only copies of the values of the arguments are passed. Anything the subroutine does to its arguments will not aect the main program. The arguments are destroyed at the end of the subroutine call. To imitate a call by reference from C, you need to pass pointers to the arguments, instead of the arguments itself. However, an array variable in C is already a pointer to the corresponding storage location, and you dont have to apply the referencing operator & a second time. Note that this means that you can never pass a constant number directly to a Fortran subroutine. You have to assign the constant to a variable rst, then pass the address of the variable. Strings are arrays of characters, so you dont have to apply the referencing operator &, but you also have to pass the length of the string as a separate integer. See case study XERROR/XERBLA. Example: A call by value in C: float f(float x) {...} main() { float x, y; y = f(x); } A call by reference in C: float f(float *x) {...} main() { float x, y; y = f(&x); } The Fortran subroutine subroutine sub(x,y,z) real x, y(10), z(10,10) ... is called from C as
main() { float x, y[10], z[10][10]; sub_(&x,y,z); } The declarations float *x and float x[] mean exactly the same: x is a pointer to a oating-point number. I use the rst notation for x because it is a scalar, and the second for y because it is an array, but that is only for my personal benet; the compiler doesnt care. I use the declaration float (*z)[] because it works, while the more natural float z[][] does not. See the case study DGESVD Revisited for more information on this. Arrays in Fortran are stored by column, arrays in C are stored by row. Subscript numbering in Fortran starts with 1, in C it starts with 0 A two-dimensional array
is strung out into a vector in memory. Fortran does it by columns, that is a11 , a21 , . . . , am1 , a12, a22 , . . . C does it by rows, that is a11 , a12, . . . , a1n, a21 , a22 , . . . If you pass a two-dimensional array between Fortran and C, one language will see the transpose of what the other language sees. There are many ways to deal with this problem, depending on the individual circumstances: It may not matter at all. You may be able to rearrange calculations, for example by using (AB)T = B T AT . Some LINPACK or LAPACK subroutines have a switch to make them use AT instead of A. Maybe you really do need to transpose the array. Do whatever works, just be aware of this problem. There is no problem with one-dimensional arrays. Arrays of dimension higher than 2 are left as an exercise for the reader. Also, keep in mind that the rst entry in a Fortran array is x(1) or A(1,1), in C it is x[0] or A[0][0]. Do all your input/output in C More specically, in decreasing order of preference: The ideal case is when there are no read/write statements at all in the Fortran subroutine. The next best case is when there are some read/write statements in the subroutine, but they are not used (for example error messages that can be turned on or o). The program will run ne, but the presence of the Fortran I/O statements will cause the loader to link in 400K of Fortran I/O library code. If you have the Fortran source code, you may be able to comment out the read/write statements. Many subroutine libraries channel all I/O through a single routine; you may be able to rewrite that one routine in C. I have done this for XERROR from CMLIB and XERBLA from LAPACK; see case study XERROR/XERBLA.
10
If you absolutely must have I/O in both main program and subroutine, use keyboard input/screen output, or use a dierent disk le in each language. That should still work ok. Do NOT read or write to the same disk le from both Fortran and C, unless it is closed and re-opened in between. You will get garbled input or output. You might even uncover some interesting aws in Unix and get a garbled hard disk (unlikely, but conceivable). Use the Fortran compiler for linking There are two steps involved in producing a working program: compiling and linking. Unless you suppress linking with the -c compiler switch, the compiler will invoke the linker automatically and specify the location of the system libraries the linker needs. For whatever internal reasons, the Fortran compiler knows where the C libraries are, but the C compiler does not know where the Fortran libraries are. You could gure out the exact location of all those libraries and specify them explicitly, but why bother? Just use the Fortran compiler for the linking step.
11 Corresponding Declarations
The following table gives a quick reference to corresponding data type declarations in Fortran and C. For explanations, read this overview section and/or the appropriate case study. Common blocks are mentioned in this table for completeness, but they are not mentioned anywhere else in the text; they dont usually come up in calling library subroutines. Fortran C typedef struct { float re, im; } complex; typedef struct { double re, im; } double_complex;
integer x, y(10), z(10,20) int x, y[10], z[20][10]; real float real*8 double complex complex complex*16 double_complex character c character*10 s real function f(x,y,z,c,s,f2) real x,y(10),z(10,20), r character c character*20 s real f2 external f2 r = f(x,y,z,c,s,f2) char c, s[11]; extern float f_(float *x, float y[], float (*z)[], char *c, char s[], float f2_(float *xx), int length_of_s); extern float f2_(float *x); main() { float x, y[10], z[20][10], r; char c, s[21]; r = f_(&x,y,z,&c,s,f2_,strlen(s)); } extern struct {float a; float b[10]; int c; } _BLNK__; extern struct {float a; float b[10]; int c; } cause_;
12
Case Study 1: DGAMMA This case study illustrates the simplest case: calling a simple function with one scalar argument and one return value. We want to call the double precision gamma function DGAMMA from FNLIB, which is part of CMLIB, declared as double precision function dgamma(x) double precision x ... This function calculates the gamma function (x). The C program dgamma.c is #include <stdio.h> #include <math.h> extern double dgamma_(double *x); int main(int argc, char **argv) { double x, y; printf("enter x: "); scanf("%lf",&x); y = dgamma_(&x); printf("gamma(x) = %20.15lg\n",y); return 0; } To compile and run this program on a DecStation or SGI: % f77 -o dgamma dgamma.c /home/math/lib/arch/libcm.a % dgamma enter x: 2.718 gamma(x) = 1.56711274176688 The Dec Alpha Fortran compiler is brain damaged and needs to be told explicitly that this program has NO FORtran MAIN program: % f77 -nofor_main -o dgamma dgamma.c /home/math/lib/arch/libcm.a % dgamma enter x: 2.718 gamma(x) = 1.56711274176688 An alternative way to specify linking with CMLIB is % f77 [-nofor_main] -o dgamma dgamma.c -L/home/math/lib/arch -lcm I usually use the rst form for linking with a single library, the second form for linking with two or more libraries in the same directory. The brackets around -nofor main mean: put this argument in if you are on an Alpha, otherwise leave it out.. You could compile gamma.c with the C compiler rst, for esthetic reasons or because you want to use a dierent C compiler, but you should still use Fortran to do the linking: % cc -c dgamma.c % f77 [-nofor_main] -o dgamma dgamma.o /home/math/lib/arch/libcm.a
13
There is one more thing we can improve. The DGAMMA subroutine does not print anything, but it contains a call to a standard routine XERROR which produces error messages if necessary. That routine and thus the entire Fortran I/O library get linked in. We can reduce the size of the compiled program by about 400K by rewriting XERROR in C. See case study XERROR/XERBLA.
14
Case Study 2: DGESVD This case study illustrates the use of one- and two-dimensional arrays as subroutine arguments. Fortr= The program is #include <stdio.h> #include <math.h> extern void dgesvd_(char *jobu, char *jobvt, int *m, int *n, double (*A)[], int *lda, double (*S)[], double (*U)[], int *ldu, double (*VT)[], int *ldvt, double work[], int *lwork, int *info); void print_matrix(char *label, double (*A)[], int m, int n) { int i, j; #define A(i,j) (*((double *)A + n*i + j)) printf("%s\n",label); for (j=0; j<n; j++) { for (i=0; i<m; i++) printf("%20.15lf ",A(i,j)); printf("\n"); } printf("\n"); } #define max(a,b) ((a) > (b) ? (a) : (b)) #define min(a,b) ((a) < (b) ? (a) : (b)) #define M 2 #define N 3 #define LWORK max(3*min(M,N)+max(M,N),5*min(M,N)-4) 1 2 3 4 5 6
15
int main(int argc, char **argv) { int m = M, n = N, lwork = LWORK, info; double A[N][M] = {1,4,2,5,3,6}, S[min(M,N)][1], U[M][M], VT[N][N], work[LWORK]; char job = a; dgesvd_(&job, &job, &m, &n, A, &m, S, U, &m, VT, &n, work, &lwork, &info); if (info != 0) fprintf(stderr,"DGESVD returned info = %d\n",info); print_matrix("U =",U,M,M); print_matrix("Sigma =",S,min(M,N),1); print_matrix("VT =",VT,N,N); return(0); } Observe that we are entering AT as data, which is seen as A by DGESVD. Fortran returns U , V T in Fortran ordering, which is seen as U T , V by the C main program. Subroutine print matrix actually prints the transpose of its argument, so the printout looks correct. print matrix is explained in more detail in case study DGESVD Revisited. I have declared S as a two-dimensional array so that I could call print matrix on it. Normally, S should be one-dimensional. I use the declaration double (*A)[] for a two-dimensional array because it works, while the more natural double A[][] does not. See case study DGESVD Revisited for more information on this. Again, we include a rewritten error message routine to make the compiled program shorter (see case study XERROR/XERBLA). We compile and run the program with % f77 [-nofor_main] -o dgesvd dgesvd.c xerbla.c -L/home/lapack/lib/arch -llapack -lblas % dgesvd
16
Case Study 3: XERROR/XERBLA This case study illustrates the passing of strings as parameters. It is actually backwards compared to the other examples, because we are writing a C routine to be called from a Fortran library program. It still illustrates the point, and is more useful than a contrived example going the other way. Subroutine packages typically channel all error messages through a single subroutine, so that messages can be turned o or diverted at a central place. If we rewrite this single subroutine in C, the entire Fortran I/O code does not need to be linked in, which saves about 400K of storage per compiled program. I have done this here for routines XERROR from CMLIB, and XERBLA from LAPACK. The denition of XERROR is subroutine xerror(messg,nmessg,nerr,level) character*72 messg integer nmessg, nerr, level where messg is the error message itself, nmessg is the number of characters in the message, nerr the error number, and level the severity (2=fatal, 1=recoverable, 0=warning, -1=warning which is printed only once, even if XERROR gets called several times for this error). Strings in both Fortran and C are arrays of characters. The dierence is that Fortran keeps track of the actual length of a string in a separate (invisible, internal) integer variable, while C indicates the end of a string by a zero byte. One consequence of this is that a C string must be one character longer than a corresponding Fortran string, to hold the zero byte. The other consequence is that Fortran must pass the length of the string as an invisible argument. This argument is passed by value, not by reference, and goes at the end of the argument list. If there are several string arguments, all length arguments go at the end, in the same order as the strings. In this particular example the string length is passsed twice, in the hidden internal variable messg length and also explicitly in nmessg. The subroutine could use either one. The unnecessary nmessg argument probably dates back to the days when it was a good policy not to assume too much about the intelligence of a Fortran compiler. My XERROR routine does not do all the things the original does, and the formatting looks dierent, but it does get the error message printed. Implementing more features is left as an exercise for the reader. #include <stdio.h> void xerror_(char *messg, int *nmessg, int *nerr, int *level, int messg_length) { char message[73]; int i; /* copy the message over into a C string */ for (i = 0; i < *nmessg; i++) message[i] = messg[i]; message[*nmessg] = \0; switch (*level) { case 2: fprintf(stderr,"Fatal Error "); break; case 1: fprintf(stderr,"Recoverable Error "); break; default: fprintf(stderr,"Warning "); } fprintf(stderr,"%d\n%s\n",*nerr,message);
17 exit(1); }
Note how I am carefully copying over the message into a separate C string. The quick and dirty way is messg[*nmessg] = \0. However, if the error message is the full 72 characters long, this would write a zero byte over the following storage location, with unknown consequences. It is unlikely that this will happen, and even if it does happen, it is unlikely that this would lead to any problems. Still, an error of this sort is virtually impossible to track down, and I do believe in defensive programming. Also, if you are paranoid: little errors like this have been exploited repeatedly by hackers. Lets link DGAMMA rst with the original XERROR routine from CMLIB, and test it on a DecStation: % f77 -o dgamma dgamma.c /home/math/lib/arch/libcm.a % ls -l dgamma -rwxrwxr-x 1 keinert 478436 Nov 1 11:55 dgamma* % dgamma enter x: -1 FATAL ERROR IN... DGAMMA X IS A NEGATIVE INTEGER ERROR NUMBER = 4 JOB ABORT DUE TO FATAL ERROR. 0 ERROR MESSAGE SUMMARY MESSAGE START NERR DGAMMA X IS A NEGAT 4 With xerror.c % f77 -o dgamma dgamma.c xerror.c /home/math/lib/arch/libcm.a % ls -l dgamma -rwxrwxr-x 1 keinert 81492 Nov 1 11:56 dgamma* % dgamma enter x: -1 Fatal Error 4 DGAMMA X IS A NEGATIVE INTEGER The program is now about 400K shorter, and still prints error messages. To be honest, this trick is not really needed any longer on Alphas and SGIs. The newer machines all support dynamic linking. On a DEC Alpha, the le length can be shortened from 49152 to 32768, which isnt as dramatic a dierence; on an SGI, the le size goes from 31196 to 24004. I assume there is still a dierence of 400K or more of main memory during execution, when the dynamic libraries get loaded. Subroutine XERBLA from LAPACK is much simpler, and reproduced in its entirety (minus comments): subroutine xerbla(srname,info) character*6 srname integer info * write(*,fmt =9999) srname, info stop format( ** On entry to ,a6, parameter number ,i2, had an illegal value) end #include <stdio.h> void xerbla_(char srname[6], int *info, int srname_length) { char name[7];
LEVEL 2
COUNT 1
9999
The C translation is
18
for (i = 0; i < 6; i++) name[i] = srname[i]; name[6] = \0; fprintf(stderr," ** On entry to %6s parameter number %d had an illegal value\n", name,*info); exit(1); } Normally, we would take the string length from srname length, but in this particular case the LAPACK documentation states that all subroutine names have exactly 6 characters.
19 Case Study 4: DQAGS This case study illustrates the passing of a subroutine name as an argument. bPACK, which is part of CMLIB. For our purposes it suces to know that on input we have to specify the function f, integration limits a, b and the requested absolute and/or relative accuracy epsabs, epsrel. On return, we get the approximate integral result, the estimated actual error abserr, and the number of function evaluations done neval. Whether the function to be integrated is written in Fortran or C makes no dierence. You just have to make sure that a C function does argument passing by reference. As an example, lets integrate ex from 0 to 1 twice, once using a Fortran function and once using a C function. #include <stdio.h> #include <math.h> extern void dqags_(double f(double *x), double *a, double *b, double *epsabs, double *epsrel, double *result, double *abserr, int *neval, int *ier, int *limit, int *lenw, int *last, int iwork[], double work[]); /* Fortran function */ extern double f1_(double *x); /* double precision function f1(x) double precision x f1 = exp(x) end */ /* C function */ double f2(double *x) { return exp(*x); } #define LIMIT 5 #define LENW 4*LIMIT int main(int argc, char **argv) { int neval, ier, limit=LIMIT, lenw=LENW, last, iwork[LIMIT]; double a = 0., b = 1.e0, epsabs = 1.e-10, epsrel = 0., result, abserr, work[LENW]; double exact = exp(1.e0) - 1; dqags_(f1_, &a, &b, &epsabs, &epsrel, &result, &abserr, &neval, &ier, &limit, &lenw, &last, iwork, work);
Calling Fortran From C if (ier > 0) { fprintf(stderr,"DQAGS returned IER = %d\n",ier); } printf("Fortran result = %20.15lf\n",result); printf("estimated error = %20.15lf\n",abserr); printf("actual error = %20.15lf\n",abs(exact - result)); printf("computed using %d function evaluations\n\n",neval); dqags_(f2, &a, &b, &epsabs, &epsrel, &result, &abserr, &neval, &ier, &limit, &lenw, &last, iwork, work); if (ier > 0) { fprintf(stderr,"DQAGS returned IER = %d\n",ier); } printf("C result = %20.15lf\n",result); printf("estimated error = %20.15lf\n",abserr); printf("actual error = %20.15lf\n",abs(exact - result)); printf("computed using %d function evaluations\n\n",neval); return(0); }
20
We compile and run it with % f77 [-nofor_main] -o dqags dqags.c f1.f xerror.c /home/math/lib/arch/libcm.a % dqags Fortran result = 1.718281828459046 estimated error = 0.000000000000019 actual error = 0.000000000000000 computed using 21 function evaluations C result = 1.718281828459046 estimated error = 0.000000000000019 actual error = 0.000000000000000 computed using 21 function evaluations
21
Case Study 5: CGAMMA This case study illustrates the use of complex numbers. This is not quite so simple. You should skip this example unless you really need to use complex numbers. Routine CGAMMA is the complex equivalent of DGAMMA, that is, the gamma function (x) for complex arguments. It is declared as complex function cgamma(x) complex x ... Note that CGAMMA is in single precision; I was not able to nd a public domain double precision complex gamma function. The main problem is that C does not have complex numbers. As far as storing them goes, that is easy to x: typedef struct { float re, im; } complex; typedef struct { double re, im; } double_complex; Now you can have declarations like complex x, even arrays like complex x[10], and they will be stored like Fortran complex numbers. You still dont get complex arithmetic. If you want to multiply complex numbers, you have to write your own multiplication routine, for example complex cproduct(complex x, complex y) { complex z; z.re = x.re * y.re - x.im * y.im; z.im = x.re * y.im + x.im * y.re; return z; } Now you can program z = x y as z = cproduct(x,y), and similarly for other arithmetic operations. This notation gets tedious very fast. If you really want to work with complex numbers in C, it may be worthwhile learning C++. C++ lets you dene arithmetic operations on new data types using standard notation, and I am sure this has been done already for complex numbers. An even bigger problem occurs with functions that return a complex number, like CGAMMA. At rst, the interface appears straightforward, just a minor modication of the DGAMMA example: #include <stdio.h> #include <math.h> typedef struct { float re, im; } complex; extern complex cgamma_(complex *x); int main(int argc, char **argv) { complex x, y; printf("enter x: "); scanf("%f%f",&(x.re),&(x.im)); y = cgamma_(&x); printf("gamma(x) = %10.7g + i %10.7g\n",y.re,y.im); return 0; } This program compiles and runs, but produces garbage. In Fortran, functions can return only scalars. Real scalars are returned in one particular CPU register, complex scalars use two registers.
22
In C, functions can return scalars, pointers, structures, maybe even arrays. Anything that ts into one CPU register (a real number, integer, character, or pointer) is returned in the same CPU register that Fortran uses. Anything longer is returned on the stack. The one place where these conventions clash is complex numbers: Fortran returns a complex number in two registers, C uses the stack. I dont see how you could access a Fortran complex return value from C without using assembly language. The best easy solution I came up with is to create a Fortran interface routine CGAMMA2 which converts a function return value into a subroutine argument: subroutine cgamma2(x,y) complex x, y, cgamma y = cgamma(x) end Now we can get to the return value y from C: #include <stdio.h> #include <math.h> typedef struct { float re, im; } complex; extern void cgamma2_(complex *x, complex *y); int main(int argc, char **argv) { complex x, y; printf("enter x: "); scanf("%f%f",&(x.re),&(x.im)); cgamma2_(&x,&y); printf("gamma(x) = %10.7g + i %10.7g\n",y.re,y.im); return 0; } To compile and run % f77 [-nofor_main] -o cgamma cgamma.c cgamma2.f xerror.c /home/math/lib/arch/libcm.a % cgamma enter x: 2 1 gamma(x) = 0.6529655 + i 0.343066
23
Case Study 6: DGESVD Revisited This case study deals with advanced topics. Most users should skip this section. Two-Dimensional Arrays Lets look at the reason why a two-dimensional array argument is declared as double (*A)[] instead of the more natural double A[][], and why my print matrix subroutine in case study DGESVD is so messy. We need to consider how array subscripting is implemented. Two-dimensional arrays in C are stored by rows. That means that if A is declared as double A[5][10]; then A[i][j] refers to the location (10*i+j) from the start. In order to do address calculations correctly, the compiler needs to know the second dimension of A. Similarly, the Fortran compiler needs to know the rst dimension. When a two-dimensional array is passed to a subroutine, the dimension information must be made available to the subroutine. Fortran allows dimensions to be passed as a parameters, as in subroutine sub(A,m,n) integer m, n real A(m,n) As far as I know, neither C nor C++ let you do this. The only solution is to do your own address arithmetic. In print matrix, I am doing that in a macro, to keep the code halfway readable. As far as declaring the argument type goes, C rejects double A[][] since it contains no dimension information. It would accept double A[][10] (only the second dimension is essential), but then you could only pass arrays with a second dimension of 10. Some experimentation determined that the compiler will accept the notation double (*A)[], which means that A is a pointer to an array of doubles. A Fancier Interface Lets assume you want to do a whole bunch of work with routines from some Fortran library. It would be nice to hide all the messy details in some interface routines once and for all. Here is how I would go about it. Note the similarity with the Matlab interface in my other document Calling Fortran from Matlab. After looking at a couple of matrix packages for C and C++, I decided that I like Matlabs approach best. My matrix data structure is a stripped-down version of what Matlab uses internally. All sizing information is contained in the matrix itself, and matrices (in particular working storage) can be created and destroyed as needed. These matrices are stored by columns, so we can interface them more easily to Fortran. My interface routine dgesvd takes 4 parameters: input matrix A, output matrices U, S, VT. It takes care of setting up the other 10 parameters dgesvd requires. In detail, it does the following: extract dimension information from the arguments create working storage call dgesvd destroy working storage I will let the code speak for itself: #include <stdio.h> #include <math.h> #include <stdlib.h> #define max(a,b) ((a) > (b) ? (a) : (b)) #define min(a,b) ((a) < (b) ? (a) : (b)) typedef struct { int m, n; double *data; } matrix; matrix *create_matrix(int m, int n) { matrix *M;
24
M = (matrix *) malloc(sizeof(matrix)); if (M == NULL) { fprintf(stderr,"Memory allocation error\n"); exit(1); } M->m = m; M->n = n; M->data = (double *) malloc(m * n * sizeof(double)); if (M->data == NULL) { fprintf(stderr,"Memory allocation error\n"); exit(1); } return M; } void destroy_matrix(matrix *M) { free(M->data); free(M); } void print_matrix(char *label, matrix *A) { int i, j; #define A(i,j) (*(A->data + (A->m)*j + i)) printf("%s\n",label); for (i = 0; i < A->m; i++) { for (j = 0; j < A->n; j++) printf("%20.15lf ",A(i,j)); printf("\n"); } printf("\n"); } extern void dgesvd_(char *jobu, char *jobvt, int *m, int *n, double A[], int *lda, double S[], double U[], int *ldu, double VT[], int *ldvt, double work[], int *lwork, int *info); int dgesvd(matrix *A, matrix *U, matrix *S, matrix *VT) { char job = a; int m = A->m, n = A->n, lwork = 5*max(m,n), info; matrix *work; work = create_matrix(lwork,1); dgesvd_(&job,&job,&m,&n,A->data,&(A->m),S->data,U->data,&(U->n), VT->data,&(VT->n),work->data,&lwork,&info);
25
destroy_matrix(work); return(info); } #define M 2 #define N 3 int main(int argc, char **argv) { double data[] = {1,4,2,5,3,6}; matrix *A = create_matrix(M,N), *U = create_matrix(M,M), *VT = create_matrix(N,N), *S = create_matrix(min(M,N),1); int info; A->data = data; print_matrix("A =",A); info = dgesvd(A, U,S,VT); if (info != 0) fprintf(stderr,"DGESVD returned info = %d\n",info); print_matrix("A =",A); print_matrix("U =",U); print_matrix("Sigma =",S); print_matrix("VT =",VT); return(0); } The program compiles and runs like before.
26
This section describes how to call a Fortran subroutine from C++. It is very rudimentary because I know very little about C++. One of these days I hope to expand the section and/or integrate it better with the section on C, but dont hold your breath. It is assumed that you fully understand the section Calling Fortran from C. If you want to reproduce the case study in this section, you will need access to CMLIB and the Gnu C++ compiler. On Project Vincent, the following statements should be in your .startup.tty le: add math add gcc Documentation for CMLIB can be found through. The ocial documentation for the Gnu C++ compiler is a hypertext document available from inside the Emacs editor. You can access it by pressing Ctrl-U Ctrl-H i in Emacs, and giving /home/gcc2/info/gcc.info as the document name. There is a man page for C++, but it states that it may be out of date and will not be updated any more. Take your chances if you want. All the guidelines from the section Calling Fortran from C apply also to C++, and there is one new one: Wrap extern "C" {...} around the extern statements referring to Fortran or plain C One of the main features of C++ is overloading. This means that several subroutines can share the same name, as long as they have dierent types of arguments. The compiler will gure out from the arguments which of the routines you mean. This is not unlike the situation in Fortran, where y = exp(x) invokes the single precision, double precision, or complex exponential function, depending on the type of x. In order to maintain its sanity, the C++ compiler internally appends a string indicating the argument types to each subroutine name. For the Fortran interface, you need to turn this feature o. You do this by declaring your Fortran or plain C subroutines inside an extern "C" {...} environment. See case study DGAMMA for an example.
27
Case Study 1: DGAMMA This case study illustrates mainly the linking step. For details on the interface, see section Calling Fortran from C. As an example, we use the simplest case: calling a simple function with one scalar argument and one return value. We want to interface the double precision gamma function DGAMMA from FNLIB, which is part of CMLIB, declared as double precision function dgamma(x) double precision x ... This function calculates the gamma function (x). The C++ program dgamma.C is #include <stream.h> #include <math.h> extern "C" { extern double dgamma_(double *x); } int main(int argc, char **argv) { double x, y; cout << "enter x: "; cin >> x; y = dgamma_(&x); cout << "gamma(x) = " << y << "\n"; return 0; } This is basically the same as dgamma.c in section Calling Fortran from C, except for the wrapper around the extern declaration and the dierent I/O statements. Linking is a real headache. As mentioned before, the linking must be done by the Fortran compiler; unfortunately, the Fortran compiler knows where the system C libraries are, but not where the C++ libraries are. That is because the system C compiler and the system Fortran compiler come from the same company, the C++ compiler does not. The situation is complicated by the fact that the C++ libraries are located in directories with very long, complicated names. The following instructions are accuracte as of October 22, 1996. As soon as a new version of the Gnu C++ compiler gets installed, or the libraries get moved for some other reason, this part will break. If you need to do this from scratch, you can gure out the system libraries as follows: write a trivial C++ program hello.C #include <stream.h> int main(int argc, char **argv) { cout << "hello, world\n"; return 0; } and compile and link it verbosely. Only the -L and -l arguments in the ld statement at the end are important. On my DecStation at the time of writing, I got the following (after splitting the line across several lines, for readability): % g++ -v hello.C ... /afs/iastate.edu/project/gcc2/ultrix43a/lib/gcc-lib/mips-dec-ultrix4.3/2.7.2/ld
28
/usr/lib/cmplrs/cc/crt0.o -L/afs/iastate.edu/project/gcc2/ultrix43a/lib/gcc-lib/mips-dec-ultrix4.3/2.7.2 -L/usr/lib/cmplrs/cc -L/usr/lib/cmplrs/cc -L/afs/iastate.edu/project/gcc2/ultrix43a/lib /usr/tmp/cca240561.o -lg++ -lstdc++ -lm -lgcc -lc -lgcc % Anything in /usr/lib would be known to the Fortran compiler, and /afs/iastate.edu/project/gcc2 is the same as /home/gcc2, so we need to link with -L/home/gcc2/ultrix43a/lib/gcc-lib/mips-dec-ultrix4.3/2.7.2 -L/home/gcc2/ultrix43a/lib -lg++ -lstdc++ -lm -lgcc -lc -lgcc Now we are ready: % g++ -c dgamma.C % f77 -o dgamma dgamma.o /home/math/lib/dec/libcm.a -L/home/gcc2/ultrix43a/lib/gcc-lib/mips-dec-ultrix4.3/2.7.2 -L/home/gcc2/ultrix43a/lib -lg++ -lstdc++ -lm -lgcc -lc -lgcc % dgamma enter x: 2.718 gamma(x) = 1.56711 % Everything after the f77 is on one huge line; I have split it here only for readability. On an Alpha, the linking command is % f77 -nofor_main -o dgamma dgamma.o /home/math/lib/axp/libcm.a -L/home/gcc2/du3.2/lib/gcc-lib/alpha-dec-osf3.2/2.7.2 -L/home/gcc2/du3.2/lib -lg++ -lstdc++ -lm -lgcc -lc -lgcc On an SGI, the command is % f77 -o dgamma dgamma.o /home/math/lib/sgi/libcm.a -L/home/gcc2/sgi53/lib/gcc-lib/mips-sgi-irix5.3/2.7.2 -L/home/gcc2/sgi53/lib -lg++ -lstdc++ -lm -lgcc -lc -lgcc To keep from going insane, you probably want to put those library names in some environment variable, alias, or shell script. For example, you could put the following in your .cshrc.mine le: if (arch == "dec") then setenv GCCLIBS "-L/home/gcc2/ultrix43a/lib/gcc-lib/mips-dec-ultrix4.3/2.7.2 -L/home/gcc2/ultrix43a/lib -lg++ -lstdc++ -lm -lgcc -lc -lgcc" else if (arch == "axp") then setenv GCCLIBS "-L/home/gcc2/du3.2/lib/gcc-lib/alpha-dec-osf3.2/2.7.2 -L/home/gcc2/du3.2/lib -lg++ -lstdc++ -lm -lgcc -lc -lgcc" else setenv GCCLIBS "-L/home/gcc2/sgi53/lib/gcc-lib/mips-sgi-irix5.3/2.7.2 -L/home/gcc2/sgi53/lib -lg++ -lstdc++ -lm -lgcc -lc -lgcc" endif (again, all the setenv statements should be one line, not two). Then you can compile with % g++ -c dgamma.C % f77 [-nofor_main] -o dgamma dgamma.o /home/math/lib/dec/libcm.a $GCCLIBS On Alphas running Digital UNIX 3.2D, a C++ compiler from DEC is now available. Unfortunately, it is not as nicely integrated with Fortran as the standard C compiler. You still have to specify a bunch of libraries to link with. The linking sequence for our example is % cxx -c dgamma.C
29
Calling Fortran From C++ % f77 -nofor_main -o dgamma dgamma.o /home/math/lib/dec/libcm.a /usr/lib/cmplrs/cxx/_main.o -L/usr/lib/cmplrs/cxx -lcxxstd -lcxx -lexc % dgamma enter x: 2.718 gamma(x) = 1.56711. | https://ru.scribd.com/document/128151893/CALLING-FORTRAN-SUBROUTINES-FROM-FORTRAN-C-AND-C | CC-MAIN-2020-16 | refinedweb | 7,090 | 63.19 |
Download TextPrinter-Test.zip complete MonoDevelop solution to demonstrate TextPrinter Download QuickFont-Test.zip complete MonoDevelop solution to demonstrate QuickFont Download TextureLib-Test.zip complete MonoDevelop solution to demonstrate TexLib Download HandCrafted.zip complete MonoDevelop solution to demonstrate TextRenderer Ver. 1 Download TextRenderer-Test.zip complete MonoDevelop solution to demonstrate TextRenderer Ver. 2 Download FreeType-Test.zip complete MonoDevelop solution to demonstrate FtFont Ver. 1 and Ver. 2 Download FreeTypeDynamic-Test.zip complete MonoDevelop solution to demonstrate FtFont Ver. 3
TextPrinter
QuickFont
TexLib
TextRenderer
FtFont
This article shall help to get a quick overview about the text rendering options for OpenGL/OpenTK, espeially for the MONO/.NET programming languages. I want to share my findings and help programmers, which are looking for a solution, that fits their needs.
There are two basic approaches:
Let's start with a short discussion about the INNOVATIVE approach: This method requires a powerful GPU and a considerable amout of texture buffer memory on the one hand but it relieves the CPU remarkable on the other hand. Currently there is exactly one known implementation: GLyphy. IMO this will definitively be the approach of the future. Future because GLyphy detected various implementation errors in Mesa, almost all video drivers and pixel shaders it has been tested for. And they have to be fixed before it can be used widely.
Now let's go back to the CONVENTIONAL approach: There are thee practices:
The practice 1.a. (text to final texture bitmap) requires DrawString() and MeasureString() methods of the graphics context, used to render the text-to-display.
DrawString()
MeasureString()
Pros: All that's needed is provided by the Windows GDI or the X11 font server. Any system font can be used.
Cons: The quality depends on the providerd DrawString() and MeasureString() method implementation.
Description: On Windows the System.Drawing.Graphics class implementation produces excellent output quality. On X11 the Mono implementation of the System.Drawing.Graphics class produces output quality in a wide range:
System.Drawing.Graphics
On X11 the Mono wrapper for Pango or Cairo's Pango calls could be a good alternative to Mono's System.Drawing.Graphics namespace (Windows GDI replica). Cairo offers Cairo.Context.ShowText() and Cairo.Context.TextExtents() as an equivalent to System.Drawing.Graphics.DrawString() and System.Drawing.Graphics.MeasureString(). But i didn't find code "ready to use" that implements the required functionality for application in the context of OpenTK.
Cairo.Context.ShowText()
Cairo.Context.TextExtents()
System.Drawing.Graphics.DrawString()
System.Drawing.Graphics.MeasureString()
The practice 1.b. (glyphs to intermediate texture bitmap, excerpts to final texture bitmap) is a little bit "reinventing the wheel". Because to render the glyphs of a font to an (intermediate) bitmap is the same thing that Windows GDI or X11 font server already do.
Pros: Absolute control over the the whole text rendering chain (glyphs, texture, blending). Any quality and anny effect can be achieved. Any system font can be used.
Cons: A lot of effort for creation and management of the (intermediade) font bitmaps. As well as for extraction of glyph texture's excerpts and combination to a string. Program initialization requires font bitmap initialization and consumes runtime.
Description: This practice requires DrawString() and MeasureString() method as well, but the produced font bitmaps can be post-processes to achieve a specific quality or effect. Drawbacks of the MONO implementation of the System.Drawing.Graphics class can be counterbalanced.
On X11 a FreeType based text drawing should be used instead of Mono's System.Drawing.Graphics namespace (Windows GDI replica).
The practice 1.c. (bitmap-font to intermediate texture bitmap, excerpts to final texture bitmap) is similar to practice 1.b, but it doesn't provide the same fonts as the Windows GDI or X11 font server already do - it provides fonts from specific bitmap font files. This practice is typically used by games.
Pros: Absolute control over the whole text rendering chain (glyphs, texture, blending). Any quality and anny effect can be achieved.
Cons: A lot of effort for creation and management of the font bitmaps. As well as for extraction of glyph texture's excerpts and combination to a string. Specific font files are required. Program initialization requires font bitmap initialization but is much faster than 1.b.
Description: The creation of font bitmaps can be completely separated (by time, by resources, by location) from their usage. Artificial or texture fonts are easy to achieve. Most of the fonts are monospaced, but proportional fonts are possible - they need the glyph widths in addition to the bitmap provided by the font file.
I prepared four sample solutions, that cover the practices 1.a., 1.b. and 1.c., all with MonoDevelop 5.0.1 on Mono 3.8.0 and .Net 4.0. The OpenTK library is assembly version 1.1.0.0, the Mesa library is 10.3.7:
OpenTK.Graphics.TextPrinter
TextureFont
I've updated MonoDevelop 5.0.1 to MonoDevelop 5.10 to overcome the frequent debugger crashes.
I've found the 'missing piece' to use FreeType instead of Mono's GDI implementation (System.Drawing.Graphics class) within the articles Rendering FreeType/2 with OpenGL and Rendering AltNETType (= .NET FreeType port) with OpenGL. They led me to the FtFont class that joins ideas from OpenGL 3.3+ text..., freetype-gl and FreeType Fonts.
I added three further sample applications, that cover the practices 1.a and 1.b:
I've found the 'missing piece' to fix the 'unicode charachter' problems like '¬' instead of '€' within the SFML code. This library adds glyphs dynamically to the intermediate texture bitmap. I can recommend to read SFML-2.5.0\src\SFML\Graphics\Font.cpp, SFML-2.5.0\src\SFML\Graphics\Texture.cpp and SFML-2.5.0\src\SFML\Graphics\Text.cpp. For this solution i had to implement
SFML-2.5.0\src\SFML\Graphics\Font.cpp
SFML-2.5.0\src\SFML\Graphics\Texture.cpp
SFML-2.5.0\src\SFML\Graphics\Text.cpp
FtTexture
FtFontPage
FtGlyphTable
GlCommandSequence class
because dynamically glyph appending doesn't work during string drawing, if the intermediate texture is to enlarge in the case a new glyph bitmap to add. Moreover, the command pipeline offers the advantage to buffer the GL commands and avoid recalculation of the glyph texture bitmap excerpts in future.
The new sample application FreeTypeDynamic-Test implements all these techniques. This sample application is based on FreeTypeLineWise-Test. The new sample application The TextPrinter-Test sample is based on the obsolete TextPrinter from the OpenTK.Compatibility.dll. Obsolete doesn't mean the code is completely outdated. Instead the code currently lacks of a maintainer to keep it aligned with the OpenTK development progress. The output quality using TextPrinter is excellent.
OpenTK.Compatibility.dll
The next image shows a sample output with TextPrinter and seven coloured different fonts. The quality is excellent. No off-color pixel, no residues.
width="699" alt="Image 9" data-src="/KB/openGL/1057539/TextPrinterSample.png" class="lazyload" data-sizes="auto" data->
Instead using TextPrinter, the whole OpenTK community recommends to write an own text printer. But TextPrinter is still present, produces very good output quality and is open source (can be copied and used, even if it would be removed from OpenTK.Compatibility.dll in remote future).
The sample program HandCrafted-Test will take a closer look to the aspect 'write an own text printer'. The TextRenderer-Test sample is based on the TextRenderer technology of the HandCrafted-Test sample, but produces the same output as the TextPrinter-Test sample. The output quality is comparable to the obsolete TextPrinter class.
The next image shows a sample output with TextRenderer and seven coloured different fonts. The quality is excellent. No off-color pixel, no residues.
width="699" alt="Image 11" data-src="/KB/openGL/1057539/TextRendererSample2.png" class="lazyload" data-sizes="auto" data->
To use TextRenderer as an alternative to the obsolete TextPrinter, a lot of effort is required to speed up the text rendering.
The sample programs FreeTypeGlyphWise-Test and FreeTypeLineWise-Test will also take a closer look to the aspect 'write an own text printer' and are faster than the TextRenderer technology. The QuickFont-Test sample is based on the OpenTK QuickFont code. The output quality using QuickFont can reach from poor to very good - depending on the font. Some fonts produce residues, it seems QuickFont just uses too little space between the glyphs.
The next image shows a sample output with QuickFont and seven coloured different fonts. The quality is varying. No off-color pixel, but residues with DroidSerif-Bold, DejaVu Serif and luximr.
width="698" alt="Image 13" data-src="/KB/openGL/1057539/QuickFontSample.png" class="lazyload" data-sizes="auto" data->
During the work on the sample solution FreeTypeLineWise-Test i've been faced with residues as well and solved it with two extra scan-lines within the intermediate bitmap, one obove and one below the glyphs. I think the extraction of glyph texture's excerpt from the intermediate bitmap, that is done with glTexCoord2() on a coordinate range 0.0 ... 1.0, has insufficient precisition and causes the problems. Probably extra scan-lines within the intermediate bitmap would solve the problem for QuickFont too. The FreeTypeGlyphWise-Test sample is based on the Rendering FreeType/2 with OpenGL article's code. The output quality using FtFont class' first version is excellent.
The next image shows a sample output with FtFont class' first version and seven coloured different fonts. The quality is excellent. No off-color pixel, no residues.
width="698" alt="Image 15" data-src="/KB/openGL/1057539/FreeTypeGlyphWiseSample.png" class="lazyload" data-sizes="auto" data-> The FreeTypeLineWise-Test sample advances the FtFont class to support string rendering instead character rendering (that is mapping the text glyph texture by glyph texture to the viewport). The output quality using FtFont class' second version is excellent as well.
The next image shows a sample output with FtFont class' second version and seven coloured different fonts. The quality is excellent. No off-color pixel, no residues.
width="698" alt="Image 17" data-src="/KB/openGL/1057539/FreeTypeLineWiseSample.png" class="lazyload" data-sizes="auto" data-> The FreeTypeDynamic-Test sample advances the FreeTypeLineWise-Test sample and FtFont class to support dynamic glyph appending to the glyph texture. The output quality using FtFont class' third version is excellent as well.
The next image shows a sample output with FtFont class' third version and seven coloured different fonts. The quality is excellent. No off-color pixel, no residues. The glypht positioning has been reworked, now the text output complies with the font metrics.
As you can see, the 'unicode charachter' problems like '¬' instead of '€' is solved.
Kerning is switched off (because kerning cuts down the speed to the half approx.).
Shrink is switched on (which reduces the character spacing by 1/12 character advance).
Kerning and shrink are new parameters of the
public Size DrawString (string text, uint characterSizeInPPEm, bool bold, int startX, int startY,
bool applyKerning = false, bool shrink = false)
method.
width="698" alt="Image 19" data-src="/KB/openGL/1057539/FreeTypeDynamic-Test.png" class="lazyload" data-sizes="auto" data-> The TextureLib-Test sample is based in the OpenTK TexLib code. The output quality using TexLib can be very good - depending on the quality of the font bitmap file.
The next image shows a sample output with TexLib and seven different fonts. The quality is varying. The delivered bitmap font big-outline has cut-off ascenders and descenders, the other font bitmaps are created quick and dirty and show only black text. The acquisition of ready-to-use high quality font bitmap files seems to be a problem. The limitation of TexLib to 16 x 16 glyps is a restriction.
width="699" alt="Image 21" data-src="/KB/openGL/1057539/TextureLibSample.png" class="lazyload" data-sizes="auto" data-> The HandCrafted-Test sample compares the output quality of the TextRenderer class, the obsolete TextPrinter class and the QuickFont class best output quality.
The next image shows a sample output with TextRenderer (first red text line) compared to TextPrinter (second red line) and QuickFont (third red line). The quality is excellent. No off-color pixel, no residues.
width="677px" alt="Image 23" data-src="/KB/openGL/1057539/TextRendererSample.png" class="lazyload" data-sizes="auto" data->
The next image shows a detail with 600% zoom to compare the new TextRenderer class output (upper red string) against the obsolete TextPrinter class output (lower red string).
width="637px" alt="Image 24" data-src="/KB/openGL/1057539/TextRendererDetail3.png" class="lazyload" data-sizes="auto" data->
By the way: The green strings are TextRenderer class output as well.
One option could be to advance the TextRenderer class to be convenient and fast and well documented in the future, because it has much lesser code and produces the same quality as TextPrinter. Nevertheless TextPrinter is a good choice too.
Another option could be to develop a FreeType font class to render without the drawbacks of the Mono implementation of the System.Drawing.Graphics namespace (Windows GDI replica).
The FreeType FtFont implementation has been successfully and much faster than the TextRenderer implementation. Now i would favor to go on with the FtFont approach and fix 'unicode charachter' problems like '¬' instead of '€'.
There are alternatives to the FtFont class
Font
I've fixed the 'unicode charachter' problems like '¬' instead of '€'. And i've added kerning (optionally, because kerning cuts down the speed to the half approx.). I would definitively recommend to go on with the FreeType FtFont implementation.
This are performance figures, measured with a VMware® Player 7.1.2 build-2780323 virtual machine and two cores of a i7-5600U CPU.
The quality levels are
It has been tricky to makeTextRenderer class work properly - these are the findings, that led me to the final solution:
System.Drawing.Imaging.PixelFormat.Format32bppArgb
Color.FromArgb (0, 0, 0, 0)
Color.Black
Clear()
FillRectangle()
PostprocessForeground()
// Calculate the margin from the background RGB color components
// to the foreground RGB color components.
int deltaB = Math.Abs(bitmapData[index ] - targetColor.B);
int deltaG = Math.Abs(bitmapData[index + 1] - targetColor.G);
int deltaR = Math.Abs(bitmapData[index + 2] - targetColor.R);
// Determine the highest RGB color component margin.
int deltaM = Math.Max(deltaB, Math.Max (deltaG, deltaR));
// Apply the entire target color RGB component to prevent color falsification
// and the respectively other RGB color component margins proportional to brighten up.
bitmapData[index ] = (byte)Math.Min(255, targetColor.B + deltaR / 3 + deltaG / 3);
bitmapData[index + 1] = (byte)Math.Min(255, targetColor.G + deltaR / 3 + deltaB / 3);
bitmapData[index + 2] = (byte)Math.Min(255, targetColor.R + deltaG / 3 + deltaB / 3);
// Now we have exactly the target color or the target color proportional to
// brighten up and can apply the highest RGB color component margin to the alpha byte.
bitmapData[index + 3] = (byte)Math.Min(255, 255 - deltaM);
GL.Color4 (1f, 1f, 1f, 1f);
GL.Enable (EnableCap.Blend );
GL.BlendFunc (BlendingFactorSrc.SrcAlpha, BlendingFactorDest.OneMinusSrcAlpha);
The initial steps to provide a FreeType alternative to System.Drawing.Graphics class have been done successfully. Things, that are currently missing, are:
There is a very interesting article Texts Rasterization Exposures about text rendering detalis written for the Anti-Grain Geometry (AGG) library. The text rendering detalis that are discussed, are namely:
Even if the article is pretty old (July 2007) and ClearType is doing a better job since its improvement with DirectWrite (Windows 7), it shows the complexity of text rendering.
The subsequent image shows gray-scale anti-aliased text with color fidelity (on the left) and sub-pixel positioned text with brightness fidelity (on the right), zommed to 400%. Both texts have been rendered with WPF (.NET 4.5) and have different pixel set on different positions.
width="593" alt="Image 25" data-src="/KB/openGL/1057539/ClearType.png" class="lazyload" data-sizes="auto" data->
While gray-scale anti-aliased text might show saw-tooth edges, especially in case of high contrast and small letters.
Sub-pixel positioned text on the other hand might look distorted on a colored background, especially in case of gradient or changing background color.
This is the first version from 21. November 2015.
The second version is from 13. January 2016 (improved TextRenderer and new FreeType samples FreeTypeGlyphWise and FreeTypeLineWise; text improvements).
The third version is from 04. July 2016 (text fixes and improvements; additional links).
The fourth version is from 03. October 2018 (text fixes and improvements; additional links).
The fifth version is from 27. October 2018 (new FreeTypeDanamic and FreeType sample; text. | https://codeproject.freetls.fastly.net/Articles/1057539/Abstract-of-the-text-rendering-with-OpenGL-OpenTK?msg=5268615#xx5268615xx | CC-MAIN-2021-49 | refinedweb | 2,745 | 51.14 |
How to implement a website full text search¶
Orchard Core comes with a Lucene module/feature that allows to do full text search on your websites. Most of the time, when running a blog or a simple agency website you will require to search mostly within your pages content. In Orchard Core it is possible to configure which text/data you want to index in the Content Type configuration through the usage of Liquid.
Before going further I want to specify that TheBlogTheme includes a recipe which will configure all of this by default for you without any required knowledge. Let's see how we make this available for you step by step.
First step : enable the Lucene feature in Orchard Core.¶
As you can see here we have 3 different Lucene features in Orchard Core. You will require to enable the "Lucene" feature for being able to create Lucene indexes.
Second step : create a Lucene index¶
Click on "Add Index" button.
Let's pause here and see which options we have on a Lucene Index.
The Index Name will be the name used for identifying your index. It will create a folder of that name in
/App_Data/Sites/{YourTenantName}/Lucene/{IndexName} which will contain all the files created by Lucene when indexing.
The second option is the Analyzer Name used for this Index. The analyzer here is a more complex feature for advanced users. It allows you to fine tune how your text is stemmed when it is indexed. For example, when you are searching for "Car" you might want to also have results when people are typing "car" which is in lowercase. In that case the Analyzer could be programmed with a Lower case filter which will index all text in lower case. For more details about analyzers please refer to Lucene.NET documentation. By default the Analyzer Name in Orchard Core has only the standardanalyzer available which is optimized for "English" culture chars. Orchard Core has made analyzers extensible so that you can add your own by using one of the provided analyzers in Lucene.NET or by implementing your own. See :
You can register for example a custom analyzer with the DI using this example from a startup.cs file in your custom module :
using Microsoft.Extensions.DependencyInjection; using OrchardCore.Lucene.Model; using OrchardCore.Lucene.Services; using OrchardCore.Modules; namespace OrchardCore.Lucene.FrenchAnalyzer { [Feature("OrchardCore.Lucene.FrenchAnalyzer")] public class Startup : StartupBase { public override void ConfigureServices(IServiceCollection services) { services.Configure<LuceneOptions>(o => o.Analyzers.Add(new LuceneAnalyzer("frenchanalyzer", new MyAnalyzers.FrenchAnalyzer(LuceneSettings.DefaultVersion)))); } } }
The third option is the culture. By default Any culture will be selected. Here, the option is made for being able to define that this index wether should be only indexing content items of a specific culture or any of them.
Content Types : you can pick any content types that you would like to see this index parse.
Index latest version : this option will allow you to index only published items or also index drafts which could be usefull if you want to search for content items in a custom frontend dashboard or even in an admin backend custom module. By default if we don't check this option it will only index published content items.
Third step : configure search settings¶
By enabling previously the Lucene module we also added a new route mapping to
/search which will require some settings to work properly. First thing to do after creating a new Lucene index is to go configure the search settings in Orchard Core. Here we can define which index should be used for the
/search page on our website and also define which Index fields should be used by this search page. Here we are using normally by default
Content.ContentItem.FullText. I will explain later why.
Fourth step : set index permissions¶
By default each indexes are permission protected so that no one can query them if you don't set which ones should be public. To make the "Search" Lucene index available for Anonymous users on your website you will require to go and edit this user role and add the permission to it. Each index will be listed here in that
OrchardCore.Lucene Feature section.
Sixth step : test search page¶
Here for this example I used TheBlogTheme recipe to automatically configure everything. So the above screenshot is an example of a search page result from that theme.
Seventh step : fine tune full text search¶
Here we can see the Blog Post content type definition. We have now a section for every content type to define which part of this content item should be indexed as part of the
FullText. By default content items will index the "display text" and "body part" but we also added an option for you to customize the values that you would like to index as part of this
FullText index field. By clicking on the "Use custom full-text" we allow you to set any Liquid script to do so. So as the example states you could add
{{ Model.Content.BlogPost.Subtitle.Text }} if you would like to also find this content item by it's Subtitle field. For the remaining, we let you imagine what you could possibly do with this Liquid field : index identifiers, fixed text or numeric values and else!
Optional : Search templates customization¶
Also, you can customize these templates for your specific needs in your theme by overriding these :
/Views/Shared/Search.liquid or .cshtml (general layout)
/Views/SearchForm.liquid or .cshtml (form layout)
/Views/SearchResults.liquid or .cshtml (results layout)
For example an idea here could be to simply customize the search result template to suit your needs by changing "Summary" to "SearchSummary" and create the corresponding shape templates.
SearchResults.liquid :
{% if Model.ContentItems != null and Model.ContentItems.size > 0 %} <ul class="list-group"> {% for item in Model.ContentItems %} <li class="list-group-item"> {{ item | shape_build_display: "SearchSummary" | shape_render }} </li> {% endfor %} </ul> {% elsif Model.Terms != null %} <p class="alert alert-warning">{{"There are no such results." | t }}</p> {% endif %} | https://docs.orchardcore.net/en/dev/docs/guides/implement-fulltext-search/ | CC-MAIN-2020-10 | refinedweb | 1,007 | 56.35 |
» Publishers, Monetize your RSS feeds with FeedShow: More infos (Show/Hide Ads)
Doubling the Size of our Android Team
You see, there’s this guy named Graham. He’s been YNAB’s sole Android developer for a long while now. First he moonlighted, then we wooed him over to work with us full-time. One developer can only do so much.
We’re looking to double the size of our Android team in the next 30 days. From one to two.
(Android/Apple fans that are keeping score, that will mean the iOS and Android teams will be the same size.)
We have big plans for the Android platform.
If you’re an experienced Android developer looking for a full-time, remote gig, read on. If you know an experienced Android developer, forward them this posting!
A Bit About Us
We build the best budgeting software around. Our Android app consistently reviews very well. Your craftsmanship will be seen by hundreds of thousands of YNABers. YNABers really like our Android app, but we’re far from satisfied.
We build software that delights. We focus on helping our users implement YNAB’s Four Rules.
We have one overarching requirement when it comes to having you join our team. Our Cultural Manifesto has to resonate with you. Not on a really weird level but, you know, pretty deep down.
Now, let me sell you on the idea of working with us at YNAB.
I’ll hash these out quickly. This is a bit of a glimpse into how we work:
Adulthood
We’re all adults. There’s no need to punch a clock, or ask for permission to take off early one afternoon to go see the doctor. You set your schedule to your liking. We just ask you to do really cool stuff that YNABers will like. We look at what you’re accomplishing, not how long you sit in front of a computer.
No Crazy Hours
We rarely work more than 40 hours per week. There may be a few times where things go a little crazy and people log some more time. Most make sure to take some extra time off so it all balances out. We’re in this for the long haul. Don’t go crazy on the hours.
Take Vacation
We don’t track vacation, but you’re encouraged to take vacation. I’m not just saying that with a “wink wink” where nobody actually takes vacation. We all like vacation. It’s important to get out and do something. Post pictures of your vacation in our internal chat room, creatively named #office_wall.
Live Where You Want
You can live wherever you want, because we know you do great work. As I write this, Taylor (our CTO) is in Kuala Lumpur. I’m not sure where he was before that, or where he’ll be next. (Taylor’s edit: I’ll be in Singapore next.) Not all of us travel so extensively, but the fact that he does is totally okay because, again, we’re all adults. Just make sure you have a reliable internet connection.
International is Absolutely Okay
If you are Stateside, we’ll set you up as a W2 employee. If you’re international, you’ll be set up as a contractor. Whether you’re an “employee” or “contractor” it’s all the same to us. You’re part of the team. (We are spread all over the world: Australia, Pakistan, Switzerland, Scotland, Canada, and all over the United States.)
If You’re Stateside…
A few notes, specifically if you’re Stateside where we do payroll:
– We have a Traditional and Roth 401k option. YNAB contributes three percent whether you choose to throw any money in there or not.
– We don’t offer health insurance. Your health insurance is your business. We wouldn’t presume to make that decision for you.
Bonuses
We do bonuses. There’s a 40-page document outlining how they’re calculated. Just kidding. Bonuses are awarded when you do cool things. If you were to ship an overhaul of the Android app, I think you’d be due a bonus. Or if you and Graham (the guy from the beginning of this job description) were to ship an Android app that rocked it on a tablet? You’d get a bonus. You may also have random YNABers stopping you on the street wanting to buy you a drink. That’s how much they will love your work!
The YNAB Meetup
We get together every 12-18 months and have a great time: Best Western conference room, powerpoints for hours…and budget talk. Just kidding. This year it’s in Costa Rica. We don’t really work during the meetups. We do eat a lot though.
Do Stuff Besides Work
We all have lives outside of work. Erin, our Lead Teacher goes on long hikes over mountains (in Utah, we call those baby mountains). Kyle flies drones (baby airplanes). Lee is building a tool shed (baby house). We want you to have interests outside of your Android craft.
Stuck in an Elevator
In the end, you have to imagine that you and I, or you and Graham, or you and Taylor (our CTO) were stuck in an elevator together (maybe even in Kuala Lumpur). Besides the claustrophobia and fact that all I had on me were some almonds in a Ziploc, would it otherwise be a pretty great experience? (Taylor’s edit: Maybe not Kuala Lumpur. It’s pretty hot and humid here, especially in elevators, so it’s okay to imagine an air conditioned elevator somewhere instead.) If you think we could make that situation pretty darn enjoyable, then you should continue reading, because now I want to talk about you.
About You
You’re an experienced Android developer who would like to work with us full-time. Compensation will be based on experience.
You would be:
- Working with an existing, well-architected codebase.
- Helping us improve performance, fix bugs, etc.
- Learning an awesome Cloud Sync technology.
- Creating some cool new features(we have a lot left to do here).
You’re the one we’re looking for if you:
- Are a top-notch Android developer.
- Have excellent debugging skills.
- Have great OO design and architecture skills.
- Write code that is easy for other programmers to understand and use.
- Use descriptive variable names in your code.
- Have excellent spoken and written English (we’re an international team, so accents are fine!)
- You’re self-motivated, and thrive with directions like:
- “This part of the program is too slow, and these are the places that might be good to start looking. Do you think you can make it fast, even on this pokey device?”
- “This component needs to be re-architected to allow for
. How do you think we should do it?”
- “Our code needs to call into a Javascript library (not a typo), but that Javascript library is crashing because it can’t find the setTimeout method. Can you investigate?”
If this sounds like your ideal environment, read on!
How to Apply
- Your cover letter can be your email. No need to send anything separately.
- Send your resume in PDF form.
- Please include links to Android apps you’ve built, and describe your role in building those apps.
- Include “ANDROID FTW” in the subject line of your email. If you don’t, we won’t read the email.
- Applications should go to: YNAB-YNAB0766@applications.recruiterbox.com.
- The deadline for applications is 11:59PM on Friday, October 17th.
Please complete the following two questions, and include them with your cover letter. This shouldn’t take you very long.
(1) Write a method “countTo” that returns a string containing every number from zero to the number passed in. So, when I call ‘countTo’ like so:
YourClass.countTo(10);
It should return the following string: “0 1 2 3 4 5 6 7 8 9 10”
public class YourClass { public static String countTo(int value) { // Your code goes here } }
(2) Change the following code so that the view is hidden 5 seconds after the Activity is created, instead of immediately.
@Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.my_activity); View myView = findViewById(R.id.my_view); myView.setVisibility(View.INVISIBLE); }
As many faithful blog readers may have noticed, I’ve been thinking a lot about debt. I passionately implored people to dramatically question their assumptions around debt, we discussed good debt vs. bad debt, and today I want to discuss debt through the lens of the YNAB Method.
These rules, after all, form the foundation of a method that is really successful for me and tens of thousands of YNABers. So what do The Rules say about debt?
On the surface? Nothing.
But when we dig deeper, it turns out to be a whole lot.
At it’s core, the YNAB method is all about prioritizing; YNAB helps you align your money with your priorities, goals, and values. When you do, you’re in control of your money, and you have a broader range of choices and more leverage (not that kind of leverage!).
Take Rule One for example, Give Every Dollar a Job. When we say this, we mean only the dollars you have, and then spending based on those decisions. It’s simple, powerful and assures that you’re spending less than you are earning. It’s the only way that you will really get ahead. By definition, on the other hand, debt is committing yourself to spending dollars you don’t yet have. That’s a big part of where my skepticism of debt comes from. Especially when we’re talking about using credit cards to spend more than you have, which allows you to ignore the prioritizing that is the core of the YNAB method.
The other way that debt impacts Rule One is that debt payments need to be prioritized ahead of a lot of other jobs for your money. YNAB works so well because it helps you get control of your cash flow–how you use and direct your money in the day-to-day and month-to-month. But when you carry a lot of debt, a lot of your cash flow is already prioritized for you. Almost before you even start, your choices are restricted. You’ve got jobs you’d really like your dollars to do, but because you have debt payments, there aren’t enough dollars to go around.
Of course, Rule One is also the perfect tool for solving this dilemma. Rule One’s laser focus on prioritizing is exactly what’s going to help you find the dollars you need to get out from under that debt. But I’m getting ahead of myself.
Rule Two is assigning money you have now for expenses that will happen later. It helps you smooth out large, irregular expenses so that you’re prepared. Debt payments, on the other hand, are using money you have now for expenses that have already happened. When I rail against debt, that’s what I’m thinking about. I want you making choices for what’s happening now, and in the future.
When a Rainy Day comes along–and it will–you’re less prepared for it when you’re hamstrung by debt. The money you could have been setting aside has been going to your debt payments. In the worst case, you might need to incur more debt just to get by.
YNAB’s Rule Three is all about flexibility and accountability. When you overspend (and, hey, you will), you need to be flexible and deal with that overspending right away. YNAB is great at this. It’s built for it.
But your debt payments severely impact your flexibility. If thirty percent of your cash flow goes towards debt payments, you simply have fewer options for changing plans. When things aren’t flexible, they break. I don’t want you or your budget to break.
One of your most powerful tools for staying out of debt and being in control is living on last month’s income, YNAB’s Rule Four. This gives time to deal with whatever life might throw at you. You can plan for the whole month at a time, and really appreciate the big picture. But it’s awfully tough to save up your Buffer when you’re writing big checks to credit card or auto loan companies every month. With debt, it simply takes longer to get there.
These are really good reasons not to take on new debt, and why I wrote last time about being selective about what makes for “good debt.” If it’s going to impact your cash flow and your decision making in these ways, I sure want it to be something important, and something that adds value to your life. There’s plenty of room to engage in friendly debate about how mortgages, student loans, and other debt fit into that. The important thing is that it is about your priorities. I think using YNAB has made you a lot better at identifying what those are.
*Updated to add: the application deadline for this job has been extended through October 14, 2014 at midnight.
Some of you know YNAB has softly launched a small business services brand called PACE. PACE creates and maintains budgets for small business owners (using YNAB, of course), helping them get off the cash flow roller coaster, pay themselves more, and sleep better at night.
Mark, PACE’s Small Business Consultant, took on his first client back in February, and since then we’ve added over twenty more. Pace is accelerating, and we’re looking for help.
I’m looking for an expert YNABer to fill the role of Client Budget Specialist, keeping clients’ YNAB files current and pristine, allowing Mark to spend his time a) consulting clients, and b) enrolling more of them.
Your responsibilities will include:
- Building new budgets based on incoming clients’ transaction histories.
- Updating each client’s budget weekly by adding new transactions, categorizing appropriately, and reconciling accounts.
- Re-organizing and customizing clients’ budgets based on changes in their business and/or goals.
This position is for you if:
- You’re immediately available for up to ten hours per week, and open to having your hours increase as PACE’s client count grows.
- You’re meticulous, bordering on obsessive. The thought of an unreconciled account causes you to lose sleep at night and possibly break out in hives.
- You love the software and the Four Rules. Spending 10 hours per week in YNAB sounds like a blast to you – maybe even more fun than binge-watching Alias on Netflix.*
*Great news. This position would allow you to do both at the same time. Boom.
YNAB expertise is crucial, so please apply if all of the following are as comfortable as your favorite hoodie:
- Adding budget accounts with the correct starting balance. Yes, even credit cards. Especially credit cards.
- Importing transactions in various file formats, including .csv. Especially .csv.
- Updating large numbers of transactions at once.
- Editing Payee settings.
- Creating Off-Budget Accounts and understanding their interaction with Budget Accounts.
- Exporting budget data to Excel or Google Sheets (which we use to produce Profit and Loss statements clients).
Here are some of the nitty-gritty details for joining the PACE team:
- You’ll be paid a flat rate per client, per month. Mark has done this work over the last several months, so I can tell you the hourly rate will come out somewhere between $12 and $15 per hour, depending on your efficiency.
- You’ll be an independent contractor, where you’ll be responsible for your own schedule, equipment, invoicing (you’d invoice YNAB), etc.
- Your computer needs to be (1) a Mac and (2) reasonably powerful (SSD and plenty of RAM would be a big plus).
Why? Because (1) the YNAB dev team (George, specifically) built us a very handy piece of software to help with PACE’s client management, and it will only run on a Mac, and (2) Dropbox + YNAB x A Couple Dozen Budgets = Pinwheel of Death on an old Mac.
Here’s how to apply:
- Send a cover letter and resume to: YNAB-YNAB0224@applications.
recruiterbox.com.
- In the subject line, include “beats per minute” in some way. If you don’t include it, we won’t read your email. That would be very sad.
- Answer the following questions (just include these in your email, don’t do a separate PDF, Word doc, Powerpoint presentation, screencast, etc.):
- My clients need me to categorize their income streams in more detail than “Income for this month” or “Income for next month.” Tell me how you’d do this in YNAB.
- Sometimes I have to bulk-update the Memo field on dozens (or hundreds) of transactions in YNAB. Explain how you’d tackle this problem.
- You have a Budget Account whose Working Balance is $11.58 higher than the bank is saying it should be. What’s your next step?
*Updated to add: the application deadline for this job has been extended through October 14, 2014 at midnight.
Well, my last post about debt being your single priority was about as exciting as it gets around here! (Except maybe when we release an iPad app.)
I know the post came across as blunt, hard-headed, opinionated, presumptuous, and a perhaps even a bit mean.
I hope(?) I also managed to get a few of you to question some deeply-held assumptions. That was my goal. The comments were fairly evenly split, which means I might have just about nailed it! ;)
Two themes emerged from your fantastic comments:
- What about mortgage debt?
- Then more generally, what about “good debt?”
Regarding mortgage debt, I answered in the comments that we paid off our mortgage in 2010. It was a big goal of mine from the time I was 23, my hair grew very long as a result, and I actually ended up owing the IRS money for a few months (I don’t recommend this). I’ll probably write a whole post about this. If you’re interested. Just let me know in the comments.
Good Debt, Defined.
I feel like there are a few ways we can assess whether debt is “good” or “bad.” We all recognize that rules are meant for exceptions, so please see these as…guidelines worth considering. I feel pretty comfortable putting them out there.
- What is the return on the purchase that has been financed?
- How much debt is begin assumed?
- Why?
If you’re purchasing something that goes down in value, it’s bad debt.
I think this is a pretty solid definition. Homes don’t normally go down in value. Especially if you’re going to stay in the home for a long while.
A college degree doesn’t go down in value, though you need to be very careful here. Whether the numbers fall in line with your life’s passions or not, many industries and areas of study do not give much of a “return” on that debt you’ve assumed.
I was speaking with Adam, our Chief Product Officer, and he and I both share a lot of enthusiasm for the whole two-years-at-a-junior-college-then-transfer strategy. I also love state schools. Be crazy about scholarships.
Follow your life’s passion, balanced by a vice-like grip on your wallet. And my heavens, use YNAB for free while you’re a student.
My gut feeling is that the student debt load is not just a reflection of unbelievably sky-high tuition costs. It’s also, to some degree, a reflection of spending that hasn’t been checked. There are stories of students living the high life while on loans, and then stories of students that barely eked by, worked their way through, etc. Be one of those “barely eked by” students. :)
Stuff that goes down in value
That bison burger I just ate? That went down in value. The Disney(!) vacation you just went on? That went down in value. Any vacation goes down in value.
(But Jesse! The memories are priceless! Yes, but financed vacations don’t have a corner on priceless memories.)
I’m struggling to find something you purchase on a credit card that doesn’t go down in value. Cleats for my three boys for soccer… My little girls jeans… dinner last night… that reupholstered chair in the front room…
I like this definition. If you’re purchasing something with debt that goes down in value, that’s bad debt.
A new car’s value drops like a rock. A pretty used car’s value doesn’t drop nearly as fast, but it’s still inevitably headed south.
Can the amount of debt make it bad debt? I think so
The two items we just talked about above, those that hold or increase their value over time, and therefore should be good debt, the idea of a home and a higher education…
Can those become bad debts because of their size?
I think so.
Can a “good” debt like a mortgage become bad?
This is where you see rules of thumb come into play. A popular (though not always achievable if you aren’t really being creative) guideline is a 15-year mortgage at no more than 25% of your take-home pay.
Another rule of thumb to keep good debt from going bad is that no more than 1/3 of your take-home pay should be dedicated to mortgage and utilities.
I’ve recently been reading a book, The Not So Big House. They don’t have a square footage requirement, but they do have a guideline that every room of your house should be used every day.
(Then there’s the Tiny House movement. Julie calls it $30,000 tent living. The book I read called for 200 square feet per person. I think there’s a balance to be struck, but also love when people challenge conventional wisdom and buck wildly entrenched trends, like the trend of square footage increasing steadily for the past seventy years.)
This is all, of course, dependent on where you live. Regardless of where you live, you’ll want to pay careful attention to our third guideline when evaluating whether something is good debt or bad debt.
Why?
If you’re taking on a student loan, why? Why for that particularly expensive school? Are there any other (even off-the-beaten-path) options?
If you’re purchasing a home, why in that more expensive location? Have your assumptions around a location been checked, and re-checked?
What about the size of the home? One commenter mentioned in my last post that I must not have any financial regrets. I have several. One was spending $65,000 and nine months on software that I then completely scrapped. Another was purchasing the home I now live in. (I also regret planting 11 fruit trees, a raspberry patch, an asparagus patch, and a strawberry patch, because now it’s going to be even harder to downsize ;))
Here’s one for-instance. Julie and I like entertaining, so we were certain that we needed a dining room. We bought this house with a dining room and guess what else you get with a house that has a dining room? A bigger great room, kitchen (no regret there, I like the large kitchen), an extra bedroom, a larger basement, wider staircase, wider hallways, ridiculous master bedroom… the list goes on. All because we were convinced we needed a dining room. That means there’s more to clean, paint, fix, furnish, decorate and maintain.
That dining room has been used maybe ten times in the six years we’ve lived there. I don’t have any math to back me up, but I think that dining room cost me about $150,000. I’m no longer an accountant, but the “per party” cost is at about fifteen grand.
Recognize that I’m being a bit tongue-in-cheek here, but completely honest with you about my regret. Julie and I didn’t do a serious analysis of our true priorities and it led to assuming more debt than needed. The debt still fit okay into our budget, so it wasn’t really bad debt I guess, but it could have been better.
So back to guideline two, where the amount of debt can morph it from good into bad, you really need to work through guideline three, where you ask yourself why again and again, and really get to the brass tacks of your needs and situation.
Conclusion
These guidelines feel pretty good. If what you’re financing goes down in value, it’s bad debt. You can take on some good debt, and make it bad by assuming way too much of it. Finally, ask yourself why all the time, and be brutally honest with yourself.
P.S.
A few things have come of my somewhat incendiary post from earlier:
- I plan on moving my family into an apartment to test the idea of happiness as it relates to living quarters with seven people (some of them rather small people, but large and loud voices). Julie said she’s good for a test that lasts three months. This isn’t a “real” downsizing test, because we know our home will be available when we’re done, but I think it will at least give us some insight into what it would be like to live in a much, much smaller space. This will probably happen in early 2015. Fall is too busy to orchestrate a move. Well, I’m too lazy to orchestrate it.
- I increased our giving to help fund scholarships at the School of Accountancy at BYU (my alma mater). That’s one way to help students avoid student loan debt. A few people were really concerned because I didn’t distinguish between debt, and student loan debt. To be clear, I really don’t like student loan debt but can’t pretend to know everyone’s situation.
- I’m not going to eat out for sixty days. I’m two days in. Some people really balked at my idea of not eating out. I’ve grown pretty used to eating out. I eat out for lunch four out of five days, probably. Julie and I go on a weekly date and it almost always involves a tasty restaurant. So this is coming from a guy that has a pretty solid habit of frequenting restaurants. I’ll make two exceptions to this: 1) If it’s a legitimate business lunch, where someone invited me. If I invite someone to meet up, I’ll avoid making a meal of it. 2) If I’m traveling and staying in a hotel. If I travel and have access to a kitchen, no eating out.
- I really enjoyed writing, and have made myself the sole blog contributor again. (With the exception of a few posts here and there from our Teachers.) Be prepared for many, many more commas. I was told I don’t use them, correctly.
The first time I set up a budget was way back in 2006 when YNAB was just a spreadsheet for Excel. I’d just added up my credit card debt and I knew I’d need to budget my way out. I set up my categories and entered my income and started working my way down the screen. $60 for the electric bill, $945 for the mortgage and so on.
Then I got down to the grocery category. I stared at it for a long time. How much should I budget for groceries? I had no idea. What did I spend on groceries last month? Again, no idea. I remember feeling like I should know this. It was pretty discouraging. So I did the only thing I could think of.
I guessed. Yup, I just guessed. I figured I was about to learn a lot about my spending and I should just be flexible. I remember making lots of guesses in those early days.
I suppose I could have logged into my bank online and searched through past transactions, but I didn’t think of that. And honestly, even now, it doesn’t sound very appealing.
There’s something very freeing about guessing. You know you’re guessing so you don’t have to be right. There’s much less pressure when you remove the need to perfect. Perfection is overrated anyway. We learn more by reflecting and adapting to information.
We often think perfection in budgeting means setting a budget and sticking to it. But every month would have to be “normal” for that to work. When was the last time you had a normal month? There is no such thing! Thanks to Rule Three, perfection is defined by sticking to the process of budgeting. You evaluate the budget as the month unfolds and make sure that things are still lined up with your priorities. Ahhh, that’s perfection.
If you’re just starting and you’re in the same situation I was, you’ll be guessing a lot. Then, with the help of Rule Three, you’ll be adjusting a lot when you’re guesses aren’t right. But this is a really, really good thing. You’ll gain crystal clear awareness of what’s going on financially. That will lead to better decisions about money.
You’re also going to get lots of practice refining the budget. You’ll be massaging things here, tweaking things there, and constantly reevaluating your priorities. They’ll start to become obvious and will jump right out at you. When you can align your spending with what matters, money management becomes a much more positive and rewarding experience.
Over time something magical will happen. Your guesses will get a lot more accurate. This is partly because you’ll be more aware of your spending habits, but also because you’ll have some data to look back at. For me, I budgeted too little for groceries when I first started. I was constantly adjusting that category. After about 6 months, I was able to see what my average grocery spending had been and I was able to use that as a guideline moving forward.
But don’t worry if that doesn’t happen right away. Enjoy the journey. Enjoy the guessing. When you make an adjustment say, “Hooray! I saw a problem and I fixed it! Go me!”.
My guess is all this guessing will eventually put you in pretty good financial shape.
Update 9/16/2014: I’ve clarified this (rather divisive) blog post, to be specifically talking about non-mortgage debt. I’m not a fan of student loan debt, but it’s a gray area when it comes to “good debt verses bad debt.” I plan on writing a post about good debt and bad debt, coming shortly :).
I’ve had this sneaking suspicion, as of late, that we’re becoming—how do I say this— soft in our thinking, which directly affects our actions.
YNAB, as a brand/company, doesn’t talk too much about things beyond the Four Rules. We teach you:
- Rule One: Prioritize your spending/saving, and recognize the reality of finite resources. You don’t hear us say much about how or what to prioritize.
- Rule Two: Look ahead for larger, less-frequent expenses, and break them into monthly amounts. Build those monthly amounts into Rule One’s priorities. Again, we don’t take a stance on what larger, less-frequent expenses you should have.
- Rule Three: Adjust your budget as the need arises. We don’t want you to quit. We don’t want you to define successful budgeting as the ability to guess what will happen in the future. Rule Three is just a way of saying, “Hey, if Rule One and Two are feeling a bit brittle under these new circumstances, it’s okay to change things.” We do not teach you what “needs” would merit changing your budget under…yep…Rule One.
- Rule Four: We teach you to learn to live on last month’s income (called a buffer). It’s just a really nice place to be. You can get out of the paycheck to paycheck cycle, pay your bills as they come due, and just sleep a little easier at night. We do not tell you that reaching this buffer is mission-critical. We don’t yell and scream about the fact that you don’t yet have your buffer. We let you decide how to prioritize your buffer based on, you guessed it, Rule One.
Rule One’s kind of a Big Deal, no? Lo and behold, your priorities end up being the important part in the whole methodology. We’re saying something like:
Prioritize (r1), now prioritize taking the future into account (r2), now prioritize again when you get new (better) information (r3), now make the whole prioritization process easier by prioritizing a month ahead (r4).
We’re silent on your actual priorities. As “YNAB the Fancy Company” we’re silent.
But I’m going to share my own personal opinion on your priorities. Imagine that you and I know each other very well, you know I care about you, and want the absolute best for you.
This post, where I discuss your priorities, is particularly difficult for me, because I don’t presently share in your struggles. Honestly, I’ve struggled to find what I can write about these days, because I haven’t had serious, or even moderate financial strain since the end of 2009. I feel a little bit like the skinny guy with the lightning-fast metabolism whose chowing down on his third donut while telling you to eat celery, cucumbers, and carrots. (Since I’m assuming we know each other very well, you would know that I find celery, cucumbers, and carrots to be the most useless vegetables of all time.)
At the same time, my current lack of financial strain is a result of 20 years of priorities that we’ll now discuss.
In all of this, I’ve been blessed with tremendous health (a priority of mine), an unbelievably supportive wife (the finding thereof was also a priority, but I think I got really lucky as well), and great parents. Let’s start with my dad.
In 1994, I was 14 years old and my dad gave me two books:
- Financial Peace
- The Richest Man in Babylon
When I was 18, my dad gave me The Millionaire Next Door.
Reading those books has framed my priorities for the past 20 years. By “framed” I should say something more like, “I treated them as absolute truths, and didn’t waiver from the instruction.”
Here’s where I write about things you may be doing wrong. In other words, here’s where I start questioning your priorities. Here’s the single biggest mistake I see in your priorities.
You treat debt, especially non-mortgage debt, as a given
You’re being weak here. You’re being manipulated by an environment that’s teaching you that debt is simply a part of life. You’re not questioning your core assumptions around your lifestyle, and because those assumptions aren’t questioned, debt is a given.
If you would flip that on its head, and treat non-mortgage debt like the most absurd proposition of all time, if you would get to a place where you don’t even entertain the idea of debt, your mind would take hold of that thought, and you would unleash your creativity, your resolve, and your force of will on that debt.
In early 2004, I was projecting what mine and Julie’s finances would be like once our first baby came into the picture. I was making great money as an internal audit intern ($14/hour) and had a bit over two years left of school for my Master’s of Accountancy. I was sitting on our Dr. Pepper Can Cooler (wedding present) at the Wal-Mart desk, trying all sorts of scenarios that would 1) allow me to finish school, 2) let Julie quit her job with the state ($11/hour) to stay home with Porter and 3) do it all without borrowing any money. All three were non-negotiable.
My projections were dismal, I was mad, and there didn’t seem to be a solution. I was projecting full-time school, and entertaining the idea of working 30 hours per week at my internship (we technically weren’t “allowed” to work in the accounting program, but I ignored that guideline). It was a road to burnout, but I was entertaining it because I wasn’t entertaining the idea of borrowing the money needed for tuition.
Julie and I got $5 each month for fun money. Our grocery budget was $100 per month. We barely drove the car (I would have sold it except it wouldn’t have made a dent in our shortfall). We skipped a honeymoon. We didn’t eat out. Despite our best frugal efforts, the projected monthly shortfall was about $350. Coincidentally, that was how much we paid in rent.
So I decided to start YNAB.
Had I entertained the idea of borrowing money to cover tuition (the loan would have been “super reasonable,” just about $7,000 for a graduate degree! <—sarcasm intended), I never would have felt the need to build a little business that could conceivably cover rent. Had I entertained the idea of debt, this business never would have been started. What a tragedy that would have been.
Stop treating debt as a given. It’s not. You’re just not willing to sacrifice really hard things in order to get there. You need to prioritize your debt and get obsessed about it.
You are underestimating your creative power. You’re underestimating your work ethic. You’re underestimating your force of will. Stop being content with just chipping away at your debt. Get rid of it. You shouldn’t have it. It’s plenty hard reaching your financial goals presently without having to also be funding your lifestyle from the past.
You don’t question your assumptions..
Move.
Downsize. Sell your home, taking any equity out of it that’s there and get something smaller, cheaper, and more reasonable. You’ll make new friends. Good friends you’ve had, you’ll keep them. Rent for a while. Rent a small apartment. Enjoy the journey. Stick your kids in a school that’s just “so so” and then spend time with them at night to fill in any gaps you’re worried about. Consider an apartment.
Consider an apartment and dirt-cheap rent.
Think about renting an apartment for a while.
Protect the idea from your assumptions that are attacking it right now. Defend your idea. Reframe.
Sell your car(s).
If you’re upside down in your car, you should sell it right now. Borrowing money for an asset that goes down in value is simply insane. INSANE. It’s the ultimate poor man’s move. It’s sickening.
Because this is Jesse, and not YNAB, I’ll be a bit prescriptive: If you have any debt at all, and your cars are worth more than $3,000, you should trade down to something more reasonable. That is exactly what I would do.
Sell a lot of other stuff.
It’s not a good investment, to borrow money to buy a bunch of stuff, and then have to sell it for pennies on the dollar so you don’t have to keep paying for it, but it’s still a good move for you.
You’re stuck with your situation until you decide to do something about it. Go through your garage and get rid of half your stuff. Then get rid of another half. Since you’re moving anyway, this will help that process along. Now we’re making progress, and being efficient to boot.
If you have seasonal gear (skiing, mountain biking, hiking, camping, boating, etc.) it needs to go. Buy that stuff later from someone else who’s trying to get out of debt. Share your story of doing the same thing, and how it transformed your thinking forever, and how it’s hard now, but they will be so happy they did it.
De-clutter to gain visibility.
You should be able to list every item of clothing you own. You don’t need all of those shoes, sweaters, and skirts. Minimize. Sell your excess clothing or just give it away. Pull all of the clothing from every corner of your room and pile it all on your fancy bed.
Grab a laundry basket and fill it with the clothes you want to keep.
Shoot for selling, or giving away 80 percent of your clothing.
De-clutter in every single room of your house. Start from one end of your massive house and move to the other. Get rid of stuff. Sell the kitchen gadgets you barely use. Sell the kids’ toys they’ve forgotten about. Sell everything.
De-cluttering will give you visibility into your consumption, because you’ll now notice every new item that you mistakenly purchase.
Stop eating out.
You lost the privilege when you went into debt. I know it’s convenient, but you don’t get to do convenient things because you used up all of your Convenience Points when you took on the debt that made your life so convenient. (It’s alluringly convenient to not have to pay cash for stuff, because you don’t have to work for it.)
Without even going into the absurdity that surrounds us, with eating-out establishments by the thousands surrounding all of us, without even going into how absurd it is that we pay disgusting prices to eat disgusting food…
We won’t talk about that. I’m just talking about your privilege. It’s been revoked. Eat a can of beans. Eat it right out of the can. Don’t even warm it up in the fancy microwave. Grab the can opener, open the can, grab a spoon, and eat that can of beans. Smile while you’re doing it. In that instant, you’re doing something that others would find absurd, bizarre, or disturbing, and in that moment, you’ll know you’re on the right track.
You will have the last laugh, you crazy, crazy person.
Cancel your vacations.
Obviously you don’t borrow money to go on vacation. But if you choose to go on vacation while you are still in debt, then you are borrowing money to go on vacation and no sane person would ever do that.
Do a stay-cation. A stay-cation is where you stay home and host a garage sale to get rid of all the stuff, right before you list your house for sale to move along with the downsizing process. After all, you can’t fit a ton of your stuff in that small apartment, so it needs to go anyway.
You do not go on vacation when you’re in debt. You don’t even do little “getaways” for the weekend to “recharge.” You don’t need to recharge. You need to work. You don’t get the luxury. You gave up that luxury when you took on debt.
Do you see what happened here? Your past self really sold you a bill of goods. Your past self stuck you with a massive bill for things you aren’t even enjoying anymore. You’ve been conned by the greatest shyster you’ve ever come across, and that shyster is you!
Here you are, not even able to enjoy a simple vacation because that con artist took you for all of your future disposable income that could have been used for a vacation.
The only way to get back at yourself, for conning yourself the way you did, is to pay off the debt. For your current and future self.
Work more.
One of the reasons you can’t take a vacation is because you don’t have any free time. Your free time should be spent doing odd jobs to earn extra money, working overtime at your job, holding your third garage sale, becoming an ebay expert, eliminating every expense possible, and optimizing the expenses that, despite your herculean efforts, can’t be eliminated. Yeah… that doesn’t leave much time for vacations.
Use Facebook for something useful, and let your network know that you’re looking to earn extra money. Let them know that you can help them with any job they can conceive. Show up early, stay late, deliver above and beyond expectations and you’ll probably find yourself with a business bursting at the seams from referrals within a few months.
I’m a family guy (heaven knows, I have five kids) and I would miss my kids terribly if I were gone all of the time, working a second and third job.
Oh well.
The “work more” idea is a sprint. It’s not a life sentence. I’m actually quite a balanced guy. But I’m balanced because I’m not in debt.
Not for you. There’s nothing balanced about your situation. Everything is out of balance. You’re on a teeter-totter where your embarrassing load of debt has you leaning hard left, and the only way you can “be balanced” is to go way, way, way to the right. So far to the right that you’re about to fall off. So far to the right that your friends think you’ve gone crazy. Then you take a breath, and take another big step. To the right.
Eat basic foods. Eat less.
Cut your portion sizes. Eat basic food. Don’t eat anything fancy (like dairy or meat). Your grocery bill will drop. Your pant size will drop and, since you won’t be buying clothes any time soon, you may look a little silly with your pants cinched around your waist. That’s okay though. You’re fine to look silly to a bunch of people that are behaving completely irrationally.
You still aren’t questioning your basic assumptions
The majority of you have read the above and come up with all sorts of excuses. Some of you are just shaking your head.
“Jesse’s gone off his rocker.”
“I always thought Jesse was pretty level-headed. What happened?”
“I don’t like Jesse. He sounds really bossy, presumptuous, and mean.”
Maybe I’m not as level-headed as you thought (I’m not). I’m sure I’m being presumptuous, since I can’t individually work with each one of you. I do take issue to the “mean” comment. You know I really like you, and I want the best for you. Remember, we already went over that.
Your most basic assumption that you need to obliterate from your mind is that debt is an option.
Debt is not an option.
The fact that you have always behaved this way does not make it right. It’s tragic. It means you’ve missed out on an amazing ride of figuring out what really matters to you, and where your priorities really lie. You’ve never had to sacrifice to have those true priorities surface. Don’t keep missing out.
Debt has kept you down and out, a slave to interest (of course), but also a slave to middle-of-the-road thinking and suppressed creativity. The process you will undergo to obliterate every single shred of debt from your life will change you forever. You will be stronger, your thinking will be clearer, your perspective will be sharpened, and your resolve will be immovable.
This is my prescription for your priorities: Pay off your debt. Your debt is a crisis. You don’t have any other priorities. This is a sprint. Start running.
I am rooting for you.
Hey YNAB community! Jeremy here, and I’m kicking myself for not remembering to share this early in the summer, but it’s better late than never.
Years ago, I lived 1.6 miles away from where I worked. I rode my bike regularly, but realized the huge benefits of skipping the car altogether while in town. Less ga$, less repair$, fewer oil change$, and rippling calf muscles, to name a few. It was around the time An Inconvenient Truth came out on DVD, so I certainly wasn’t the only one thinking this way.
I figured out what was stopping me from becoming a full-on pedal pusher:
- Sun/heat/pitting out on the way to work (I live in central Washington State).
- Snow/cold/getting frostbite on the way to work (I live in central Washington State).
- I don’t feel like pedaling sometimes.
- It takes too long to get places. If I’m running late, I grab the car keys.
A friend of mine got a little scooter he drove around town. It was a cool, Vespa-looking type and got a bajillion miles per gallon. His gleaming white helmet reminded me of Wallace from A Close Shave. I was seriously tempted that direction until I priced the scooters out at $1,200 not including tabs, licensing, maintenance, etc.
I was randomly scanning eBay one day (I don’t recommend this) and finally found what I was looking for: an electric bike conversion kit. (That’s a non-affiliate link and the particular one I bought is no longer around. I can’t vouch for any of the kits listed because I haven’t used them.)
Since I already had a cheap Schwinn mountain bike, I got a front tire electric motor kit for $300 and was ready to rock. The main reason I got the front tire version (as opposed to the rear tire) was to easily shed the 40 lbs of weight it adds. I don’t always want it on, so being able to disconnect the wires and swap out the front tire is a huge plus. It took me about an hour to install and I was on the road.
It uses a thumb throttle, so I control how much power I’m using at all times. I’d love to tell you I don’t use it all the time for the sake of personal fitness, but keeping up with 20-25 mph traffic is too dang fun.
It broke through all the reasons I didn’t bike and paid for itself in three months from fuel I didn’t buy. In the summer, I pedal less and go fast enough to get a breeze going. In the winter, I pedal enough to get my body warmed. With both tires being powered, I cut through snow much easier too. Plus, my calves are rockin’.
The iPad app is here.
It’s a free companion to the desktop app.
How to get the new app on your iPhone or iPad
Do nothing. That’s the future we live in now, people. For many of you, the autoupdate feature of your phone means you already received the latest version. You’ll know if it’s the latest when you pull it up on your iPad.
If the autoupdate hasn’t kicked in (or you’ve turned it off), from any iOS device that has YNAB installed, go to the App Store and tap the Updates tab at the bottom. You should see YNAB available there, ready to rock.
What’s new for the iPad App
It’s a “universal” app, meaning it’s the same install for your iPhone and your iPad. Same app, different UI and features for iPhone or iPad. The iPhone received a lot of nice little fixes, but since we’ve all been waiting for the iPad features, let’s spend the rest of this fine blog post talking about those.
You can budget
I just budgeted the entire month of August on the iPad app. It’s super-fast, super-slick, and super-nerdy for you to like it so much, but you will.
Budget in landscape:
or portrait:
“But wait, won’t that just mean I have to tap a ton?”
No. You use those opposable thumbs, tap on a Budgeted cell for one of your categories and marvel as this fancy—let’s call it your Budget Drawer—pops up. Make heavy use of the NEXT button on the bottom-right.
Type in the amount, next, new amount, next…you get the idea.
OR
Use the Quick taps on the left
My favorite is to utilize the quick taps on the left of the Budget Drawer, so you aren’t typing anything. You’re presented with contextual information right at the moment of decision: “Let’s budget $500 for groceries again!”
“The fancy iPad app says here that our Average spent is $650…”
“Nah. This time we’re extra serious!”
So no, the iPad app won’t force you to address reality, but it will at least present you with relevant information, which you can use to your advantage (or not).
But do you see how powerful those opposable thumbs can be? You cycle through “budgeted last month” or “average spent for the last three months” with your left thumb, and the big fat NEXT button for your right thumb.
Bam. Bam. Bam. Tap. Tap. Smile. Tap. Bam. Smile. Smile. Tap.
Add transactions
Of course you can do this. But it’ll look really familiar, because it’ll be just like the workflow on your phone:
Travel back in time
Swipe right to move back a month. Swipe left to move forward again. With the added screen real estate of the iPad, this made a lot more sense.
See how weird this screenshot looks? That was me using my hands in a very contorted way to swipe halfway between months AND take a screenshot:
We’re just getting started…
Bounce around between multiple budgets
I run my rental properties through a separate budget in YNAB (here’s a podcast episode all about it, and I run YNAB (the business) through YNAB. To switch quickly between budgets, just tap the top-left little hamburger icon, get to the sidebar, tap the back arrow and you’re at our very cool Budget Picker screen that gives you thumbnail views of your budgets:
Quickly adjust budget amounts
What if I have some pesky overspending and I want to follow Rule Three and roll with the punches?
Tap the overspent balance amount and select which category should “fill up” the overspending:
Move money from one category to another
Or, if you’re sitting with what I call “vertical green,” where all of your categories are green and pretty, and you want to move some surplus to another category, just tap on the balance amount and you’ll be able to select the amount to move, and the destination category:
If you try and take too much…we make YNAB yell at you in red:
YNAB on your iPad does everything your phone does, but with a bigger screen AND all of the new features above. It has the Geosmart payees, the smart defaults, and the familiar, intuitive workflow you’ve grown to love.
Speaking of Love…
This took extraordinary effort from the iOS team, and I feel like they’ve really delivered a highly-polished product that is an absolute delight to use. Many of YNAB’s team members report that they prefer the iPad as their primary budgeting device.
Why is this a free app? Because 1) you already bought the desktop app, which the iPad app requires and 2) we love you.
AND because you’re going to help us spread the word about YNAB by leaving an awesome review on the App Store.
AND, I’d be remiss if I didn’t thank you already for all of the friends and family you’ve shared YNAB with up to this point. We are truly grateful for your business and confidence.
P.S. What about Android? We updated the UI to be much more Android-esque, and made many improvements. It’s in beta as I write this, and early beta feedback has been very, very positive. Android friends, you will like it. We have plenty more in store for Android.
My husband and I try to be pretty intentional about goal-setting and whatnot, so back in June we got to thinking about how we could cut expenses across the board and have a cheaper lifestyle overall. We were pretty much primed and ready for it because we were coming off of a pretty expensive year. We had an exchange student living with us, and we didn’t want her to have a totally un-American experience of frugality. Hehe.
But after looking at our budget, we were definitely ready for some drastic changes. We were ready to move from consumption to production. Don’t get me wrong, though, the pampered American existence can be pretty nice, but it’s also not the ultimate goal for which one should strive. Contentment and satisfaction in life can actually be found in working hard for things.
So back in June we began making a list of things that we could do to become more intentional about this goal of moving from a lifestyle that works against us to one that actually works for us. We opened up our trusty online task manager – Checkvist, and we started listing it all out.
- Move from consumption to production with food
- Go a month without Costco and co-op
- Create a schedule of different shopping methods
- Move from consumption to production with transportation
- Create an area for Michael to work at home
- Sell second car
- Move from consumption to production with energy
- Get an energy audit
- Create an action plan
- Move from consumption to production with fitness
- Revisit gym membership
- Work out with yoga at home
- Move from consumption to production with clothing
- Find our baseline lifestyle
We’ve already worked at these a ton, and some have even been completed (like yard work), which is why there are not too many sub-tasks underneath the main tasks, but there’s a lot more to do! Over the next few months I’d like to share with you about each of the main tasks and tell you what’s working for us and what’s not working, why we’re doing what we’re doing, and if we’d do anything differently. So think of this as your introduction to a new series I’ll be starting. I’m looking forward to sharing what I’m learning!
I love Rule Three. In fact, I’ve decided that it’s my favorite rule. I love that it removes the shame associated with overspending. You’re no longer punished for not being able to predict the future with 100% accuracy. I love that it gives you a way to answer the question, “Where’s the money going to come from?”. I love that it keeps you nimble and flexible.
The truth is it’s more important to stick to the process of budgeting – the give and take of priorities – than it is to stick to one set of numbers that made sense a week ago, but that doesn’t make sense today. When information changes, we change and adapt. Lee, one of our teachers, always says, “I never understood why people would tell me I couldn’t change the budget. I’m the one who typed those numbers in there in the first place.” Exactly.
Rule Three is a beautiful thing.
However, when I teach this concept in the “Getting Started with YNAB” course, inevitably a few people express concern that you could be too flexible, and then you could lose sight of goals and what you’re trying to accomplish. The concern is, if you’re changing things whenever you want, why budget in the first place? It’s a great question.
But Rule Three doesn’t mean changing everything. It’s about evaluating your priorities and adjusting as needed, if needed. During class as I’m walking through the software demonstration, we get to a point where I overspend on a gift. Here’s what the budget looks like at that point:
I ask the class to tell me where they’d take the money from. Then I pause and wait while attendees look over the screen.
Then the responses start coming in. There are usually several votes for clothing, some for restaurants, maybe one or two for car repairs.
Of course, it’s a little bit of a trick question, isn’t it? This screen just shows you a budget, it doesn’t show you the life and person behind the budget.
What if the cupboards are full and there’s 2 days left in the month? In that case taking the money from groceries might be ok. The same could be true for fuel.
Maybe this person is driving a newer car and just had a bunch of work done. So taking $17.50 out of car repairs would be fine. It really depends, and that’s always a worthwhile discussion. But here’s the interesting thing, and the big point I want to make.
No one has ever suggested taking the money from the car payment category. Not once. The responses are always thoughtful and responsible. I’ve always believed that people will make the right decisions for their lives, when they have the right information. But the key is having the right information. The budget gives you that. You can look everything over and decide what to do. You’ll know where you can shuffle funds from, and you’ll know what you shouldn’t touch.
I have certain categories in my budget that I’m more likely to shuffle from. I have some that I wouldn’t touch unless I absolutely had to. Everyone draws those lines in different places.
It’s important to the trust the budget when making decisions, but it’s equally important to trust yourself to make those decisions. Look things over, think about what’s going on financially, and you’ll make a good decision.
I love hearing YNAB success stories, so I recently put out a call out to the YNAB forum community asking for people to share theirs.
I received a whopping 40 responses. (You can read the thread here if you have a forum login and feel like being inspired.) Rather than share one or two in their entirety — which is all I’d have room for here — I decided to focus on what kept cropping up in one response after another: the ideas of control and awareness.
Look:
- “What YNAB has allowed us to do is see where we were and see where we can go. … YNAB has been amazing in helping ease the anxiety of being controlled by money. We can make decisions and know the exact impact it has on our finances.”
- YNAB “showed us the areas we could cut … without too much pain, just by being smarter. It made me aware of my finances and saved me fees. … We’re not wasting money on things that aren’t priorities now that we can see it add up. … I don’t stress about money anymore.”
- “Basically, YNAB created order where none previously existed. And with that organization came clarity.”
- “What was the biggest change? My mindset. I am determined to live within my budget. I check my categories before I spend a dime. Every purchase is now done with knowledge and consideration.”
- “Since using [YNAB], I am more aware of what I spend and have definitely cut back on frivolous coffees and so on, but I can still allocate a portion of money to eating out or buying a pair of shoes and not feel guilty.”
- “Knowing where our money is going and giving every dollar a job has completely changed our outlook.”
- “We’ve made some sacrifices, scaled back on some things, but what’s made the biggest difference is being able to use a ‘fine-tooth comb’ to look at all of our expenses and make informed decisions.”
- “Overall I feel a lot more in control as we have a broad view and an intimate view of how things are going. The control is the key thing for me to stay motivated and less anxious.”
- “YNAB speaks the truth in ways that my spreadsheets and Quicken never did for me.”
- YNAB “gave me clear sight as to what I was paying for and hence whether or not I really wanted to pay for it. … I feel like a driver and not a passenger in my financial life.”
- “My stress is much lower because for the first time I feel like I’m in control of my finances, not the other way around.”
- “I’m pretty good at the head-in-the-sand thing. … With YNAB, I traded in the stress of not knowing for the stress of knowing, and I much prefer the latter!”
Another recurring theme throughout the success stories thread had to do with YNAB and relationships (always a popular topic!). I’
ll share some of those quotes in my next post.
I was barely bigger than a Boise spud when our family began visiting my grandparent’s small farm in Idaho. With all that the wide open spaces and roaming cattle had to offer a little girl from the suburbs, nothing thrilled me more than getting a tap on the shoulder from Grandpa as he gestured towards a shiny, wire-covered coop out back.
“C’mon! Let’s go count the chickens.”
I’d squeal and grab his hand, pulling him as fast as my pint-sized strength would allow. He would unlock the gate and a sea of clucking, white puffballs would surround my feet. The joke was that I could never actually count all of the chickens. They seemed to multiply before my very eyes! I would chase them and laugh and dance in the rainfall of feathers…
…until the summer my Grandpa added a new rooster to the hen house.
Like clockwork, I ran to the fence doorway, ready to commiserate with my farmyard friends, but as soon as Grandpa opened the coop, Big Red had me in his sights. He jumped on my back and scratched with his claws. I can still feel his wings flapping furiously against my face. He sunk his talons into my fluorescent orange hoodie (apparently an infuriating color to roosters). It felt like forever until my Grandpa finally freed me from his grasp. I ran screaming from the pen with some pretty nasty gashes down my back, and a brand new—very real—fear of birds.
I swore off every winged creature that day and never looked back. But for all my talk of leaving the birds behind, I realized that I was still metaphorically playing that childhood game by relying on one of the oldest clichés on the planet: Counting my chickens before they hatched.
In the recent past, I was an expert at liquidating money before it even hit our bank account. Future bonuses, side jobs or tax returns were long spent before they even reached my hot, little hand. They were my golden parachute anytime I overextended the budget. I’d smile sheepishly each time my husband would be going over our accounts and point out an extravagant receipt,
“It’s okay! We’ll just use (fill in the blank) to cover it.”
But there are only so many “fill in the blanks” to go around before you find yourself in the Hen House of Debt, struggling to keep track of the slippery chickens you were so sure you could count on.
When the Christmas budget was bursting at the seams, I’d earmark incoming monetary Christmas gifts to cover it- until I remembered that the Christmas money we knew we’d be receiving was already slotted to cover the new rug we purchased to match our freshly painted living room. From there, I’d look even further to a bonus coming four months down the road. It would easily clean up the Christmas mess, but would leave us with next to nothing for summer vacation. Maybe we could use the following quarterly bonus for that? But wait—weren’t we using that money as savings for next Christmas…?
Round and round we went.
Luckily we escaped our “fowl” habits (I had to!) with only a few scratches. Our scuffle with the consequences of relying on money we didn’t yet have has instilled a healthy fear in us that rivals the impact of Big Red’s talons.
As we nurse our wounds from mistakes of the past, YNAB is helping us stay accountable to our current financial status. While there is freedom in how we disperse our money within the categories, we are pulled back into reality every time the numbers don’t add up. (You’ve all seen the bright red, “over budget” warning up in the corner of your YNAB spreadsheet, right? Terrifying!)
It’s not easy patching up what remains of our careless choices and poor planning, but the alternative is far worse. So we wait and save and plan for a future where all the chickens are properly hatched……and at least 500 feet away from me.
Tell me, my fine, feathered YNAB friends: Are you “playing chicken” with your budget? Do you find yourself scrambling to escape the Hen House of Debt? Have you scuffled with spending money you didn’t yet have? How did YNAB free you from the financial albatross around your neck? (Ten bonus points if you use a bird pun!)
Mostly because I’m only at step two of his seven-step path to getting a grip on my money, while everyone around me seems to be sitting fat and pretty at number seven. I know, I know, it’s an illusion. But still.
Okay, let’s back up. I (Alex) don’t really even know this Dave Ramsey fellow. I only heard about him because I’ve read every comment that my village mates leave on my YNAB posts, so this evening I took a little trip across Google to find out the score. I didn’t go too far into the website, because, of course, YNAB is the budget to beat all budgets, and I really only wanted to know what else this fellow has to offer besides budgeting advice.
I stumbled upon his seven-steps page, and realized with fresh horror that I am hopelessly far behind in the race to financial independence. I have my $1000 for emergencies (if you count the money I’ve saved this past year for travelling abroad with my sons, or the $2000 I’ve saved for my “retirement”, haha, let’s just have a moment to giggle about that one). I haven’t got any debt, but that’s only through the grace of a banking system that lets debt-riddled people throw up their hands, press RESET on the Great Green Debt Machine, and start over at GO.
At any rate, I’ve got the first two steps covered. As for having saved three to six months in expenses? For someone who has to choose between a package of paper towels or another squeeze at the gas pump, that seems as preposterous as me as strapping on my mukluks and flying to the moon. I’ll never get there.
Which means I’ll never get to the fourth step, either: investing 15% of my income for growth. And as far as step five goes, while I am currently saving for Thing Two’s university costs (Thing One, as an amputee, gets his first degree paid for by the War Amps, blessed be), step six makes me want to laugh and cry at the same time. Pay off my house early? Dude, I don’t even have a house. I live in a one-bedroom apartment in which I squash silverfish the size of gummy worms on a thrice-daily basis.
Sometimes it all just sucks.
But then. Then. Then . . . I take a deep breath, and I look back to where I was a year ago. Two years ago. Three, four and five years ago. And I realize that – thanks entirely, and only, to YNAB – I’m on my way somewhere. I have a travel fund. I have a retirement fund. I have a fund – many, actually – to cover expenses that I know are going to pop up when I least expect them. I have no clue where I’m going to end up, and unless I sell a hella good novel or scratch the right winning ticket, it sure as heck won’t in my own home, but I’ll be in a better place.
And damn it, I’ll have great eyelashes.
So I have what you call “the travel bug”. I love it. I don’t see it as extravagant or indulgent because it’s a huge value of mine since it has changed me so much by adding such value to my life and the life of my family. But let me tell ya…I’m not staying at fancy resorts, either.
We’ve been doing the Airbnb* thing, and I have to tell you…it’s changing me.
Airbnb is a website that allows you find people that are renting out all sorts of spaces – extra rooms, entire homes, and unique accommodations from glamping to castles. You can then safely book through Airbnb, allowing you also to vet the owners.
The other side to Airbnb is that you can list your own space on there, which is what we’ve done. So when we book our house out for a week, we go on a vacation to a place that is cheaper than our house. See what we did there? Simple math. ;)
This, however, is not for everyone. If you want to stay somewhere that is totally predictable and simple and not have to interact with others, then this is not for you. On the other hand, if you want to live like a local, meet interesting people, and have unique experiences, then try it!
But the mind game that it’s playing on me is phenomenal in a good way. I’m learning things like contentment and relying on my skills and wits and not my stuff while staying at places that are cheaper than my house.
And a funny thing happened yesterday. The sweet girl that’s staying at my house with her family was kind enough to friend me on Instagram to help with trust and whatnot. So yesterday I saw two pictures. The first was of everyone in my pool. And the second was of her and her 5 year old nephew laying on my couch. I’ll be honest. It was weird. That was MY stuff. MY couch. MY throw pillow. MY pool. MY water.
It made me want to never rent my house out again. Then the interesting awakening happened. It’s just stuff.
I’m fortunate enough to have a house to rent out for a decent rate. I’m fortunate enough to have a good decorating eye so that my house looks pretty good, if I do say so. And my husband is so fortunate to have a flexible job so that we can spend a week on an organic gogi berry farm in New Mexico for a week. It’s just stuff.
Why do we have such a hard time sharing? Why is acquisition more important to us than community and generosity? (You could argue that true generosity doesn’t charge, but I digress.)
So why am I sharing all of this on a budgeting blog? I think it’s all linked, truly. Budgeting becomes so much easier when you don’t have an attachment to stuff. Your options are wide open when you’re not a slave to things. Contentment and goals are achievable.
Whether you’re just starting out and facing the harsh reality of needing to cut some categories in order for everything to balance, or you’ve been YNABing for five years like me and want to find greater contentment and make long-term goals, it’s much easier when you’re not attached to stuff.
Welcome back, Grasshoppa! I, Jeremy, have awaited your return. If you missed it, last time we covered some credit score basics and provided a path to getting a free copy of your credit report.
If you are now or will ever be a mortgage shopper, the following will serve you well. I have seen many a mortgage application die at the hands of the cunning FICO, so let us not delay.
[*POOF!* I disappear from my seated lotus position in a bluish cloud of smoke and reappear beside a living, breathing credit report. “Gah!” you gasp.]
Be not afraid, Grasshoppa! By learning the furious five parts of its anatomy we can uncover corresponding attacks. The percentage shown is the weight each part carries in your total score.
Part 1 – Payment History (35%). Late payments hurt. A lot. For seven years. (!) Lenders will check your credit before approval and sometimes again before closing. If you have missed any payments during that time, FICO could deal a fatal wound to your mortgage application and slay your homeownership dreams on the spot.
- Myth 1 (thanks to a helpful comment by MrMcLargeHuge): “An unblemished credit report is suspicious. Lenders want to see some kind of mistake in your credit history.”
- Combat tactic: Noble lenders look for healthy borrowers while predators look for weaknesses. Colorful ad slogans like “Bad credit? No problem!” should warn you like the bright bands of the deadly coral snake.
- Myth 2: “Never pay off your balance completely.”
- Combat tactic: Know which credit lines you have open (by seeing your report) and make sure to keep them paid. If you can only pay the minimum, do it. DO pay your balance off each month; however, DO NOT necessarily close the account. We’ll cover that later.
Part 2 – How Much You Owe (30%). If you have a car loan, student loans, and three credit cards maxed out, this indicates a risk that you will be unable to repay your obligations should something unexpected happen (illness, loss of job, an “amazing” shoe sale, etc). Even if you are keeping up on your payments, FICO will ding you for maxing out.
Also, if you have more credit lines available than you could ever hope to repay, it negatively affects your score.
- Myth 1: “As long as you pay them off every month, you can max out your credit cards.”
- Combat tactic: Keep your balances at or below 25% (some even say 10%). For example, if you have a $1,000 credit limit, only spend $250 before it’s paid off.
- Myth 2: “If I have a lot of credit available to me, it shows lenders I can handle the responsibility.”
- Combat tactic: If you earn $4,000/month, it will make FICO nervous if you can dive into $20,000 of credit card debt at 22% interest rate in a moment’s notice. Limit your credit cards to three.
Which three? Ahhh, you show wisdom for one so good looking, Grasshoppa. Don’t go creating credit card confetti with your katana just yet. Next time we meet, we will explore the three remaining parts of credit score anatomy and bring the whole creature into view.
*You have, no doubt, heard many myths and developed your own tactics. I cordially invite your questions and comments. When you do, we all get stronger.
Recently, my fellow YNAB blogger Alex wrote a great post about how her budget allows her to afford certain indulgences. She’s right: Even if you’re living on a shoestring, you can consciously plan for the special things that bring a little sunshine into your life, without letting them drain your bank account.
But the budget isn’t just for people who want to keep their luxury spending from getting out of hand; it can also liberate those of us who tend to avoid ever spending money on ourselves, even for necessities. (Surely I’m not the only one?)
For example: I’ve been hiking with our new dog a lot this summer. And because I am a cheapskate, I’ve been doing it wearing my only suitable outdoor shoes, a pair of 3- or 4-year-old sneakers that a couple of years ago went from being my primary indoor workout shoes to my beat-up yard work shoes. These were already nearly useless for hiking when I started in May. As of last week, my toes were poking through the mesh and the smooth-worn soles were flapping with every step. (I tripped a lot.)
But, thanks to the budget, I had been saving up for a pair of actual hiking shoes. They cost $100 ($100!) — knocked down to $80 with a coupon code. That’s pretty cheap as far as light trail shoes go, but for me it might as well be a million dollars.
Before the budget, I never would have been able to justify to myself the cost of these shoes. If I had bought them, I would have felt a guilty pang in my stomach every time I put them on. I shouldn’t have indulged myself in something I could have lived without, I would have told myself, because the money could have been better spent elsewhere.
YNAB has changed that kind of thinking for me.
Now that I have a dedicated clothing/shoe category, I can plan for more expensive purchases — even for things I don’t strictly need. I always believed I didn’t deserve Nice Things, because they cost money. I felt kind of noble going without Nice Things because that proved that I was careful with money (though somehow I was always broke).
Now, thanks to the budget, I am able to spend more on myself — plus I have more money for everything else. Go figure.
I’ll probably never get over my innate tightwad tendencies. Heck, before YNAB it took me six months to pull the $7 trigger on a new manual can opener (well, I already had one that kind of worked even though it hurt my hand, and who needs two can openers?).
But my point is this: While most people tend to think of a budget as a way to control their spending, I use mine to help me be less frugal.
When I head out on my morning hike in a few minutes, I’ll be wearing my new Merrells. And thanks to YNAB, I’ll be smiling all the way.…
I had told you about how we just got back from Europe, and Sarah in the comments had asked: “How did you track your expenses while on vacation?”
So my husband and I got to talking about it, and, using much of what he commented back to Sarah with, we came up with this post.
While it’s a challenge, keeping to a budget on vacation requires a LOT of preparation and foresight. We’ve now gone to Europe three times on a YNAB budget, and at this very moment I’m sitting in a cabin in Taos on another vacation. (I plan on telling you next how I manage my travel addiction.) So hopefully this will help in Europe or wherever your heart takes you next.
1. Before you purchase anything, know your conversion rate!
It’s not board game money, as much as you feel like it is. It’s real money; trust me. If you don’t want to do the math, then get an app for your phone that can do it for you.
2. Pay for as much as you can while you’re still home.
If you plan on traveling by train or bus once you’re there, then buy the tickets as early as they’ll let you. Many of the companies mail you the ticket, and others simply email them. But trust me, don’t wait until the last minute. Also, pay for any shows or tours that you plan on going to in advance. This helps a ton.
3. Check the admission prices on the websites of everything else you plan on attending and public transportation passes, and budget for all of those.
I made the mistake once of just estimating what I thought the museum admission prices would be, and it ended up being about 2-3 times more. If you spend a couple of days at home on your itinerary, you won’t regret it. You don’t even have to nail down the itinerary day by day, just list the things that you definitely want to do, and then plan then play it by ear, planning around the events that are set in stone. Also, do a search on the public transportation of the cities that you’re going to so that you can maximize on the passes that you’ll surely want to buy.
4. Give yourself a budget for gifts and souvenirs, and get cash out for those.
Once you have your cash, then use an envelope system for this. Keep you cash safe in a money belt or traveler’s wallet. This will also keep you from making regrettable impulse buys.
5. Give yourself a per diem for food and use cash.
Again, you’ll want to use the envelope system for this, keeping the cash safe in a traveler’s wallet or money belt. Also, be sure to do an internet search for what is customary tipping procedure. You may end up saving a little dough that way.
6. Make sure you and whomever you’re traveling with all have a chip in your credit/debit card.
If you don’t already have one, get this sent to you a few months in advance of your trip. Many vendors, especially in Eastern Europe, don’t take a credit card with a swipe. This prevented me from renting a bike one day because each of us needed our own card to register, so it was a major bummer. See - the US will be doing this very soon. (Tip: Call your credit card company and bank before you leave to let them know you’ll be overseas.)
7. Give yourself (if possible) a few hundred dollars of buffer so that mistakes or minor emergencies aren’t a big deal. You’re on vacation, and it’s awesome, and you don’t know when you’ll get to come back, and that this doesn’t happen every day, so you don’t want to be stressing out about money! A buffer will ease the stress as long as you don’t rely on it too heavily.
8. Download your transactions every night.
Any credit card/debit transactions will show up almost immediately as converted into dollars, so if you’re doing a best guess conversion during the day, at night you can look at your pending transactions and you’ll see those transactions in dollars. Make sure you use a credit card that doesn’t charge you a huge conversion fee (we use the Barclay Travel card because of its rewards and for this reason too).
9. Use Yelp or Trip Advisor to find places to eat out of the tourist areas to save money.
If you do this beforehand, it’s great. But if you don’t, you might consider getting data on your phone to make things easier (you can buy a prepaid SIM card while you’re there that’s pretty reasonable), but we managed to find enough places with free Wi-Fi to get the information we needed (the Trip Advisor app was my favorite). A lot of times you can end up going a few blocks outside of the normal tourist trap and get something more authentic and much cheaper. So I’d recommend looking at your itinerary now and finding options for where you’re going to be. I think the most expensive aspect of our trip was the panic-and-eat-out-at-a-tourist-trap scenario.
And as far as the nuts-and-bolts of your categories and whatnot, that’s up to you. We just had one category with a simple list of the budget in the notes. Some of you will probably want a master category with sub-categories for each thing (food, souvenirs, etc.), and that’s fine, too. My personal taste is to keep it a little simpler.
I hope this helps! Happy travels!?
Assemble, budgeting bushido! I, Jeremy, am sporting my pointy hat and fu manchu in preparation to bestow upon you, Grasshoppa, some basic techniques in the ancient art of FICO defense.
[Wooden flute plays while I balance atop my bamboo cane.]
The terms “FICO” and “credit score” are essentially the same creature and those terms are interchanged by bankers, since FICO is the company many use to get your credit report. Despite your or my feelings about it, your FICO…is. Though shrouded in mystery, since it can work either for or against you, it is wise to study its ways. Let us begin…
[Gong!]
If it is a mortgage you seek, you must face your FICO since banks employ it heavily in guiding their lending decisions. If married, you will not face it alone (though you may wish you could if your spouse has been tiger-style with their borrowing, yet sloth-style in paying back.)
FICO has no feelings about your borrowing history. FICO does not hear sincere reasons, nor poor excuses. It peers unbiasedly through you and gives you a score ranging from 300 to 850, with higher being better. Like middle-aged man’s abs, building your score up is long and difficult, while sagging is quick and easy.
FICO sees only darkness. FICO will not mention the time you ate only wild beetles, drank dew droplets, and lived without electricity to pay off your 30 year mortgage in seven grueling months. FICO will, however, point out the time you spent forty days meditating on Youtube cat videos and were late on paying your Target card…five years ago.
Let not your anger control you, Grasshoppa. Neither be seized by worry if you are now trying to remember every little thing you may have ever done wrong. The best course of action is to bring your FICO into the light. Journey to where you can request your credit report once per year without charge.
[There are many sites (even some banks/credit cards) which offer this service for a fee. Unless you want to monitor your credit score more than annually, this is letting your money take the way of the cherry blossom in the wind.]
Like waxing car and painting fence, checking your score is more than going through the motions since they are not always accurate. Be not intimidated! FICO will back down when confronted with good evidence of payment sent on time. Even a sincere call to the company reporting a genuine late payment may suffice. “Hello, friendly cell phone carrier person. I see I had a late payment last year. I was unconscious due to a nunchaku-induced head injury, but I paid as soon as I woke up and was never late otherwise. What might I do to clear that up?” It might do the trick, if it is the honest truth.
Many an unwary mortgage shoppa has been bitten in the backside by FICO and thus denied in their quest. When next we meet, I will teach of the anatomy of the FICO, and how to avoid those attacks. I will offer you further techniques, but they will be more valuable to you once you have seen your credit report with your own eyes.
That will be all for now, enlightened one. Now go and find what it is you seek. | http://reader.feedshow.com/show_items-feed=44694b72b74863ca628aeae16fece6ba | CC-MAIN-2015-11 | refinedweb | 15,424 | 81.83 |
Integrating Java objects and XML data
Document options requiring JavaScript are not displayed
Sample code
Help us improve this content
Level: Intermediate
Brett McLaughlin (brett@newInstance.com), Author and Editor, O'Reilly Media Inc.
01 Aug 2002
Quick is an open source data binding framework with an emphasis on runtime transformations. This instructional article shows you how to use this framework to quickly and painlessly turn your Java data into XML documents, without the class generation semantics required by other data binding frameworks. Extensive code samples are included.
XML has certainly taken the programming world by storm over the last several years. However, the complexity of XML applications, which started out high, has not diminished much in the recent past. Developers still have to spend weeks, if not months, learning the complicated semantics of XML, as well as APIs to manipulate XML, such as SAX and DOM. However, in just the last six to 12 months a new class of XML API, called Quick, has become increasingly popular as a simpler alternative to these more complex APIs.
Data binding allows you to directly map between the Java objects and XML, without having to deal with XML attributes and elements. Additionally, it allows Java developers to work with XML without having to spend hours boning up on XML specifications. Quick is one such data binding API -- a project that's geared toward business use in Java applications.
Installation and setup
Before diving into the details of using Quick, you'll need to download and install the project.
Visit Quick's Web site (see Resources ), and select Download.
You can then download a .zip file for the project; as of this writing, the latest version available was
Quick 4.3.1, accessed through the Quick4.3.1.zip file.
Expand the .zip file to create the Quick distribution. The directory hierarchy is shown in
Listing 1:
Quick4
|
+-- JARs
+-- BATs
+-- Doc
+-- dtdParserSrc
+-- DTDs
+-- examples
+-- JARs
+-- QDMLs
+-- QJMLs
+-- quickSrc
+-- UTILs
+-- utilSrc
+-- XSLs
The two directories of most interest to developers are Quick4/BATs, which should be
added to your PATH environment variable, and Quick4/JARs, which contains jar files that should be added to your CLASSPATH environment variable. Specifically, you need to add dtdparser115.jar, Quick4rt.jar, and Quick4util.jar to your current class path. You'll also need a SAX parser implementation, such as the Apache project's Xerces-J (see Resources ). Add xerces.jar, or your own favorite parser, to your class path as well.
PATH
CLASSPATH
dtdparser115.jar
Quick4rt.jar
Quick4util.jar
Java classes and XML documents
Data binding centers around XML and Java, so let's take a look at how these XML documents and Java
classes relate to Quick. To illustrate these points, I look at several simple Java classes,
and a simple XML document.
A simple XML document
First, Listing 2 shows a small XML document. I've kept things simple so that you
don't miss out on the concepts by wading through 10 or 15 Java classes.
<?xml version="1.0"?>
<!DOCTYPE person SYSTEM "person.dtd">
<person>
<firstName>Gary</firstName>
<lastName>Greathouse</lastName>
<address type="home">
<street>10012 Townhouse Drive</street>
<city>Waco</city>
<state>TX</state>
<zipCode>76713</zipCode>
</address>
<phoneNumber>
<type>home</type>
<number>2545550287</number>
</phoneNumber>
<phoneNumber>
<type>work</type>
<number>2545556127</number>
</phoneNumber>
</person>
Listing 2, while not a prime example of how to write XML, brings out several Quick points that are worth noting. You'll also want to take a look at the DTD for the document, shown in Listing 3.
<!ELEMENT person (firstName, lastName, address+, phoneNumber+)>
<!ELEMENT firstName (#PCDATA)>
<!ELEMENT lastName (#PCDATA)>
<!ELEMENT address (street, city, state, zipCode)>
<!ATTLIST address
type (home | work | other) "home"
>
<!ELEMENT street (#PCDATA)>
<!ELEMENT city (#PCDATA)>
<!ELEMENT state (#PCDATA)>
<!ELEMENT zipCode (#PCDATA)>
<!ELEMENT phoneNumber (type, number)>
<!ELEMENT type (#PCDATA)>
<!ELEMENT number (#PCDATA)>
The Java classes
In many data binding implementations, you would now need to generate Java source files to
represent this type of XML document. That sort of implementation assumes, generally falsely, that you
lack the Java business objects you want to use. More commonly, you have a set of Java classes that you want to
begin persisting to XML. This use case is exactly what Quick helps to solve. In light of that, then,
Listings 4, 5, and 6 are three Java source files that represent a person.
import java.util.LinkedList;
import java.util.List;
public class Person {
/** The first name of the person */
private String firstName;
/** The last name of the person */
private String lastName;
/** The addresses of the person */
private List addressList;
/** The phone numbers of the person */
private List phoneNumberList;
public Person() {
addressList = new LinkedList();
phoneNumberList = new LinkedList();
}
public Person(String firstName, String lastName,
List addressList, List phoneNumberList) {
this.firstName = firstName;
this.lastName = lastName;
this.addressList = addressList;
this.phoneNumberList = phoneNumberList;
}
public String getFirstName() {
return firstName;
}
public void setFirstName(String firstName) {
this.firstName = firstName;
}
public String getLastName() {
return lastName;
}
public void setLastName(String lastName) {
this.lastName = lastName;
}
public List getAddressList() {
return addressList;
}
public void setAddressList(List addressList) {
this.addressList = addressList;
}
public void addAddress(Address address) {
addressList.add(address);
}
public List getPhoneNumberList() {
return phoneNumberList;
}
public void setPhoneNumberList(List phoneNumberList) {
this.phoneNumberList = phoneNumberList;
}
public void addPhoneNumber(PhoneNumber phoneNumber) {
phoneNumberList.add(phoneNumber);
}
}
public class Address {
/** The type of address */
private String type;
/** The street address */
private String street;
/** The city */
private String city;
/** The state */
private String state;
/** The zip code */
private String zipCode;
public Address() { }
public Address(String type, String street, String city,
String state, String zipCode) {
this.type = type;
this.street = street;
this.city = city;
this.state = state;
this.zipCode = zipCode;
}
public String getType() {
return type;
}
public void setType(String type) {
this.type =;
}
}
public class PhoneNumber {
/** The number itself */
private String number;
/** The type of number */
private String type;
public PhoneNumber() { }
public PhoneNumber(String type, String number) {
this.type = type;
this.number = number;
}
public String getNumber() {
return number;
}
public void setNumber(String number) {
this.number = number;
}
public String getType() {
return type;
}
public void setType(String type) {
this.type = type;
}
}
These classes should be fairly self-explanatory. They are the analogs of the XML structure you
saw back in Listing 2. Presumably, you will want to convert XML documents to instances of these Java objects, and back again. Quick makes these conversions a piece of cake.
Initialization
Once you have your XML document (or documents), your DTD, and your Java classes, you need
to tell the Quick framework how to map data from one format to the other. This is a multi-step
process:
I'll show you how to accomplish each step in turn in the next several sections. While this may seem like a lot of steps to get Quick going, you should note that each of these only needs to occur one time so this is an up-front cost for using the Quick framework. Once you have taken care of these steps, you can use Quick in your applications as many times as you want, without this process overhead.
Creating QDML
The first thing you need to do in getting Quick ready to roll is create a QDML file. QDML (Quick Document Markup Language) is essentially Quick's version of a DTD, and defines
the XML document's structure for the Quick framework. At this point, you aren't providing any mapping
information; you're just defining your document in a format that Quick can understand. Of course, this
is accomplished through a Quick tool, which makes life easier for developers.
First, make sure that your class path is set up as indicated in Installation and setup. You can then use the
cfgDtd2Qdml script, located in the Quick distribution's
BATs directory. For Windows users, you would use cfgDtd2Qdml.bat; for Unix users,
use cfgDtd2Qdml.sh. (The examples in this article are all in Unix format, but you can easily
accomplish the tasks on Windows.)
cfgDtd2Qdml
Issue the following command:
sh cfgDtd2Qdml.sh -in=person.dtd -out=person.qdml
You won't see much exciting in terms of output, but you should get a new file,
called person.qdml. With your DTD in a format that's more easily understood by
Quick, you're almost ready to move to the next step.
Before moving on, you need to let Quick (and the QDML file it uses) know the root element of your XML document. In this case, it's the person element. To do this, use another Quick utility:
person
sh cfgSetQdmlRoot.sh -in=person.qdml -out=person.qdml -root=person
Creating QJML
You now need to create a QJML file for Quick to use. QJML is the
Quick Java Markup Language, and is the equivalent of a binding schema, for
those of you familiar with JAXB or other data binding implementations. Quick uses QJML to convert the constructs in your XML file to their Java counterparts, and vice versa.
It's possible to create a QJML file from scratch; however, Quick supplies a
tool for generating one automatically, and you generally only need to make minimal
changes. For that reason, this is the recommended approach. Use either the
cfgQdml2Qjml.bat or cfgQdml2Qjml.sh script to accomplish this,
supplying your newly-generated QDML file as input (Quick reads the QDML to determine
the constructs to map; see why you needed that file now?):
sh cfgQdml2Qjml.sh -in=person.qdml -out=person.qjml
You should now have a new file, person.qjml.
As I've said, you will need to make some changes to this file, since
Quick makes some problematic assumptions about Java variable names. Open
up your file, and add the bold coding so it resembles Listing 7.
<?xml version="1.0" encoding="ISO-8859-1" standalone="no"?>
<!DOCTYPE qjml SYSTEM "classpath:///qjml.dtd">
<qjml root="person">
<bean tag="person">
<targetClass>Person</targetClass>
<elements>
<item coin="firstName">
<property name="firstName"/>
</item>
<item coin="lastName">
<property name="lastName"/>
</item>
<item coin="address" repeating="True">
<property kind="list" name="addressList"/>
</item>
<item coin="phoneNumber" repeating="True">
<property kind="list" name="phoneNumberList"/>
</item>
</elements>
</bean>
<text tag="firstName"/>
<text tag="lastName"/>
<bean tag="address">
<targetClass>Address</targetClass>
<attributes>
<item coin="address.type" optional="True" value="home">
<property initializer="home" name="type"/>
</item>
</attributes>
<elements>
<item coin="street">
<property name="street"/>
</item>
<item coin="city">
<property name="city"/>
</item>
<item coin="state">
<property name="state"/>
</item>
<item coin="zipCode">
<property name="zipCode"/>
</item>
</elements>
</bean>
<text label="address.type" tag="type">
<enum value="home"/>
<enum value="work"/>
<enum value="other"/>
</text>
<text tag="street"/>
<text tag="city"/>
<text tag="state"/>
<text tag="zipCode"/>
<bean tag="phoneNumber">
<targetClass>PhoneNumber</targetClass>
<elements>
<item coin="type">
<property name="type"/>
</item>
<item coin="number">
<property name="number"/>
</item>
</elements>
</bean>
<text tag="type"/>
<text tag="number"/>
</qjml>
These changes should be pretty obvious when you compare the original file
to the one shown in Listing 7; they all involve mapping the XML property to the
correct Java property name, as defined in the source code (see Listings 4, 5, and 6 for a refresher on these).
Creating QIML
You're almost done with the setup work; you still need to perform one final step. Quick performs much better with a QIML file -- the Quick Internal Markup Language. As the name implies, Quick uses this internal format to avoid runtime compilation, conversion, and processing of QJML files. Perform the following simple step to put your QJML into a format that Quick can use more handily:
sh cfgQjml2Qiml.sh -in=person.qjml -out=person.qiml
Finally, to improve performance even further, turn this text-based format into
a Java source file that you can compile (binary objects are always better to work
with than non-compiled textual file formats):
sh cfgQiml2Java.sh -in=person.qiml -out=PersonSchema.java
-class=PersonSchema -key=person.qjml
You really don't need to worry too much about what is happening here; it's all
Quick-specific, and the utilities take care of things. You should now compile
this created source (and the rest of your Java source if you haven't already),
and make sure everything is added to your classpath. With all these steps
complete, you're ready to do some data binding.
Data binding
With all the setup work done, it is now trivial to use the Quick framework to
convert your XML document into Java bytecode and then back again. I'm going to keep
the example for this simple; once you've got the basics down,
the uses of data binding become limitless, and rather than assume I know your
use-case, I'll just let you use the information you need from the sample code and take
off! Take a look at Listing 8, and then I'll explain the main concepts.
import java.util.Iterator;
// Quick imports
import com.jxml.quick.QDoc;
import com.jxml.quick.Quick;
public class PersonTest {
public static void main(String[] args) {
try {
if (args.length != 2) {
System.err.println("Usage: java PersonTest [input file] [output file]");
return;
}
// Initialize Quick
QDoc schema = PersonSchema.createSchema();
// Convert input XML to Java
QDoc doc = Quick.parse(schema, args[0]);
// Get the result
Person person = (Person)Quick.getRoot(doc);
// Output block
System.out.println(" --------------- Person ------------------ ");
System.out.println(" First Name: " + person.getFirstName());
System.out.println(" Last Name : " + person.getLastName());
for (Iterator i = person.getAddressList().iterator(); i.hasNext(); ) {
Address address = (Address)i.next();
System.out.println(" Address (" + address.getType() + "):");
System.out.println(" " + address.getStreet());
System.out.println(" " + address.getCity() + ", " + address.getState() +
" " + address.getZipCode());
}
for (Iterator i = person.getPhoneNumberList().iterator(); i.hasNext(); ) {
PhoneNumber number = (PhoneNumber)i.next();
System.out.println(" Phone Number (" + number.getType() + "):");
System.out.println(" " + number.getNumber());
}
// Add a new address
Address address =
new Address("work", "357 West Magnolia Lane", "Waco", "TX", "76710");
person.getAddressList().add(address);
// Change a phone number
PhoneNumber number = (PhoneNumber)person.getPhoneNumberList().get(1);
number.setNumber("2547176547");
// Write out modified XML
Quick.express(doc, args[1]);
} catch (Exception e) {
e.printStackTrace();
}
}
}
The portion of this code that deals with Quick turns out to be only about
four lines! You first must load the QIML (compiled as Java bytecode) so
Quick knows which XML elements and attributes become which Java classes
and properties. Do this through the createSchema()
method (which is static) of the generated PersonSchema
class. Once that schema is loaded, it and the input file are handed off to the
Quick.parse() method, which does the conversion. From
there, you simply have to grab the root element of the resulting
QDoc and work with it as you would any other Java
object. Quick enters the picture again in the last bit of the code where the express() method outputs a modified version of the XML. Pretty easy, isn't it?
createSchema()
PersonSchema
Quick.parse()
QDoc
express()
Note: Do not supply the same filename for both input and
output, as you will overwrite your original data, and can cause all sorts
of unexpected things to happen.
Conclusion
Ideally, you've seen some really intriguing functionality here. First, data binding
in general can greatly simplify programming tasks, especially when you need to
persist data to some type of static storage (like a file, as shown in this article).
Additionally, Quick provides a fast, simple way to achieve this in your own projects.
Take a spin around the block with Quick, and see how you like it. Enjoy, and I'll
see you online!
Download
Resources
About the author to the EJBoss project, an open source EJB application server, and Cocoon, an open source XML Web-publishing engine.
Rate this page
Please take a moment to complete this form to help us better serve you.
Did the information help you to achieve your goal?
Please provide us with comments to help improve this page:
How useful is the information? | http://www.ibm.com/developerworks/library/x-quick/index.html | crawl-001 | refinedweb | 2,617 | 56.66 |
I'm writing an action to set one field equal to the number of instances attached to it via a foreign key (see below)
Models.py
class competition(models): competition_name = models.CharField(max_length = 50) class people(models): competition = models.ForeignKey(competition) fullname = models.CharField(max_length = 100)
Admin.py
def people_in_competition: for X in queryset: X.number_of_people = X.count(X.set_all) #(I want the number of people in this competition in this part) X.save()
Of course this gives me an error as I cant seem to use _set_all in admin.py, does it only work in templates? What would be the best way to figure that number out? | http://www.howtobuildsoftware.com/index.php/how-do/Ey0/python-django-django-admin-how-can-i-count-the-number-of-instances-per-a-queryset-django | CC-MAIN-2019-09 | refinedweb | 108 | 62.24 |
5.2. More About Using Modules¶
Before we move on to exploring other modules, we should say a bit more about what modules are and how we typically use them. One of the most important things to realize about modules is the fact that they are data objects, just like any other data in Python. Module objects simply contain other Python elements.
The first thing we need to do when we wish to use a module is perform an
import. In the example above, the statement
import turtle creates a new name,
turtle, and makes it refer to a module object. This looks very much like
the reference diagrams we saw earlier for simple variables.
In order to use something contained in a module, we use the dot notation, providing the module name and the specific item joined together with a “dot”. For example, to use the
Turtle class, we say
turtle.Turtle. You should read
this as: “In the module turtle, access the Python element called Turtle”.
We will now turn our attention to a few other modules that you might find useful. | http://interactivepython.org/runestone/static/thinkcspy/PythonModules/MoreAboutUsingModules.html | CC-MAIN-2017-22 | refinedweb | 184 | 69.92 |
[Please keep the bug address on the CC list, so that this whole discussion gets archived.] > Date: Mon, 26 Jan 2015 08:16:38 +0900 > From: Mark Laws <address@hidden> > > >> +#define W32_EMACS_SERVER_GUID "{0B8E5DCB-D7CF-4423-A9F1-2F6927F0D318}" > > > > Where did this GUID come from? > > > I generated it myself. Is that safe? Do we care whether this GUID is globally unique? Why exactly do we need it to begin with? > Moved to ms-w32.h and removed changes relating to the header. Thanks. > >> +#else > >> +bool w32_is_daemon; > >> +bool w32_daemon_is_initialized; > >> +static HANDLE w32_daemon_event; > >> +#endif > > > > Why do we need anything beyond the event handle? Can't it serve > > double duty as a flag as well? We could use INVALID_HANDLE_VALUE > > and/or NULL as distinct values with specific meanings. > > I've reworked it to use only w32_daemon_event. One minor issue: to > avoid including any Windows-related headers in lisp.h, the extern > declaration of w32_daemon_event there is void* instead of HANDLE. I > doubt the type of HANDLE will ever change, so it shouldn't matter > much. That should be fine, but there's no need to have the handle in lisp.h, either. We could put it on w32.h, for example, which is already included by emacs.c and other files. If w32.h doesn't fit the bill, please try to find some other w32*.h which does. > >> +#ifndef WINDOWSNT > >> /* Make sure IS_DAEMON starts up as false. */ > >> daemon_pipe[1] = 0; > >> +#endif > > > > You should do a similar initialization on WINDOWSNT, to avoid using > > the value that was initialized when Emacs was dumped. > > I'm not sure I understand. Do you mean for w32_daemon_event? If so, it > should already always be zero when Emacs starts up. It might not be zero. Recall that Emacs is run during the build process, when it executes some code, and then "dumps itself", which involves writing portions of its memory to a disk file that thereafter becomes the emacs.exe you run normally. If, for some reason, w32_daemon_event is initialized in this first instance, it will not be zero in the dumped image. So we always re-initialize such variables explicitly, instead of relying on the linker initializations. You can see that in the sources, and that is also the reason why the Posix code initializes daemon_pipe[1]. > > Also, the call to daemon_check_preconditions should be outside of the > > #ifdef, as it is used on all platforms. > > daemon_check_preconditions already gets called for both WINDOWSNT and > !WINDOWSNT (see the #else). If we move it outside the #ifdef to > eliminate having to mention it for both cases, then "nfd" and "err" > will also have to be outside the ifdef in order to be compatible with > strict pre-C99 compilers that only accept variable declarations at the > beginning of blocks. C99 compliant compiler is a prerequisite in the development version of Emacs, so this problem doesn't exist. Thanks. | https://lists.gnu.org/archive/html/bug-gnu-emacs/2015-01/msg00830.html | CC-MAIN-2021-10 | refinedweb | 475 | 67.65 |
bsqlodbc man page
bsqlodbc — batch SQL script processor using ODBC
Synopsis
Description
bsqlodbc is a utility program distributed with FreeTDS.
bsqlodbc is a non-interactive equivalent of the ‘
isql’ utility programs distributed by Sybase and Microsoft. Like them, bsqlodbc uses the command ‘
go’ on a line by itself as a separator between batches. The last batch need not be followed by ‘
go’.
bsqlodbc makes use of the ODBC API provided by FreeTDS. This API is of course also available to application developers.
Options
- -U username
Database server login name.
- -P password
Database server password.
- -S server
Database server to which to connect.
- -D database
Database to use.
- -i input_file
Name of script file, containing SQL.
- -o output_file
Name of output file, holding result data.
- -e error_file
Name of file for errors.
- -t field_term
Specifies the field terminator. Default is two spaces ( ‘
\t’ ), carriage return ( ‘
\r’ ), newline ( ‘
\n’ ), and backslash ( ‘
\\’ ).
- -h
Print column headers with the data to the same file.
- -q
Do not print column metadata, return status, or rowcount. Overrides -h.
- -v
Verbose mode, for more information about the ODBC interaction. This also reports the result set metadata, including and return code. All verbose data are written to standard error (or -e), so as not to interfere with the data stream.
- -V odbc_version
Specify ODBC version (2 or 3).
Notes
bsqlodbc is a filter; it reads from standard input, writes to standard output, and writes errors to standard error. The -i, -o, and -e ⟨jklowden@freetds.org⟩. | https://www.mankier.com/1/bsqlodbc | CC-MAIN-2018-09 | refinedweb | 248 | 60.61 |
NodeMCU based WiFi Network Scanner with OLED Display
In this tutorial, we are going to learn about building a WiFi scanner. This will be a NodeMCU based WiFi Network Scanner with OLED Display. It will scan and display the existing WiFi networks with the help of ESP8266 nodemcu module. It will show the SSID’s on the Arduino Serial monitor. The values will also be displayed on the OLED display.
Components Required
- SSD1306 OLED 0.96inch Display x 1 [BEST BUY]
- ESP8266 NodeMCU x 1 [Best Buy]
- Jumper cables [Best Buy]
ESP8266 based. In fact, the difference between the models is the number of GPIOs as well as their memory. although usually the differences in appearance are usually recognizable. These modules support the internal WIFI network. The ESP8266 has 13 GPIOs with an analog input (A0).
SSID Concept
SSID stands for Service Set IDentifier and its the name of your network. If you open the list of Wi-Fi networks on your laptop or phone, you will see a list of SSIDs. Wireless routers broadcast SSID access points so that surrounding devices can find and display them. SSIDs can be up to 32 characters long, but there is no minimum size limit. Depending on your router brand manufacturers create default SSIDs which is a combination of company name with random numbers or letters.
OLED SSD1306 Display
OLED displays are commonly used in IoT and other embedded projects to display text and different values. These modules come in a variety of sizes depending on the size of the driver, one of the most popular being the SSD1306. There are different types of OLEd’s available in the market, mostly 0.96 and 1.3 inch sizes are widely used in projects. Also, these OLED use SPI/I2C as a communication protocol. In this tutorial we are using a SSD1306 OLED display. Its a 0.96-inch monochrome display with 128 x 64 pixels. The OLED display does not require backlighting, which results in a very good contrast in dark environments. Also, its pixels consume energy only when turned on, so the OLED screen consumes less power than other displays.
Working
In this project, we will print the existing SSID values in OLED display. After programming the board, all available channels will be scanned every few seconds and the values will be printed on the screen. Due to the size of OLED display only three of the SSIDs that have the highest signal strength are displayed with the total number of networks shows at the top of the page.
Required Libraries
The code needs 2 libraries to compile and run. Locate Sketch -> Include Library -> Manage Libraries
You can download Arduino IDE from Here.
Then search for the word “Adafruit GFX ” and install it as well.
Connection
Below table shows connectivity between OLED display and NodeMCU using SPI protocol.
Code Analysis
In this section, we will review the important parts of this code. In the first few lines, we introduce the required libraries.
#include <SPI.h> #include "ESP8266WiFi.h" #include <Adafruit_GFX.h> #include <Adafruit_SSD1306.h>
Below line of code is used to specify the dimensions of the display.
#define SCREEN_WIDTH 128 #define SCREEN_HEIGHT 64
In this line, we have set the ESP radio mode to STATION MODE for scanning.
WiFi.mode(WIFI_STA);
In this section, we display the SSID and RSSI values in the monitor serial. RSSI is actually the signal strength of the network.
Serial.print(i + 1); Serial.print(": "); Serial.print(WiFi.SSID(i)); Serial.print(" ("); Serial.print(WiFi.RSSI(i)); Serial.print(")"); Serial.println((WiFi.encryptionType(i) == ENC_TYPE_NONE) ? " " : "*");
In this section, we will print the total number of available WIFI networks at the top of the screen on the OLED display.
display.setTextColor(SSD1306_WHITE); display.setCursor(25,0); display.print("Networks: "); display.print(n);
Finally, we print the SSID values in each row.
display.setTextColor(SSD1306_WHITE); display.setCursor(0,8); display.println(WiFi.SSID(0)); display.println(WiFi.SSID(1)); display.println(WiFi.SSID(2)); display.println(WiFi.SSID(3)); display.println(WiFi.SSID(4)); display.setCursor(0,0); display.display(); display.clearDisplay();
Complete Code
You can download the code from below link. Open the code using Arduino IDE and upload the code in NodeMCU. If you are not sure how to upload the code, then check out the article below.
Easy NodeMCU Programming with Simple Steps
Suggested Projects:
- DHT11 Sensor with ESP-NOW and ESP32
- Smart Switch using Blynk | IoT Based WiFi Switch
- PIR based Motion Switch | PIR Sensor Light
- Touch Based Switch board using TTP223
- DHT11 based Temperature Humidity Monitoring IoT Project
- IoT based Door Security Alarm Project with Blynk
- Telegram NodeMCU based Home Automation
- NodeMCU Blynk Feedback Switch with Physical Switch Status
- NodeMCU ESP8266 IoT based LPG Gas Leakage Alarm
- Smart Doorbell using ESP32 Camera
- DS18B20 with NodeMCU Local WebServer
- How to read DHT11 sensor data using Blynk
- ESP32 based Switch Notification Project using Blynk
Building & Testing
Once you power the NodeMCU the Wi-Fi network scanner will start working as a station mode and scan for all available networks. Then it will show the data in the OLED display. We are using SPI protocol, so that the SCL and SDA pins will be connected to pins D5 and D6.
Conclusion
In this project, we managed to scan and display the surrounding Wi-Fi networks, and display these values in the OLED display. In this project, the ESP8266 board and the SSD1306 display module were used. If you are a beginner do try this and provide your feedback. Don’t forget to share this tutorial. | https://www.iotstarters.com/nodemcu-based-wifi-network-scanner-with-oled-display/ | CC-MAIN-2021-25 | refinedweb | 928 | 64.51 |
- rivaszarateAsked on August 21, 2010 at 02:40 PM
I have many javascript things in a page, but when i add the light box presentation to a form, one code i did with jQuerry stops functioning.
- aytekinAnswered on August 23, 2010 at 07:00 AMJotform Founder
Jotform uses prototype library and that can sometimes cause conflicts with jquery. I added this to our bug list. We will look into it. Thanks.
- DudesThatKnowStuffAnswered on September 24, 2010 at 05:51 PM
I don't have an answere, but this is definitely a struggle of mine. I hope once the jotForm prototype library is namespaced then there's a notification sent to my dashboard.
--Thanks
--Cody
- husmen73Answered on May 01, 2011 at 05:30 AM
Hello ~aytekin
We are on same issue on
Is there any update for Jotform or any different solution?
- liyamAnswered on May 01, 2011 at 06:02 AM
Hello rivaszarate, DudesThatKnowStuff, and husmen73:
Could you give us the URL of your webpages that have the jQuery conflict with Jotform? an additional bit of explanation that uses the jquery library will be of help as well so that we can know what to expect on the page without the conflict.
Will wait for your responses.
Thanks.
- TerriAnswered on August 01, 2011 at 04:32 PM
Hi - I just became a premium user. Thanks for a nice service.
I did just notice that my jQuery lightbox is breaking, but only when I choose the "Source" version.
"Embed" seems to work fine. I'd like to verify that this will be stable, and to add my vote to switch over to jQuery, which is in everything anyway.
-Terri
- NeilVicenteAnswered on August 01, 2011 at 06:03 PM
@Terri
Using Embed or iFrame should keep the form's own scripts from clashing with your website's jQuery plugins, since the form is isolated in an iframe, as a webpage within a webpage.
If you're wondering, Embed also generates iframe codes. The difference is that the Embed script automatically adjusts the iframe height depending on the form's length.
To sum it all up, yes, Embed and iFrame should make the form stable and compatible with your jQuery plugins.
Anyway, your suggestion about switching to jQuery is duly noted. Our developers will definitely take these ideas into consideration.
If you have other questions, or if you need assistance in creating your forms, please let us know. Thank you for choosing Jotform!
Neil
- | https://www.jotform.com/answers/5154-why-does-the-light-box-presentation-blocks-with-my-jquerry | CC-MAIN-2021-49 | refinedweb | 410 | 70.94 |
I want this code to scrape a specific region on my screen (based on coordinates) and then hash it based on the hash function. And print the hashoutcome to my screen but somehow nothing shows up on my screen, any thoughts on what I did wrong? I called the region that I wanted to scrape "region"
Attachment 734
I compiled the code with Netbeans and it created a *.jar file that I attached.
Code :
package scrape; /** * * @author Jim */ import java.awt.Robot; public class Scrape { public static void main(String[] args) { // TODO code application logic here } protected int x; protected int y; private int region; private final int xofs = 3; private final int yofs = 3; public Scrape(int x, int y) { this.x = x; this.y = y; } public int hash(Robot myRobot, int x, int y, int width, int height) { int sum = 0; sum = width*height; for (int i = 0; i < width; ++i) for (int j = 0; j < height; ++j) sum = (sum << 5 ^ sum >> 27) ^ (myRobot.getPixelColor(xofs + x + i + this.x, yofs + y + j + this.y).getRGB() & 0x00ffffff); return sum; } public int doScrape(Robot myRobot) { int sum = 0; region = hash(myRobot, 157, 133, 8, 14); System.out.println("Region 1 : "+region+" : "+getData(region)+"\r\n"); sum += (int) region; return sum; } private static String getData(int rgbSum) { //compiled code throw new RuntimeException("Compiled Code"); } public String region(Robot myRobot) { return getData(region); } } | http://www.javaprogrammingforums.com/%20paid-java-projects/10622-showing-output-screen-txt-file-screenscrape-piece-code-printingthethread.html | CC-MAIN-2015-35 | refinedweb | 232 | 70.84 |
:
- Inserted items fade in while later items slide down to make space.
- Removed items fade out while later items slide up to close the gap.
- Moved items slide from their previous location to their new location.
- Moved items which move out of or in to the visible area also fade out / fade in while sliding.:
- When an item is dropped, it now quickly slides into its proper position rather than suddenly appearing there.
- Control templates are corrected: a few things were missing compared to the standard ListBox control template, most notably the selected-item accent color.
- The scrollbar is now made visible while drag-scrolling.
- Fixed a bug that prevented moving a taller item to the top or bottom position when a shorter item was there.
- Fixed a bug that caused the list items to jump slightly after moving an item down by more than a page then dropping it.
And now, a video preview of the improved ReorderListBox control!
[Video]
Fantastic!
Just one quick… I have a strange behavior if I disable the scrollviewer inside.. and have one surrounding it…
The drag-and-drop reorder behavior integrates with the ListBox's internal ScrollViewer for several reasons, including auto-scrolling when you drag near the top or bottom edge of the ListBox. I'm not surprised that it doesn't work with an external ScrollViewer. It should be possible to fix the code to handle that, but I have not tried.
Just trying this out. After 5min copying and pasting, it worked instantly. Just adding some tweaks to the template now.
Really great work! Thanks!
Thanks for that great piece of example code. I was able to compile and run your code as-is in the WP7 Emulator, but when I try to copy your files into a new project I get the following error:
The tag 'List' does not exist in XML namespace 'clr-namespace:System.Collections.Generic;assembly=mscorlib'. c:usersdavedocumentsvisual studio 2010Projectslistbox_reorder_test_2listbox_reorder_test_2DesignData.xaml 1 2 listbox_reorder_test_2
I added these files to my new project:
Themes/Generic.xaml
DesignData.xaml
ReorderListBox.cs
ReorderListBoxItem.cs
and I added the following line to my MainPage.xaml file:
xmlns:rlb="clr-namespace:ReorderListBoxDemo"
Any idea on why I'm getting this error message? I did a Google search, but didn't find an answer.
Thanks.
@cohoman, You don't need to copy that DesginData.xml file into your project. It is only needed for the demo application. (To actually fix the error, I think you need to set the Build Action to "DesignData" in the project item properties.)
Any suggestions on how to make this work with the MVVM pattern? The reorder listbox moves items within its source collection (a collection of viewmodels), but the source often is readonly. Wouldn't it be better to raise an event to request to move an item so it can be handled by the model, which is then reflected in the viewmodel collection? In fact, I've tried the latter but it doesn't seem to work because some methods aren't called.
Bert.
Hi Jason,
I am new to WP7 Development and need to do the same functionality like you have shown above in my application. I have downloaded your code and it worked great on emulator 🙂
When I look into the code I have observed that you have used ObservableCollection of type string, where as in my case I am retrieving the json data from server and parse it to Object.
Please have a look on below my code…
IList<Quote> ResultQuoteInfo = new List<Quote>();
IList<JToken> results = null;
//retrived the JSON data
JObject yahooQuote = JObject.Parse(e.Result);
//Converted into List
results = yahooQuote["query"]["results"]["quote"].Children().ToList();
foreach (JToken Quoteinfo in results)
{
//Deserialize the object and mapped to Quote class
Quote QuoteInfoResult = JsonConvert.DeserializeObject<Quote>(Quoteinfo.ToString());
//Added the deserialize object into list
ResultQuoteInfo.Add(QuoteInfoResult);
}
this.reorderListBox.ItemsSource = null;
this.reorderListBox.Items.Clear();
//Bind the list to list box control
this.reorderListBox.ItemsSource = ResultQuoteInfo;
———————————————————————————————————————————
When I execute the code it properly populate the data in listbox and render on UI but when I tried to re-arrange the items it gives below error at sourceList.RemoveAt(fromIndex); in MoveItem function (ReorderListBox.cs)
InvalidOperationException was unhandled
"Operation not supported on read-only collection"
Please suggest where I am doing mistake and where to do the code changes to make my code working.
Thanks in advance 🙂
@Neel, you need to use an ObservableCollection<Quote> instead of a List<Quote>.
Love your control. I am trying to use it with a Context Menu attached to it. I am running into an issue with getting the underlying item that brought up the Context Menu.
When using a normal Listbox control, the below works great:
ListBoxItem selectedListBoxItem = this.mylistbox.ItemContainerGenerator.ContainerFromItem((sender as MenuItem).DataContext) as ListBoxItem;
When using the Reorderlistbox control, the above code will not return the correct entry if the listbox item was 'reordered'.
Any ideas?
Thanks Jason 🙂
After posting this post y'day , I did the same changes which you have suggested and wolha!!! it worked 🙂
Great control you have here. Before digging in too deep, do you think this control is portable to WPF?
I get a lot of discrepancies in terms of RootVisual, WritableBitmap and FindElementsInHostCoordinates that need to be addressed.
@Perry, Sorry I don't know what the problem could be there, I would expect that to work.
@AvSomeren, It has been a while since I've done much WPF programming, so I don't think I can give a good answer. I would expect you could find WPF equivalents for the things you mention, although you may encounter other problems if the WPF ListBox control happens to work differently than it does in Silverlight.
Hi Jason, great work!
I have one question/suggestion: would you be able to change the code so that it will still performs animations if we change the ListBox ItemsPanelTemplate? For instance:
<rlb:ReorderListBox.ItemsPanel>
<ItemsPanelTemplate>
<toolkit:WrapPanel
</ItemsPanelTemplate>
<rlb:ReorderListBox.ItemsPanel>
If i do this the animations won't be what one would like to see…. Would you consider adapting the code to account for other ItemsPanelTemplates? Or maybe this would be too complex?
Regards,
Eduardo Serrano
@Eduardo, That should be possible to do, by making the animation-generation code more generic based on the before and after positions of each item and the visible area of the items panel. But don't wait for me, I suggest you try it out yourself. 🙂
Thank you for your quick reply Jason. I'll definitely give it a try. I'm trying to finish something first and then i'll work on it. If i get it to work (or hit a wall) i'll tell you something =)
Thank you for posting this example, but i have a question. I would like to use some of your ideas in my app however after reading the Microsoft Public License (MS-PL) included in your code I am not sure I can.
I read the agreement to mean that if I use any portion of your code my app must be open source as well since your code will be distributed along with mine to a users phone. Is this correct?
Link: go.microsoft.com/fwlink
@jperry, the MS-PL does not require you to open-source any part of your app.
Lots of closed-source apps are built with MS-PL code, for example the popular Silverlight Toolkit: silverlight.codeplex.com/license
thank you for the reply jason, i was just concerned and i did not want to break any rules
Hello,
The control works fine, but when I bind the listbox to an observablecollection instead of a list, the "moveItem" method throws an exception in the third line:
object itemsSource = this.ItemsSource;
System.Collections.IList sourceList = itemsSource as System.Collections.IList;
int fromIndex = sourceList.IndexOf(item); <—-NullReferenceException
help please! :-S
Note: The itemsSource contains an OrderedEnumerable<MyObject>
thanks!
@jesus, Are you sure you're binding to an ObservableCollection? The ObservableCollection<T> class extends Collection<T> which does implement System.Collections.IList, so that code should work. The demo app uses an ObservableCollection<string> as its data source.
If you're actually binding to an IOrderedEnumerable<T>, that is not supported as a data source.
First of all thanks for this great peace of code
I need to upgrade the xml (since the order of the items change) that I used to populate the listbox Where is the best place to insert my code in the ReorderListBox Class.
I have been trying in the move Method but the drop of the item been moved kind of frees for a fraction of time while my code executed
@Hidroilio, You shouldn't need to modify the ReorderListBox class to do that. Just use an ObservableCollection<T> as the data source, and handle the CollectionChanged event for the collection. In that event-handler you'll have all the information you need to update the XML
To avoid freezing the UI thread, you should do any time-consuming work on a background thread. See this link for some discussion of how to do that: wildermuth.com/…/Architecting_WP7_-_Part_9_of_10_Threading
Thanks one more time
These tip help me a lot.
Hello Jason,
AutoScrolling doesn´t work for me when i drag near the top or bottom of the list, including the Sample Project.
Any Ideas?
Best Regards,
Stephan
@Stephan, you must be using the Mango SDK. In Mango there is a breaking change to the ScrollViewer behavior. Try setting ScrollViewer.ManipulationMode="Control" to restore the old behavior.
Hi Jason,
I want to change the background under the up/down arrows. In the Generic.xaml I modified the
<Style TargetType="rlb:ReorderListBoxItem">
<Setter Property="Background" Value="Transparent" />
to:
<Style TargetType="rlb:ReorderListBoxItem">
<Setter Property="Background" Value="{StaticResource ListItemBackground}" />
Each list item should have this background, that I defined in the begining of the Generic.xaml and its xaml:
<LinearGradientBrush x:
<GradientStop Color="#FFDDDDDD" Offset="1"/>
<GradientStop Color="White"/>
</LinearGradientBrush>
After the change it looks fine in Blend and at the begining of the application start. But if I want to reorder an item, the application will stop.
Do you have any ida what am I doing wrong and how could I fix it?
Regards,
Péter
Hi,
i found a bug in the reordered listbox class.
if anybody uses two applicationbar buttons to sort a list ascending and descending and press one button twice and then the other one, you will get a ordering issue with the rearrangeCanvas (this holds the items in the background). For fixing this, you must add in the "else" tree on line 826 following code:
this.rearrangeCanvas.Children.Clear();
here the complete fixed function AnimateRearrangeInternal:
private void AnimateRearrangeInternal(Action rearrangeAction, Duration animationDuration)
{
// Find the indices of items in the view. Animations are optimzed to only include what is visible.
int viewFirstIndex, viewLastIndex;
this.GetViewIndexRange(true, out viewFirstIndex, out viewLastIndex);
// Collect information about items and their positions before any changes are made.
RearrangeItemInfo[] rearrangeMap = this.BuildRearrangeMap(viewFirstIndex, viewLastIndex);
// Call the rearrange action callback which actually makes the changes to the source list.
// Assuming the source list is properly bound, the base class will pick up the changes.
rearrangeAction();
this.rearrangeCanvas.Visibility = Visibility.Visible;
// Update the layout (positions of all items) based on the changes that were just made.
this.UpdateLayout();
// Find the NEW last-index in view, which may have changed if the items are not constant heights
// or if the view includes the end of the list.
viewLastIndex = this.FindViewLastIndex(viewFirstIndex);
// Collect information about the NEW items and their NEW positions, linking up to information
// about items which existed before.
RearrangeItemInfo[] rearrangeMap2 = this.BuildRearrangeMap2(rearrangeMap,
viewFirstIndex, viewLastIndex);
// Find all the movements that need to be animated.
IEnumerable<RearrangeItemInfo> movesWithinView = rearrangeMap
.Where(rii => !Double.IsNaN(rii.FromY) && !Double.IsNaN(rii.ToY));
IEnumerable<RearrangeItemInfo> movesOutOfView = rearrangeMap
.Where(rii => !Double.IsNaN(rii.FromY) && Double.IsNaN(rii.ToY));
IEnumerable<RearrangeItemInfo> movesInToView = rearrangeMap2
.Where(rii => Double.IsNaN(rii.FromY) && !Double.IsNaN(rii.ToY));
IEnumerable<RearrangeItemInfo> visibleMoves =
movesWithinView.Concat(movesOutOfView).Concat(movesInToView);
// Set a clip rect so the animations don't go outside the listbox.
this.rearrangeCanvas.Clip = new RectangleGeometry() { Rect = new Rect(new Point(0, 0), this.rearrangeCanvas.RenderSize) };
// Create the animation storyboard.
Storyboard rearrangeStoryboard = this.CreateRearrangeStoryboard(visibleMoves, animationDuration);
if (rearrangeStoryboard.Children.Count > 0)
{
// The storyboard uses an overlay canvas with item snapshots.
// While that is playing, hide the real items.
this.scrollViewer.Visibility = Visibility.Collapsed;
rearrangeStoryboard.Completed += delegate
{
rearrangeStoryboard.Stop();
this.rearrangeCanvas.Children.Clear();
this.rearrangeCanvas.Visibility = Visibility.Collapsed;
this.scrollViewer.Visibility = Visibility.Visible;
this.AnimateNextRearrange();
};
this.Dispatcher.BeginInvoke(rearrangeStoryboard.Begin);
}
else
{
this.rearrangeCanvas.Children.Clear(); //remove item layer if no items are rearranged
this.rearrangeCanvas.Visibility = Visibility.Collapsed;
this.AnimateNextRearrange();
}
}
i forgot….
Best regards
Enno 🙂
Hi Jason, great work! but can you help a little, how I can put wrap panel to items panel? What I need to change? Thanks
@Serg, I think it would be difficult to change this to use a WrapPanel. Currently the drag-reordering and rearrange-animation logic is all done with the assumption that items are moving only up and down. At a minimum you'd have to re-write a lot of that code, which is moderately complex now and would be much more complex when items can move both horizontally and vertically. I'm not sure what else would have to change.
"you must be using the Mango SDK. In Mango there is a breaking change to the ScrollViewer behavior. Try setting ScrollViewer.ManipulationMode="Control" to restore the old behavior."
doesn t work.
any other idea?
thanks,
olaf
hi,
Thanks for making a nice control.
I have tried your control and notice one strange thing..
I have added 100 items in list box.
Scroll down down to last item.. scroll up down up down..
and suddenly Image area for Rearrange is gone…. (Image on which user click and move item up and down)
Any idea regarding this issue?
Thanks & Regards,
Joyous Suhas
Hi,
Thanks for this great control. I am using VB.NET for my app, and it works perfectly when referencing the C# phone application project that you created. However, when installing on a phone, both applications are installed and that is not the desired behaviour.
So, I created a C# class library (Silverlight for Windows Phone) and put the Themes/Generic.xaml + both the ReorderListBox and the ReorderListBoxItem classes in there. At runtime, I keep getting the "ReorderListBoxItem must have a DragHandle ContentPresenter part." error. Any idea how to solve this?
Thanks,
Joris
This is awesome…. thanks for sharing. You should probably put this up on Codeplex or something. Great work!
Amazing work! Thanks for sharing.
It's awesome! Thanks for such a great work!
One question, how should I move the draggable area to the front(left).
I tried to switch the column position of ContentContainer and HandleContainer, I also change the column definition, but it seems does not work.
Could you teach me how to change the draggable area position?
Thank you very much! Once again, excellent work!
I found out it's because i forgot to change the position of the DragInterceptor.
Please ignore my question 😛
Thank you
Great work that I am thankfully re-using. I found an issue when using it in my application where the selected item would often not be shown in the accent color. A normal listbox does not have this issue. After a lot of experimentation I found out that this issue can be resolved by changing the type of the DragHandle in Generic.xaml to ContentControl i.s.o. ContentContainer. The same type change must be done in file ReorderListBoxItem.cs in lines 21, 224 and 237.
Thanks again for a great control.
hi, i'm facing some problems with the dropping part. There's no error in the codes but when i drag the item, it will jump back to its original position. How can this be resolved?
Thanks for sharing this code.
Thanks so much.
It works like a charm with a list has >300 items 🙂
No we need this control as an "ReorderListView" for Windows 8! 🙂
Hi Jason,
I use your control in my todo-list app 2Day (). However, the control does not work when I upgrade the project to WP8. In this case, the dragged item does not move but all the other items are dragged. Do you have any idea ?
I've just tried with WP8 and it works fine.
Hi Jason
Thanks so much !
I aslo use your control in WP8
The problem is the focusd item dosn't move but the whole list is moving
i don't know why ,it's worked in WP7
OK, here is fix for WP8:
In function dragInterceptor_ManipulationStarted() on the first row add:
scrollViewer.VerticalScrollBarVisibility = ScrollBarVisibility.Disabled;
Then in function dragInterceptor_ManipulationCompleted() on the last row add:
scrollViewer.VerticalScrollBarVisibility = ScrollBarVisibility.Visible;
working good but how can i get the index of the dragged item and the index of the dropped position?
thanks
Index of position to drop: dropTargetIndex
Lookig at the code I cannot figure out how to programatically move an item from position X to position Y. Is it possible?.
Hi Jason Ginchereau,
In your ReorderListBox, how can i disable top item? Don't alow move and show drag icon?
Thank you in advance.
Hi Jason Ginchereau,
How I can handle event reorder action? thanks
Hi Jason, excellent work! But in my app i need to use StackPanel instead of VirtualizingStackPanel and it fails in AnimateDrop() because itemContainer.RenderSize is zero. Can you help me with this fix? Thanks.
Hi Jason,
This is fantastic feature. I'm very excited to add this feature to my application. But I got following error when I set ItemSource from the IQueryable list, as below:
var Query = from it in AppDB.TableName
select it;
ReorderListBox.ItemsSource = Query.ToList();
here's the details of my error.
"InvalidOperationException", Additional information: Operation not supported on read-only collection.
Mohamed, that error message is pretty self-explanatory: you cannot reorder a read-only collection. The Enumerable.ToList() method returns a read-only list. Try wrapping your query results in a new List<T> or ObservableCollection<T> before using it as the ItemsSource.
Thanks for the code 🙂
It really helped me a lot.
But I have one doubt,that is their a way to hide the re-order image and make the entire row to get selected and dragged?
is there a version for wp8.1(window runtime)? | https://blogs.msdn.microsoft.com/jasongin/2011/01/03/wp7-reorderlistbox-improvements-rearrange-animations-and-more/?replytocom=671 | CC-MAIN-2018-13 | refinedweb | 3,081 | 58.28 |
getaddrinfo fails with numerical IPv6 values
Bug Description
Binary package hint: libc6
The function "getaddrinfo" returns an error when it receives a numerical IPv6 value as hostname.
The following sample demonstrates the bug:
$ cat > bug.c <<EOF
#include <stdio.h>
#include <sys/socket.h>
#include <netdb.h>
const char * candidate=
int main(int argc, char * argv[])
{
int ret=0;
struct addrinfo *ai = NULL;
ret = getaddrinfo(
if (ai) freeaddrinfo(ai);
if (ret != 0) printf("Error on '%s': %s.\n", candidate, gai_strerror(ret));
return 0;
}
EOF
$ gcc -o bug bug.c
$ ./bug
Error on '2001:db8::1': Address family for hostname not supported.
I have tested the same code on a Debian Etch and the bug does not appear.
I am running a Ubuntu Hardy with latest updates. My libc6 package is 2.7-10ubuntu3.
$ lsb_release -rd
Description: Ubuntu 8.04
Release: 8.04
$ apt-cache policy libc6
libc6:
Installed: 2.7-10ubuntu3
Candidate: 2.7-10ubuntu3
Version table:
*** 2.7-10ubuntu3 0
500 http://
100 /var/lib/
this patch is applied in Ubuntu; Tollef, could you have a look at this one?
Still present in Jaunty!?
Looks like this is caused by the same (stupid) patch that is causing #374674.
AFAIK glibc2.9 no longer requires this patch as it implements a unified A/AAAA lookup mechanism. Upstream is no longer applying the patch to the best of my knowledge.
-Patrick
This has been fixed in Thierry Carrez's PPA version of glibc6
https:/
-Patrick
eglibc (2.10.1-0ubuntu2) karmic; urgency=low
* Merge with Debian (r3733, eglibc-2.10 branch).
* Update to r8758 from the eglibc-2.10 branch.
* Remove testcases from expected results, which don't fail anymore (ia64).
* Mark test-memchr.out as failing on sparc.
* patches/
* Work around Ubuntu buildd limitation: allow just 2.6.15 for libc6
installation, although 2.6.18 is required for some patches (requested by
soyuz maintainers).
-- Matthias Klose < <email address hidden>> Tue, 04 Aug 2009 00:36:31 +0200
This problem is described in https:/
Does this bug apply to any of the currently supported releases (i.e. Maya/12.04, Petra/13.10 or Qiana/14.04)? If not, this bug should be closed.
I just noticed this discussion: http://
bugs.debian. org/cgi- bin/bugreport. cgi?bug= 435646 which is most likely related to what I am seeing (I have no global IPv6 on my test machine). So maybe this is not a bug but a feature... hum... | https://bugs.launchpad.net/ubuntu/+source/glibc/+bug/239701 | CC-MAIN-2015-35 | refinedweb | 409 | 69.99 |
Provided by: plainbox_0.25-1_all
NAME
plainbox-run - run a test job
SYNOPSIS
plainbox run [-h] [--non-interactive] [-n] [--dont-suppress-output] [-f FORMAT] [-p OPTIONS] [-o FILE] [-t TRANSPORT] [--transport-where WHERE] [--transport-options OPTIONS] [-T TEST-PLAN-ID] [-i PATTERN] [-x PATTERN] [-w WHITELIST]
DESCRIPTION
Run a test job This command runs zero or more Plainbox jobs as a part of a single session and saves the test results. Plainbox will follow the following high-level algorithm during the execution of this command. 1. Parse command line arguments and look if there's a session that can be resumed (see RESUMING below). If so, offer the user a choice to resume that session. If the resume operation fails move to the next qualifying session. Finally offer to create a new session. 2. If the session is being resumed, replay the effects of the session execution from the on-disk state. This recreates generated jobs and re-introduces the same resources into the session state. In other words, no jobs that have run in the past are re-ran. If the resumed session was about to execute a job then offer to skip the job. This allows test operators to skip jobs that have caused the system to crash in the past (e.g. system suspend tests) If the session is not being resumed (a new session was created), set the incomplete flag. 3. Use the job selection (see SELECTING JOBS below) to derive the run list. This step involves resolving job dependencies and reordering jobs if required. 4. Follow the run list, executing each job in sequence if possible. Jobs can be inhibited from execution by failed dependencies or failed (evaluating to non-True result) resource expressions. If at any time a new job is being re-introduced into the system (see GENERATED JOBS below) then the loop is aborted and control jumps back to step 3 to re-select jobs. Existing results are not discarded so jobs that already have some results are not executed again. Before and after executing any job the session state is saved to disk to allow resuming from a job that somehow crashes the system or crashes Plainbox itself. 5. Remove the incomplete flag. 6. Export the state of the session to the desired format (see EXPORTING RESULTS) and use the desired transport to send the results (see TRANSPORTING RESULTS). 7. Set the submitted flag. SELECTING JOBS Plainbox offers two mechanisms for selecting jobs. Both can be used at the same time, both can be used multiple times. Selecting jobs with patterns The first mechanism is exposed through the --include-pattern PATTERN command-line option. It instructs Plainbox to select any job whose fully-qualified identifier matches the regular expression PATTERN. Jobs selected this way will be, if possible, ordered according to the order of command line arguments. For example, having the following command line would run the job foo before running the job bar: plainbox run -i '.*::foo' -i '.*::bar' Selecting jobs with whitelists The second mechanism is the --whitelist WHITELIST command-line option. WhiteLists (or test plans, which is somewhat easier to relate to). Whitelists are simple text files composed of a list of regular expressions, identical to those that may be passed with the -i option. Unlike the -i option though, there are two kinds of whitelists. Standalone whitelists are not associated with any Plainbox Provider. Such whitelists can be distributed entirely separately from any other component and thus have no association with any namespace. Therefore, be fully qualified, each pattern must include both the namespace and the partial identifier components. For example, this is a valid, fully quallified whitelist: 2013.com.canonical.plainbox::stub/.* It will unambiguously select some of the jobs from the special, internal StubBox provider that is built into Plainbox. It can be saved under any filename and stored in any directory and it will always select the same set of jobs. In contrast, whitelists that are associated with a particular provider, by being stored in the per-provider whitelists/ directory, carry an implicit namespace. Such whitelists are typically written without mentioning the namespace component. For example, the same "stub/.*" pattern can be abbreviated to: stub/.* Typically this syntax is used in all whitelists specific to a particular provider unless the provider maintainer explicitly wants to include a job from another namespace (for example, one of the well-known Checkbox job definitions). GENERATED JOBS Plainbox offers a way to generate jobs at runtime. There are two motivations for this feature. Instantiating Tests for Multiple Devices The classic example is to probe the hardware (for example, to enumerate all storage devices) and then duplicate each of the store specific tests so that all devices are tested separately. At this time jobs can be generated only from jobs using the plugin type local. Jobs of this kind are expected to print fully conforming job definitions on stdout. Generated jobs cause a few complexities and one limitation that is currently enforced is that generated jobs cannot generate additional jobs if any of the affected jobs need to run as another user. Another limitation is that jobs cannot override existing definitions. Creating Parent-Child Association A relatively niche and legacy feature of generated jobs is to print a verbatim copy of existing job definitions from a local job definition named afer a generic testing theme or category. For example the Checkbox job definition __wireless__ prints, with the help of cat (1), all of the job definitions defined in the file wireless.txt. This behavior is special-cased not to cause redefinition errors. Instead, existing definitions gain the via attribute that links them to the generator job. This feature is used by derivative application such as Checkbox. Plainbox is not using it at this time. RESUMING Plainbox offers a session resume functionality whereas a session that was interrupted (either purposefully or due to a malfunction) can be resumed and effectively continued where it was left off. When resuming a session you may be given an option to either re-run, pass, fail or skip the test job that was being executed before the session was interrupted. This is intended to handle both normal situations, such as a "system reboot test" where it is perfectly fine to "pass" the test without re-running the command. In addition it can be used to handle anomalous cases where the machine misbehaves and re-running the same test would cause the problem to occur again indefinitely. Limitations This functionality does not allow to interrupt and resume a test job that is already being executed. Such job will be restarted from scratch. Plainbox tries to ensure that a single session is consistent and the assumptions that held at the start of the session are maintained at the end. To that end, Plainbox will try to ensure that job definitions have not changed between two separate invocations that worked with a single session. If such a situation is detected the session will not be resumed. EXPORTING RESULTS Plainbox offers a way to export the internal state of the session into a more useful format for further processing. Selecting Exporters The exporter can be selected using the --output-format FORMAT command-line option. A list of available exporters (which may include 3rd party exporters) can be obtained by passing the --output-format ? option. Some formats are more useful than others in that they are capable of transferring more of the internal state. Depending on your application you may wish to choose the most generic format (json) and process it further with additional tools, choose the most basic format (text) just to get a simple summary of the results or lastly choose one of the two specialized formats (xml and html) that are specific to the Checkbox workflow. Out of the box the following exporters are supported: html This exporter creates a static HTML page with human-readable test report. It is useful for communicating with other humans and since it is entirely standalone and off-line it can be sent by email or archived. json This exporter creates a JSON document with the internal representation of the session state. It is the most versatile exporter and it is useful and easy for further processing. It is not particularly human-readable but can be quite useful for high-level debugging without having to use pdb and know the internals of Plainbox. rfc822 This exporter creates quasi-RFC822 documents. It is rather limited and not used much. Still, it can be useful in some circumstances. text This is the default exporter. It simply prints a human-readable representation of test results without much detail. It discards nearly all of the internal state though. xlsx This exporter creates a standalone .xlsx (XML format for Microsoft Excel) file that contains a human-readable test report. It is quit similar to the HTML report but it is easier to edit. It is useful for communicating with other humans and since it is entirely standalone and off-line it can be sent by email or archived. It depends on python3-xlsxwriter package hexr This exporter creates a rather confusingly named XML document only applicable for internal Canonical Hardware Certification Team workflow. It is not a generic XML representation of test results and instead it carries quite a few legacy constructs that are only retained for compatibility with other internal tools. If you want generic processing look for JSON instead. Selecting Exporter Options Certain exporters offer a set of options that can further customize the exported data. A full list of options available for each exporter can be obtained by passing the --output-options ? command-line option. Options may be specified as a comma-separated list. Some options act as simple flags, other options can take an argument with the option=value syntax. Known exporter options are documented below: json with-io-log: Exported data will include the input/output log associated with each job result. The data is included in its native three-tuple form unless one of the squash-io-log or flatten-io-log options are used as well. IO logs are representations of the data produced by the process created from the shell command associated with some jobs. squash-io-log: When used together with with-io-log option it causes Plainbox to discard the stream name and time-stamp and just include a list of base64-encoded binary strings. This option is more useful for reconstructing simple "log files" flatten-io-log: When used together with with-io-log option it causes Plainbox to concatenate all of the separate base64-encoded records into one large base64-encoded binary string representing the whole communication that took place. with-run-list: Exported data will include the run list (sequence of jobs computed from the desired job list). with-job-list: Exported data will include the full list of jobs known to the system with-resource-map: Exported data will include the full resource map. Resources are records of key-value sets that are associated with each job result for jobs that have plugin type resource. They are expected to be printed to stdout by such resource jobs and are parsed and stored by Plainbox. with-job-defs: Exported data will include some of the properties of each job definition. Currently this set includes the following fields: plugin, requires, depends, command and description. with-attachments: Exported data will include attachments. Attachments are created from stdout stream of each job having plugin type attachment. The actual attachments are base64-encoded. with-comments: Exported data will include comments added by the test operator to each job result that has them. with-job-via: Exported data will include the via attribute alongside each job result. The via attribute contains the checksum of the job definition that generated a particular job definition. This is useful for tracking jobs generated by jobs with the plugin type local. with-job-hash: Exported data will include the hash attribute alongside each job result. The hash attribute is the checksum of the job definition's data. It can be useful alongside with with-job-via. machine-json: The generated JSON document will be minimal (devoid of any optional whitespace). This option is best to be used if the result is not intended to be read by humans as it saves some space. rfc822 All of the options have the same meaning as for the json exporter: with-io-log, squash-io-log, flatten-io-log, with-run-list, with-job-list, with-resource-map, with-job-defs, with-attachments, with-comments, with-job-via, with-job-hash. The only exception is the machine-json option which doesn't exist for this exporter. text Same as with rfc822. xlsx with-sys-info: Exported spreadsheet will include a worksheet detailing the hardware devices based on lspci, lsusb, udev, etc. with-summary: Exported spreadsheet will include test figures. This includes the percentage of tests that have passed, have failed, have been skipped and the total count. with-job-description: Exported spreadsheet will include job descriptions on a separate sheet with-text-attachments: Exported spreadsheet will include text attachments on a separate sheet xml client-name: This option allows clients to override the name of the application generating the XML document. By default that name is plainbox. To use this option pass --output-options client-name=other-name command-line option. TRANSPORTING RESULTS Exported results can be either saved to a file (this is the most basic, default transport) or can be handed to one of the transport systems for further processing. The idea is that specialized users can provide their own transport systems (often coupled with a specific exporter) to move the test results from the system-under-test to a central testing result repository. Transport can be selected with the --transport option. Again, as with exporters, a list of known transports can be obtained by passing the --transport ? option. Transports need a destination URL which can be specified with the --transport-where= option. The syntax of the URL varies by transport type. Plainbox comes equipped with the following transports: launchpad This transport can send the results exported using xml exporter to the Launchpad Hardware Database. This is a little-known feature offered by the website. certification This transport can send the results exported using the xml exporter to the Canonical Certification Website (). This transport is of little use to anyone but the Canonical Hardware Certification Team that also maintains Plainbox and Checkbox but it is mentioned here for completeness.
OPTIONS
Optional arguments: --non-interactive skip tests that require interactivity -n, --dry-run don't really run most jobs --dont-suppress-output don't suppress the output of certain job plugin types -f, --output-format save test results in the specified FORMAT (pass ? for a list of choices) -p, --output-options comma-separated list of options for the export mechanism (pass ? for a list of choices) -o, --output-file save test results to the specified FILE (or to stdout if FILE is -) -t, --transport use TRANSPORT to send results somewhere (pass ? for a list of choices) Possible choices: ? --transport-where where to send data using the selected transport --transport-options comma-separated list of key-value options (k=v) to be passed to the transport -T, --test-plan load the specified test plan -i, --include-pattern include jobs matching the given regular expression -x, --exclude-pattern exclude jobs matching the given regular expression -w, --whitelist load whitelist containing run patterns
SEE ALSO
plainbox-dev-analyze
AUTHOR
Zygmunt Krynicki & Checkbox Contributors
2012-2014 Canonical Ltd | http://manpages.ubuntu.com/manpages/disco/man1/plainbox-run.1.html | CC-MAIN-2019-30 | refinedweb | 2,601 | 53.71 |
Thank you for your excellent link.
But what makes me feel strange is that how can I add the code
String paramEncoding = application.getInitParameter("PARAMETER_ENCODING");
request.setCharacterEncoding(paramEncoding);
into my JSP file.
And now to my JSP file, I turn to the fmt namespace.
I added <fmt:requestEncoding to my JSP webpage.
And it finally works for me. :)
Thanks again.
2007/7/2, Johnny Kewl <john@kewlstuff.co.za>:
> Best article I have found so far on this subject, is this one....
> It seems that POST is more difficult than GET....
>
>
> good luck...
>
> ----- Original Message -----
> From: "Niu Kun" <haoniukun@gmail.com>
> To: <users@tomcat.apache.org>
> Sent: Sunday, July 01, 2007 4:16 PM
> Subject: Problem about posting Chinese characters to tomcat server.
>
>
> > Dear all,
> >
> > I've just got a simple form to post Chinese characters to my jsp file.
> > But the data submitted can't be seen on my web browser.
> > After analyzing the data posted and shown on my browser, I find the
> > following problem.
> >
> > The letters I submit are "e7 89 9b e5 9d a4" which are in UTF-8 form.
> > And the letters catched on the return webpage are
> > "c3 a7 c2 89 c2 9b c3 a5 c2 9d c2 a4".
> > We can see that each single character is doubled.
> > Each one is first prefixed with "c3" and then two "c2".
> > It's true that I add URIEncoding="UTF-8" to my connector's parameter list.
> > And all my files are encoded as "UTF-8" format.
> >
> > My tomcat version is 5.5.
> > My jdk is 1.5.
> > And my os is Debian lenny.
> >
> > Any help would be appreciated and thanks in advance.
> >
> > Regards
> >
> >
> > ---------------------------------------------------------------------
> >
>
>
--
失业
牛坤
MSN:haoniukun@hotmail.com | http://mail-archives.apache.org/mod_mbox/tomcat-users/200707.mbox/%3Cec9e7ff10707020818r43277bf1pf76829938dffbdd5@mail.gmail.com%3E | CC-MAIN-2016-30 | refinedweb | 283 | 69.99 |
- Get Input from User
This article is created to cover multiple programs in Java that are based on receiving inputs from user. Here are the list of programs included in this article:
- Get integer input in Java
- Continue receiving inputs until user enters 0
- How to handle with invalid inputs in Java ?
- Get character input in Java
- Get string input in Java
Get Integer Input in Java
The question is, write a Java program to ask the user to enter an integer value and print the entered value back on the output screen. The program given below is its answer. This program basically shows, how to read an integer value in Java using Scanner and nextInt()
import java.util.Scanner; public class CodesCracker { public static void main(String[] args) { int num; Scanner scan = new Scanner(System.in); System.out.print("Enter an Integer Value: "); num = scan.nextInt(); System.out.println("\nYou've entered: " +num); } }
The snapshot given below shows the sample run of above program, with user input 20:
You can use following methods, to scan values of other types:
- nextDouble() - to read value of double data type
- nextFloat() - to read value of float type
- nextLong() - to read value of long type
- nextShort() - to read value of short type
- nextByte() - to read value of byte type
Continue Receiving Integer Input until User enters 0 in Java
This program is created in a way, to continue receiving the inputs from user, until enters 0. You can modify this program, to use in a way, like to continue receiving inputs from user, until user enters a character 'x' or whatever you want.
import java.util.Scanner; public class CodesCracker { public static void main(String[] args) { Scanner scan = new Scanner(System.in); System.out.print("Enter an Integer Value: "); int num = scan.nextInt(); while(num!=0) num = scan.nextInt(); System.out.println("\nProgram Closed!"); } }
Here is its sample run with some user inputs:
How to Handle with Invalid Inputs in Java ?
Now the question is, what if user enters an invalid input ?
Like when we need to get integer input, but user enters some other type of value such as a floating-point input, character input, or string input. Let's check it out with first program of this article, using another sample run, but with non-integer value say c, a character input, this time:
Now we need to put the scanner statement inside the try block, so that, we can catch that exception using the catch block. Here is the complete version of the code, created after modifying the first program of this article. This program handles with invalid input:
import java.util.Scanner; import java.util.InputMismatchException; public class CodesCracker { public static void main(String[] args) { int num; Scanner scan = new Scanner(System.in); System.out.print("Enter an Integer Value: "); try { num = scan.nextInt(); System.out.println("\nYou've entered: " +num); } catch(InputMismatchException ime) { System.out.println("\nInvalid Input!"); } } }
Here is its sample run with same user input as of previous sample run, that is c:
In above program, the following two statements:
import java.util.Scanner; import java.util.InputMismatchException;
can also be replaced with a single statement given below:
import java.util.*;
Take Character Input in Java
This program is created to show you, how the character input can be received from user at run-time of the program.
import java.util.Scanner; public class CodesCracker { public static void main(String[] args) { Scanner scan = new Scanner(System.in); System.out.print("Enter a Character: "); char ch = scan.next().charAt(0); System.out.println("\nYou've entered: " +ch); } }
The sample run with user input Y is shown in the snapshot given below:
In above program, the next() method is used to receive string input, whereas the charAt() method is used to get the character available at any particular index specified using its parameter. Therefore, charAt(0) scans the very first character or the character available at 0th index of the string. Therefore, if you enter any string like codescracker on the sample run of above program, then the first character, that is c will get initialized to ch. Here is its sample run with user input codescracker:
Get String Input from User in Java
To get string input from user in Java, we have following two methods:
- nextLine()
The next() method is used to scan a single word, name, or any string without space(s). Whereas the nextLine()
method is used, when we need to scan and receive the whole string with or without spaces, typed before pressing the
ENTER key. Let's create the program for both the methods.
Get String Input in Java - Without Space
This program uses next() method to scan a word or string without spaces.
import java.util.Scanner; public class CodesCracker { public static void main(String[] args) { Scanner scan = new Scanner(System.in); System.out.print("Enter a String: "); String str = scan.next(); System.out.println("\nYou've entered: " +str); } }
Here is its sample run with user input codescracker:
Here is another sample run with user input codescracker dot com:
Get String Input in Java - With Spaces
To receive the complete string with spaces, typed in a line, use nextLine() instead of next(). Rest of all the codes remains same as of previous program.
Here is its sample run, when you use nextLine(), while receiving the string input from user:
Get Multiple Inputs from User in Java
This is the last program of this article, created to show you, how you can get multiple inputs in Java language.
import java.util.Scanner; public class CodesCracker { public static void main(String[] args) { Scanner scan = new Scanner(System.in); System.out.print("How many numbers to enter ? "); int n = scan.nextInt(); int[] arr = new int[n]; System.out.print("Enter " +n+ " Numbers: "); for(int i=0; i<n; i++) arr[i] = scan.nextInt(); System.out.println("\nYou've entered: "); for(int i=0; i<n; i++) System.out.print(arr[i]+ " "); } }
Here is its sample run with user input 5 as size, and 23, 34, 45, 56, 67 as five numbers:
Same Program in Other Languages
« Previous Program Next Program » | https://codescracker.com/java/program/java-program-take-input-from-user.htm | CC-MAIN-2022-21 | refinedweb | 1,026 | 51.78 |
1 /* Soot - a J*va Optimization Framework2 * Copyright (C) 1999 Patrice Pominville,-2003. 22 * See the 'credits' file distributed with Soot for the complete list of23 * contributors. (Soot is distributed at)24 */25 26 27 package soot.toolkits.graph;28 29 import java.util.*;30 import java.io.*;31 import soot.*;32 import soot.jimple.Stmt;33 import soot.baf.Inst;34 35 36 /**37 * A CFG where the nodes are {@link Block} instances, and where 38 * {@link Unit}s which include array references start new blocks.39 * Exceptional control flow is ignored, so40 * the graph will be a forest where each exception handler 41 * constitutes a disjoint subgraph.42 */43 public class ArrayRefBlockGraph extends BlockGraph 44 {45 /**46 * <p>Constructs an {@link ArrayRefBlockGraph} from the given47 * {@link Body}.</p>48 *49 * <p> Note that this constructor builds a {@link50 * BriefUnitGraph} internally when splitting <tt>body</tt>'s51 * {@link Unit}s into {@link Block}s. Callers who need both a52 * {@link BriefUnitGraph} and an {@link ArrayRefBlockGraph}53 * should use the constructor taking the <tt>BriefUnitGraph</tt> as a54 * parameter, as a minor optimization.</p>55 *56 * @param the Body instance from which the graph is built.57 */58 public ArrayRefBlockGraph(Body body)59 {60 this(new BriefUnitGraph(body));61 }62 63 64 /**65 * Constructs an <tt>ArrayRefBlockGraph</tt> corresponding to the66 * <tt>Unit</tt>-level control flow represented by the 67 * passed {@link BriefUnitGraph}. 68 *69 * @param unitGraph The <tt>BriefUnitGraph</tt> for which70 * to build an <tt>ArrayRefBlockGraph</tt>.71 */72 public ArrayRefBlockGraph(BriefUnitGraph unitGraph)73 {74 super(unitGraph);75 76 soot.util.PhaseDumper.v().dumpGraph(this, mBody);77 }78 79 80 /**81 * <p>Utility method for computing the basic block leaders for a82 * {@link Body}, given its {@link UnitGraph} (i.e., the83 * instructions which begin new basic blocks).</p>84 *85 * <p> This implementation chooses as block leaders all86 * the <tt>Unit</tt>s that {@link BlockGraph.computerLeaders()},87 * and adds:88 *89 * <ul>90 *91 * <li>All <tt>Unit</tt>s which contain an array reference, as 92 * defined by {@link Stmt.containsArrayRef()} and 93 * {@link Inst.containsArrayRef()}.94 *95 * <li>The first <tt>Unit</tt> not covered by each {@link Trap} (i.e.,96 * the <tt>Unit</tt> returned by {@link Trap.getLastUnit()}.</li>97 *98 * </ul></p>99 *100 * @param unitGraph is the <tt>Unit</tt>-level CFG which is to be split101 * into basic blocks.102 *103 * @return the {@link Set} of {@link Unit}s in <tt>unitGraph</tt> which104 * are block leaders.105 */106 protected Set computeLeaders(UnitGraph unitGraph) {107 Body body = unitGraph.getBody();108 if (body != mBody) {109 throw new RuntimeException ("ArrayRefBlockGraph.computeLeaders() called with a UnitGraph that doesn't match its mBody.");110 }111 Set leaders = super.computeLeaders(unitGraph);112 113 for (Iterator it = body.getUnits().iterator(); it.hasNext(); ) {114 Unit unit = (Unit) it.next();115 if (((unit instanceof Stmt) && ((Stmt) unit).containsArrayRef()) ||116 ((unit instanceof Inst) && ((Inst) unit).containsArrayRef())) {117 leaders.add(unit);118 }119 }120 return leaders;121 }122 }123 124 125
Java API By Example, From Geeks To Geeks. | Our Blog | Conditions of Use | About Us_ | | http://kickjava.com/src/soot/toolkits/graph/ArrayRefBlockGraph.java.htm | CC-MAIN-2016-50 | refinedweb | 518 | 52.26 |
.
The IPRoute class is a 1-to-1 RTNL mapping. There are no implicit interface lookups and so on.
Some examples:
from socket import AF_INET from pyroute2 import IPRoute from pyroute2 import IPRouteRequest from pyroute2.common import AF_MPLS # get access to the netlink socket ip = IPRoute() # print interfaces print(ip.get_links()) # create VETH pair ip.link_create(ifname='v0p0', peer='v0p1', kind='veth') # lookup the interface and add an address idx = ip.link_lookup(ifname='v0p0')[0] ip.addr('add', index=idx, address='10.0.0.1', broadcast='10.0.0.255', prefixlen=24) # create a route with metrics req = IPRouteRequest({'dst': '172.16.0.0/24', 'gateway': '10.0.0.10', 'metrics': {'mtu': 1400, 'hoplimit': 16}}) ip.route('add', **req) # create a MPLS route (requires kernel >= 4.1.4) # $ sudo modprobe mpls_router # $ sudo sysctl -w net.mpls.platform_labels=1000 req = IPRouteRequest({'family': AF_MPLS, 'via': {'family': AF_INET, 'add': '172.16.0.10'}, 'newdst': {'label': 0x20, 'bos': 1}}) ip.route('add', **req) #'). | https://pypi.org/project/pyroute2/0.3.12/ | CC-MAIN-2022-27 | refinedweb | 159 | 71.82 |
#include <opencv2/imgcodecs.hpp>
Imread flags.
#include <opencv2/imgcodecs.hpp>
Imwrite flags.
#include <opencv2/imgcodecs.hpp>
Imwrite PAM specific tupletype flags used to define the 'TUPETYPE' field of a PAM file.
#include <opencv2/imgcodecs.hpp>
Imwrite PNG specific flags used to tune the compression algorithm.
These flags will be modify the way of PNG image compression and will be passed to the underlying zlib processing stage.
#include .
#include <opencv2/imgcodecs.hpp>
Saves an image to a specified file.
The function imwrite saves the image to the specified file. The image format is chosen based on the filename extension (see cv::imread for the list of extensions). In general, only 8-bit single-channel or 3-channel (with 'BGR' channel order) images can be saved using this function, with these exceptions: and save it to a PNG file. It also demonstrates how to set custom compression parameters: | https://docs.opencv.org/3.4/d4/da8/group__imgcodecs.html | CC-MAIN-2019-22 | refinedweb | 146 | 58.08 |
How to: Configure AD FS 2.0 as an Identity Provider
Published: April 7, 2011
Updated: February 28, 2013
Applies To: Windows Azure
Applies To
- Windows Azure Active Directory Access Control (also known as Access Control Service or ACS)
- Active Directory® Federation Services 2.0
Summary
This How To describes how to configure Active Directory Federation Services (AD FS) 2.0 as an identity provider. Configuring AD FS 2.0 as identity provider for your ASP.NET web application will allow your users to authenticate to your ASP.NET web application by logging on to their corporate account managed by Active Directory.
Contents
- Objectives
- Overview
-
Objectives
- Configuring trust between ACS and AD FS 2.0.
- Improving security of token and metadata exchange.
Overview
Configuring AD FS 2.0 as the identity provider enables reusing existing accounts managed by corporate Active Directory for authentication. It eliminates the need for either building complex account synchronization mechanisms or developing custom code that performs the tasks of accepting end user credentials, validating them against the credentials store, and managing the identities. Integrating ACS and AD FS 2.0 is accomplished by configuration only—no custom code is needed.
Step 1 - Add AD FS 2.0 as an Identity Provider in the ACS Management Portal
This step adds AD FS 2.0 as an identity provider in the ACS Management Portal.
To add AD FS 2.0 as an identity provider in the Access Control namespace
In the ACS Management Portal main page, click Identity Providers.
Click Add Identity Provider.
Next to Microsoft Active Directory Federation Services 2.0, click Add.
In the Display name field, enter a display name for this identity provider. Note that this name will appear both in the ACS Management Portal and by default on the login pages for your applications.
In the WS-Federation metadata field, enter the URL to the metadata document for your AD FS 2.0 instance, or use the File option to upload a local copy of the metadata document. When using a URL, the URL path to the metadata document can be found in the Service\Endpoints section of the AD FS 2.0 Management Console. The next two steps deal with login page options for your relying party applications; they are optional and can be skipped.
If you want to edit the text that is displayed for this identity provider on the login pages for your applications, enter the desired text in the Login link text field.
If you want to display an image for this identity provider on the login pages for your applications, enter a URL to an image file in the Image URL field. Ideally, this image file should be hosted at a trusted site (using HTTPS, if possible, to prevent browser security warnings), and you should have permission from your AD FS 2.0 partner to display this image. See help on Login Pages and Home Realm Discovery for additional guidance on login page settings.
If you want to prompt users to log on using their email address instead of clicking a link, then enter the email domain suffixes that you want to associate with this identity provider in the Email domain name(s) field. For example, if the identity provider hosts user accounts whose email addresses end with @contoso.com, then enter contoso.com. Use semicolons to separate the list of suffixes (for example, contoso.com; fabrikam.com). See help on Login Pages and Home Realm Discovery for additional guidance on login page settings.
In the Relying party applications field, select any existing relying party applications that you want to associate with this identity provider. This causes the identity provider to appear on the login page for that application and it enables claims to be delivered from the identity provider to the application. Note that rules still need to be added to the application's rule group that define which claims to deliver.
Click Save.
Step 2 - Add a Certificate to ACS for Decrypting Tokens Received From AD FS 2.0 in the ACS Management Portal (Optional)
This step adds and configures a certificate for decrypting tokens that are received from AD FS 2.0. This is an optional step that helps in strengthening security. Specifically, it helps in protecting the token’s contents from being viewed and tampered with.
To add a certificate to the Access Control namespace for decrypting tokens received from AD FS 2.0 (optional)
Navigate to ().
If you were not authenticated using Windows Live ID (Microsoft account), you will be required to do so.
After being authenticated with your Windows Live ID (Microsoft account), you are redirected to the My Projects page on the Windows Azure portal.
Click the desired project name on the My Project page.
On the Project:<<your project name>> page, click the Access Control link next to the desired namespace.
On the Access Control Settings: <<your namespace>> page, click the Manage Access Control link.
On the ACS Management Portal main page, click Certificates and Keys.
Click Add Token Decryption Certificate.
In the Name field, enter a display name for the certificate.
In the Certificate field, browse for the X.509 certificate with a private key (.pfx file) for this Access Control namespace, and then enter the password for the .pfx file in the Password field. If you do not have a certificate, then follow the on-screen instructions to generate one, or see help on Certificates and Keys for additional guidance on obtaining a certificate.
Click Save.
Step 3 - Add Your Access Control namespace as a Relying Party in AD FS 2.0
This step helps to configure ACS as a relying party in AD FS 2.0.
To add the Access Control namespace as a relying party in AD FS 2.0
In the AD FS 2.0 Management console, click AD FS 2.0, and then, in the Actions pane, click Add Relying Party Trust to start the Add Relying Party Trust Wizard.
On the Welcome page, click Start.
On the Select Data Source page, click Import data about the relying party published online or on a local network, type the name of your Access Control namespace, and then click Next.
On the Specify Display Name page, enter a display name, and then click Next.
On the Choose Issuance Authorization Rules page, click Permit all users to access this Relying Party, and then click Next.
On the Ready to Add Trust page, review the relying party trust settings, and then click Next to save the configuration.
On the Finish page, click Close to exit the wizard. This also opens the Edit Claim Rules for WIF Sample App properties page. Leave this dialog box open, and then go to the next procedure.
Step 4 - Add Claim Rules for the Access Control namespace in AD FS 2.0
This step configures claims rules in AD FS 2.0. This way, you ensure that the desired claims are passed from AD FS 2.0 to ACS.
To add claim rules for the Access Control namespace in AD FS 2.0
On the Edit Claim Rules properties page, on the Issuance Transform Rules tab, click Add Rule to start the Add Transform Claim Rule Wizard.
On the Select Rule Template page, under Claim rule template, click Pass Through or Filter an Incoming Claim on the menu, and then click Next.
On the Configure Rule page, in Claim rule name, type a display name for the rule.
In the Incoming claim type drop-down list, select the identity claim type you want to pass through to the application, and then click Finish.
Click OK to close the property page and save the changes to the relying party trust.
Repeat steps 1-5 for each claim that you want to issue from AD FS 2.0 to your Access Control namespace.
Click OK. | http://msdn.microsoft.com/en-us/library/windowsazure/gg185961.aspx | CC-MAIN-2013-20 | refinedweb | 1,310 | 65.32 |
Helpful information and examples on how to use SQL Server Integration Services.
Today’s post is by Sergio Clemente Filho – a developer on the SQL Server Integration Services team.
--------------------------------------
One of the first new things you will notice in the solution explorer when you create a new SSIS project (opening existing SQL Server 2008 R2 or previous versions will not show this node, unless you convert the project to “Project Deployment Model”) is the “Connection Managers” node (See Figure 1). This is a new feature in Denali that allows sharing connection managers across multiple packages.
Figure 1 - Solution explorer
To create a project connection manager, right click on the “Connection Managers” node and click on the “New Connection Manager” option (as seen in Figure 2).
Figure 2 - Creating new project connection manager
This will prompt an existing familiar dialog to choose the connection manager type, then the connection manager information as it can be shown in figures Figure 3 and Figure 4 respectively.
Figure 3 - Select connection manager type
Figure 4 - Configuring connection manager
After the project connection manager is created, it will automatically appear in both solution explorer and connection manager list view as it can be shown on Figure 5. Currently project connection managers are being shown in bold but this might change before RTM.
Figure 5 - After creation
Once the project connection manager is created, it becomes available for being used similar to how package connection managers are used. An example is given below with an Execute SQL Task in Figure 6:
Figure 6 - Using project connection managers in SQL Task
The package should successfully run as shown in Figure 7 .
Figure 7 - Running in BIDS
Project connection managers can me demoted to package connection managers as can be shown below in Figure 8. Once a project connection manager gets demoted all other packages that use this project connection will have their reference broken.
Figure 8 - Demoting a project connection
You can also promote a package connection back to a project connection manager by right clicking on the package connection and choosing the option “Convert to Project Connection”
Note: Is worth noting that all operations on project connection managers do not participate in the undo transaction. This is true for creation, deletion, editing, promotion and demotion of project connection managers. This is unfortunately a by design behavior because undo cannot span across different documents.
Note: Is worth noting that all operations on project connection managers do not participate in the undo transaction. This is true for creation, deletion, editing, promotion and demotion of project connection managers. This is unfortunately a by design behavior because undo cannot span across different documents.
Let’s now see how to use project connection managers programmatically. Table 1 shows the code to create a project connection manager and access the newly created connection from the package Connections collection.
· Line 8: Creates a project
· Line 9: Creates an OLEDB project connection with the stream name “Connection.conmgr”. The two arguments of the ConnectionManagerItems.Add are explained below:
o Creation name: The connection type of the connection manager, examples are: ADO, ADO.NET, FILE, FLATFILE, HTTP, etc. This is the identical creation name used in Connections.Add ()
o Stream name: An unique file name that ends with the suffix “.conmgr”. The name cannot have more than 128 characters.
· Line 10: Sets the name of the underlying runtime object. cmi.ConnectionManager is a reference to a ConnectionManager object ()
· Line 12-14: Creates a package and adds to the project
· Line 16: Accesses the project connection manager from the package connections. One thing worth noticing is that the project connection managers will automatically appear in the Package.Connections connections. This is why it automatically appeared in the existing UIs without any effort.
Table 1 - Creating SCM programmatically.
One important concept of project connection managers is that the same object is shared across all packages. This allows caching the information and reuse in multiple packages which will improve performance. For the next example I will quickly show how a cache connection manager can be used to share information across two child packages.
Imagine I have the following parent package as it can be seen in Figure 9:
- Contains a data flow that populates a cache connection manager that is at project scope.
- Executes two child packages (Child1, Child2)
Figure 9 - Parent Package
The data flow of the parent it’s pretty straightforward and it’s shown in Figure 10. The OLE DB Source retrieves all columns from the table Person from AdventureWorks database and the cache connection managers will contain all columns and will index FirstName and LastName with indexes 1 and 2 respectively.
Figure 10 - Data flow that populates the cache
Figure 11 - Cache connection manager
Once this is done, the child packages can reference the project connection manager named “Shared CCM” and use them.
Figure 12 - Child package 1
In the lookup transform, make sure to select “Cache connection manager” as the connection type as it can be seen in Figure 13 and select the connection “Shared CCM” manager in the “Connections” tab as it can be seen in Figure 14.
Figure 13 - Cache connection manager UI 1
Figure 14 - Cache connection manager UI 2
Hope that was a useful overview of project connection managers, we saw how to create the project connection manager from BIDS and from API. We saw that project connection managers will show automatically in existing UIs (Unless the name collide with a package connection) so you can use them as it was a normal connection manager. We also saw a more advanced example where the project connection manager was used in order to fetch the information only once through the cache connection manager.
· Expressions are not supported on project connection managers. BIDS will hide the expressions option in the property grid.
· Logging might not always work. There might be scenarios where if you log on a project connection manager the logging won’t appear in the package logging
· If you try to click on “Parse Query” on SQLTask you will get a “Specified cast is not valid”
These existing issues should be addressed before RTM
Excellent article.
Will this undo feature in project connection available in RTM?
This is one of the most anticipated enhancement that I like. Kudos to the SSIS Team.
Do project level shared connection managers support dynamic configurations? For example, we might want to share a connection across our packages but point to a certain server for development and a certain server for production.
I am using Microsoft.SQlServer.management.IntegreationServices namespace to programmatically run my SSIS 2012 packages. Everything seems to work great but I have a requirement to dynamically set my connection strings at runtime. I know there are easy ways to do this using Microsoft.SqlServer.Dts.Runtime because there is a Connections property on the Microsoft.SqlServer.Dts.Runtime.Package class. There seems to be no similar equivalent in the Microsoft.SQlServer.management.IntegreationServices. Any ideas how I could set connection strings using the new Managed Object model? I am executing packages that live in the Catalog.
Hi very good.
Could you please let me know how to create connection manager in project level on SSIS2008. | http://blogs.msdn.com/b/mattm/archive/2011/07/19/project-connection-managers.aspx | CC-MAIN-2015-18 | refinedweb | 1,204 | 52.19 |
In the next few posts I'll roll out a little project for Excel that uses Open XML and LINQ. The scenario for this little Office solution has to do with my massive collection of music. I've collected over 1000 albums of music both from CDs and from purchases online.
[SideNote: Purchasing music legally is so easy and so cheap. Come on, everybody, let's get legal!]
Well- I wanted a way to query, sort, and work with my music inventory in more flexible ways than the Zune software will allow. The Zune application is really just about the listening experience. It's not intended to be a customizable database application. So, I decided to export my music inventory to Excel. This would allow me to see, sort, and work with my music in the rich goodness that is Excel. Here you see an image of my band list in Excel, and I can sort by artist, album, or song:
While Excel is great for sorting and working with the data in many ways, but imagine that I have plans for using the data that go beyond just working in Excel (and imagine that I also needed some practice working with Open XML!). To fulfill those additional requirements, I decided to create a Winform app. I'll wire the app up to the Excel spreadsheet via Open XML. Here you can see the application running:
I'll spare you the details of how I do the export of my music library. Suffice it to say that I use System.IO to recursively roll through the music library directories and create XMLElements for each of the artists, albums, and songs. Once that is complete, it's an XML file that I can open in Excel 2010. My application relies heavily on the LtxOpenXML Namespace that features a number of extension classes (big thanks to Eric for all of his posts on Open XML). These classes greatly simplify the amount of code you need to write to walk and search through spreadsheets in Open XML. I could have written all of the code to talk to the TableRows in the spreadsheet, but the LtxOpenXML classes make it so much easier.
Read Eric White's blog post to find out more about this.
Alright-so here's how it works. First, I have a procedure that picks up the search term from a text box on the form. It also checks to see what kind of search is being performed (Band, Album, or Song). The procedure is nearly identical for all three types of searches. I'll show just the album search.
public void QuerySimpleTable(searchType sType, string searchTerm) { using (SpreadsheetDocument spreadsheet = SpreadsheetDocument.Open(filename1, false)) { // search for songs that match search query criteria if (sType == searchType.Album) { var r = from c in spreadsheet.Table("Bands").TableRows() where (string)c["Album"] == searchTerm select new BandRecord() { Band = (string)c["Band"], Album = (string)c["Album"], Song = (string)c["Song"] }; ListResults(r); } } }
As you see, the code does the following:
1) Opens the spreadsheet using the DocumentFormat.OpenXml.Packaging API.
2) Uses LINQ to create a new query that targets the "Bands" table defined in the spreadsheet and brings back the tables rows matching the search criteria.
3) Associates the search results with an instance of the BandRecord class.
4) Calls a procedure to list all of the results into the form's listbox.
The BandRecord class looks like this:
public class BandRecord { public string Band { get; set; } public string Album { get; set; } public string Song { get; set; } }
The ListResults procedure simply loops through the list of BandRecord instances and adds the songs for the band, album, or song search into the Listbox.
public void ListResults(IEnumerable<BandRecord> list) { foreach (var z in list) try { listBox1.Items.Add(z.Song); } catch (ArgumentNullException ex) { continue; } int i = listBox1.Items.Count; itemCount.Text =i.ToString(); }
You can see that the code is fairly streamlined, and that's a good thing. Now that the basic plumbing is working, what remains is to add features to the application and make it more useful. Things that are very much needed:
1) Making the search case-insensitive. Right now the query works only if the cases match.
2) Making the search look for any part of the term-basically a CONTAINS keyword search.
3) Provide a better view of the search results. For example, including the track numbers is not really necessary.
4) Extending the application to do more meaningful things.
Those are things that I will cover in subsequent posts.
Rock Thought of the Day: The Stars Are Projectors by Modest Mouse
This song is getting a lot of rotation on my Zune player right now even though the album has been out a long time. The dissonance in the first part of the song is so beautiful. Out of the lush clashing of sounds emerges a mix of colors and textures that I believe cannot be found any other way. Give it a listen-- "all the stars are projects, yeah, projecting our lives down to this planet earth."
Rock On | https://blogs.msdn.microsoft.com/johnrdurant/2010/02/19/excel-open-xml-linq-part-i/ | CC-MAIN-2018-30 | refinedweb | 851 | 71.65 |
Verifying Your Shopify Webhooks in Serverless Functions
Raw Body Needed
In my first approach setting this up for a client, I've come across this nasty pitfall: To generate the correct HMAC digest out of the webhook request body and the secret key, we need the requests raw body. Next.js gives you the option to deactivate the default body parser. This requires that we handle the
http.IncomingMessage Stream ourselves and extract the requests
Buffer from within. We will use a library called
raw-body to do exactly this job for us.
api/sync.jsapi/sync.js
import getRawBody from 'raw-body' export default async function (req, res) { // We need to await the Stream to receive the complete body Buffer const body = await getRawBody(req) // ... } // We turn off the default bodyParser provided by Next.js export const config = { api: { bodyParser: false, }, }
Creating the HMAC Digest
Now that we got the raw body
Buffer stored in a variable, we digest the raw body into a hmac hash using our secret key and compare it with the
X-Shopify-Hmac-SHA256 header string in the end. Since the HMAC header is base64 encoded, we need to encode our digest as well. If the digest and the header are equal, the webhook can be deemed valid.
To create a hmac digest, Node provides us with the
crypto module and its
crypto.createHmac() method. We import the module into our API route and feed in the data we prepared in the previous steps. For more information on the crypto module, have a look at the official documentation.
api/sync.jsapi/sync.js
import getRawBody from 'raw-body' import crypto from "crypto" export default async function (req, res) { // We need to await the Stream to receive the complete body Buffer const body = await getRawBody(req) // Get the header from the request const hmacHeader = req.headers['x-shopify-hmac-sha256'] // Digest the data into a hmac hash const digest = crypto .createHmac('sha256', process.env.SHOPIFY_SECRET) .update(body) .digest('base64') // Compare the result with the header if (digest === hmacHeader) { // VALID - continue with your tasks res.status(200).end() } else { // INVALID - Respond with 401 Unauthorized res.status(401).end() } } // We turn off the default bodyParser provided by Next.js export const config = { api: { bodyParser: false, }, }
Wrapping Up
Keeping the data transfers concerning pricing and inventory safe is certainly a critical feature to headless e-commerce in general. With just a few lines of code you're able to leverage the validation built into Shopify webhook system in your serverless functions. | https://johnschmidt.de/blog/verifying-your-shopify-webhooks-in-serverless-functions | CC-MAIN-2021-21 | refinedweb | 423 | 51.99 |
>? It's quite okay to ask such questions here, but you should not assume to be the first to run into these problems:-) Have a search through, especially the bookshelf, the mailing-list archives for haskell and haskell-cafe, and the wiki: questions & answers (which is similar to a list of frequently asked questions, without the ordering implied by a list..). This particular question has also been raised several times on comp.lang.functional, so you can find a lot of related discussion in Google's UseNet search. You won't find all the answers you need there, and even if your specific questions have been answered there, the answers may not be helpful to you. Then, ask here, and say what resources you've tried and why they didn't help you. That way, the resources at can be improved every time the questions are asked on this list. > ...what's surprising me is that do I really have to turn everything into > an IO action like this just to do things with the String hidden in the IO > String? Part of the answer can already be found in the wiki at, but as you say you've tried some monad tutorials, here goes another longish explanation attempt: There is no String hidden in an IO String (at least, there need not be one). If you have a function f :: String -> String, there need not be a String hidden in f -- a call to f could just give you back something constructed from its parameter. So f promises a String when passed a String, and the only way to get at that result String is by applying f to a String. If you have i :: IO String, the situation is similar: - f is a function with an explicit parameter and calling this function returns a String - i is an IO-action, with implicit access to an IO-environment, and executing it may do things to the IO-environment, and will produce a String In both cases, you've got a promise of a String, but not necessarily a String. The difference is that f only has access to its definition and its parameter, and it only returns a String, so you can use it in any context that supplies a String parameter and expects a String result. In contrast, i also wants access to an IO-environment, and it returns a String and may modify the IO-environment, so you can use it in any context that supplies access to an IO-environment and expects a String and (potential) changes to that IO-environment. With this background, your question is easily answered: an IO String action only promises a String, and to get that String you have to execute the action in an IO-environment. You can't do that inside an expression that isn't of type IO something, because expressions that are not of that type shouldn't have access to an IO-environment (they may pass IO actions around, but they can't execute them). So, you don't need to convert everything to an IO action to do something with the String, but you need to be able to execute the IO action that promises the String. And you can't embed that IO action in a non-IO expression, so your overall programm will be an IO action:-( However, you can embed functional expressions in an IO action:-) And that's just a complicated way to describe what you've already discovered: f :: String -> Char {- no IO here, and f could be an arbitrarily complex functional computation -} f s = head s i :: IO String {- if given access to an IO-environment, this should produce a String -} i = readFile "input" (i >>= (\s-> return (f s))) >>= putChar {- or: do { s <- i; c<- return (f s); putChar c } or: do { s <- i; putChar (f s) } -} The "return" embeds an arbitrary expression into an IO-action that does not access its IO-environment (as far as that environment is concerned, it is a null action). And the "s<- i; return (f s)" part binds s to the String returned by i *and* it composes the effects that i and "return (f s)" might have on the IO-environment. That's why you can't simply use a let-binding instead of the monadic binding: let doesn't know about that extra IO-environment. Or, in monad-speak: function application and let-binding take place in the identity monad (the monad which doesn't add anything extra). IO actions and their bindings take place in the IO monad (the monad that adds access to IO-environments to functional computations). In contrast to other monads, such as List, MayBe, .., you won't find an operation of type M a -> a if M is the IO monad (guess what, you will, but it's unsafe;-). The reason is that other agents observe the IO-environment, so changes to it won't go unnoticed (you can throw away the evidence that you really had a list of results instead of just a single result, but you can't throw away the evidence that you've reached outside your functional program..). This brings us back to your idea of "tainting": not the Strings themselves are tainted (they are as pure as anything else), the computation that produces the String is tainted if it needs IO to produce a String. For "untainted", purely functional String computations, there is no difference (apart from resource usage) between the computations and the Strings they produce, but for "IO-tainted" computations, there is such a difference. Anyone still reading?-) If yes, and if it should have been helpful, perhaps someone could condense this and add it to the wiki? Claus PS. Once Upon A Long Ago, I tried to put some of the various functional IO schemes into a logical development (a kind of "design proof" or "design derivation").. a bit dated, and not necessarily helpful to those currently struggling with IO in Haskell, but perhaps of historical interest?-) Those who like that kind of thing can find it in chapter 3 of someone's thesis: | http://www.haskell.org/pipermail/haskell-cafe/2001-August/002109.html | CC-MAIN-2014-35 | refinedweb | 1,028 | 56.42 |
Wikiversity:Subpages/Forking and organizing
This is a collaborative essay about how Wikiversity might better employ the Subpage structure. The title represents the two different reasons why subpages are used.
Organization[edit]
- As an example of "organizing", consider a content-style resource and add a subpage for a quiz. Then a third level might be added for a Quizbank compatible version. See, for example Venus/Quiz/Quizbank.
- A study may become deep, with many subtopics. If this is a coherent study, it can be done with a top-level page (which might or might not be top-level in mainspace), and subpages, and subpages of subpages, etc. Using subpage links, the entire structure becomes portable.
Forking[edit]
- At the other extreme we have "forking", which I think of as the splitting of two incompatible resources on the same subject. There is no one "right" way to teach a subject. Wikiversity needs to host a variety of "good" was to teach the same subject. A student may pick which resource to use, may may use both, or may create their own study.
- It has been said that multiple resources on the same topic is "confusing." Deep education must confront this. Out of the mud grows the lotus, out of confusion comes fusion, integration, knowledge. Avoiding confusion avoids depth.
- Forking has been used to avoid conflict and allow free exploration and expression, while maintaining overall neutrality. The forked pages are linked from a top-level page, neutrally. The top-level page enjoys, typically, 100% consensus. The subpages, the "forks" or "sections" or "essays" may include opinion, point of view, original research, and need not be verifiable or neutral.
Other considerations?[edit]
- An opinion: There is another reason for subpage structures, that upon reflection, is false. Unlike Wikipedia, Wikiversity and Wikibooks strives to build educational materials. This is a much more ambitious endeavor because multiple resources are required for each resource, especially in mathematical fields where it is essential to present ideas at the correct level of sophistication. But even the social sciences require different and incompatible viewpoints to be presented. Wikiversity needs parallel resources, "forking", as described above. So with so many different resources all on the same and related topics, would it be a good idea to organize the titles? The answer is no, because an idea that is impossible to realize is a bad idea.
However, "impossible" is typically an interpretation not based on experience. It might be beyond the imagination of the person, who has never seen a counterexample. It might be difficult. parallel resources implies side-by-side, neutrally presented. Differing page names do not accomplish this. What does is a top-level name as a neutral "node." That can be done with distinct page names, all at the top level, but the organization is far less transparent and neutral access is not guaranteed. It can be done, however, the logistics, what it takes to maintain structures like this, is a far greater burden. A parallel structure using subpages takes minutes to set up, if that.
- It is hard enough to create different approaches to teaching the same subject.
It is trivially easy, if multiple users are involved in the creation. Sometimes a single user can create multiple approaches. If there are two professors of physics who want to work on a single topic, they may do so by agreement, or they may develop "sections." Or some combination of that, say a top-level resource that enjoys consensus (not only theirs but that of the community), and then essay or section pages which they individually control.
The concept given is focused on "teaching," and the original essayist here is a professor. However, Wikiversity has a major focus on "learning by doing," where people who want to study a subject create resources. This is, itself, a whole pedagogical movement, quite successful, Self-directed learning, which then will use many resources (including teachers).
- Organizing and cataloging them into schools, topics, and so forth is nearly impossible.
Again, the impossibility argument. The difficulty is only finding people interested in organization. The actual organization is not generally difficult. It has been suggested that some standard epistemology or classification of knowledge be used, with, then, some exception or "miscellaneous" category. The goal of organization has never been to exclude content or to force users to only address "notable" topics or the like. It is simply to organize resources, and the effect is generally protective. Isolated resources, not clearly a part of some overall educational project, have often been deleted, even very recently.
- The attempt to organize has great value. But that attempt will never be more than a partial success.
Of course. The organization of knowledge is an educational task, and education is never complete.
However, Wikiversity is quite disorganized. There is plenty to do! The process is, in itself, educational. Organizing resources, one learns about many different subjects, if one actually reads sources, etc.
- Do we want to argue about which resource deserves to be called "Physics"? I don't think so.
That is correct. We don't. So who gets to use that name? The first one to use it? What has been discovered is that subpaging into sections completely resolves conflict, usually.
There was recent conflict over page names (written September 7, 2015). However, this was the first conflict over this issue seen in a long time, and the roots of that conflict are still being explored. The ultimate resolution may be some months or years away. In the case involved, forks were not appropriate because there was no forking issue, or very little content disagreement, there was only the issue of appropriate resource name. (A secondary issue was the inclusion of Wikipedia links, which is an aspect of neutrality.)
As is obvious, there is not necessarily a simple and easy solution that respects fully both the individual freedom of the user and the organization of the site. Organization, however makes it more accessible to users, and makes neutrality possible even with highly controversial topics, and it may not have been obvious, but the topic involved in the recent dispute is highly controversial by nature. It is, real-world, in-your-face disruptive. Ultimately, the Wikiversity community will find consensus. But this kind of conflict is very rare.
- Some attempt to organize namespace is good, but we can't get carried away with the attempt to get everything right. Nobody chooses bans a movie or book because its title does not fit in with other titles, and to some extent, the same tolerance should also be applied to namespace.
There is no issue of "ban" in the organizational process. Organization, per se, does not throw away recyclable material. It places it in a place where it may be used. Resources, placed in this way, may be developed with almost complete freedom. Our organizational process has become radically inclusive, much to the chagrin of a few, who have mostly gone home.
The comment is opposed to perfectionism, which is obvious, who isn't? It seems to think that organization and tolerance are opposed in some way.
However, tt's a simple idea that organization will never be perfect, and that there may be alternate ways of organization. In the recent dispute, the issue largely devolved around the categorization of a set of art movements as "Avant-garde." This was in no way a rigid proposal, inflexible. It was treated as such. As it happens, the author of the resources appears to be politically aligned or active with a movement that is classified, on wikipedia and by the academic community, as "avant-garde" but that may explicitly reject the "avant-garde," that is anarchist but that attacks anarchism and anarchists, etc. (Perhaps as a joke, and pranks are a big part of that movement.) The protest may be against formalism itself, an anti "ism" ism. So it's not a good case to use to develop or reject general organizational principles, which we already know how to apply in ways that almost always work well.
It was not always this way! WV:Requests for Deletion used to be a busy place, with high contention, plus many custodians routinely deleted resources against the consent of the creators. We basically killed that problem, almost entirely, but some have forgotten what that was like. --Abd (discuss • contribs) 23:41, 7 September 2015 (UTC) | https://en.wikiversity.org/wiki/Wikiversity:Subpages/Forking_and_organizing | CC-MAIN-2018-26 | refinedweb | 1,398 | 57.06 |
[[!meta title="Call for testing: 3.0~beta2"]]
[[!meta date="Mon, 08 Mar 2017 20:00:00 +0000"]]
[[!pagetemplate template="news.tmpl"]]
[[!tag announce]]
You can help Tails! The second beta for the upcoming version 3.0 is
out. We are very excited and cannot wait to hear what you think about
it :)
[[!toc levels=1]]
# What's new in 3.0~beta2?
Tails 3.0 will be the first version of Tails based on Debian 9
(Stretch). As such, it upgrades essentially all included software.
Other changes since Tails 3.0~beta1 include:
* All changes brought by [[Tails 2.11|news/version_2.11]].
* Upgrade to current Debian 9 (Stretch).
* Upgrade Linux to 4.9.0-2 (version 4.9.13-1).
* Make it possible to start graphical applications in the *Root Terminal*.
* Improve styling of the *GNOME Shell* window list.
Technical details of all the changes are listed in the
[Changelog]().
# How to test Tails 3.0~beta2?
**We will provide security updates for Tails 3.0~beta
<a href="#known_issues">known issue of this release</a> or a
[[longstanding known issue|support/known_issues]].
Download and install
--------------------
<a class="download-file use-mirror-pool" href="">Tails 3.0~beta2 ISO image</a>
<span class="openpgp-small-link">[[OpenPGP signature|torrents/files/tails-amd64-3.0~beta2.iso.sig]]</span>
To install 3.0~beta2, follow our usual
[[installation instructions|install]], skipping the **Download and
verify** step.
<a id="known_issues"></a>
Known issues in 3.0~beta2
=========================
* The documentation was not adjusted yet.
* The <span class="guilabel">Formats</span> settings chosen in Tails
Greeter have no effect ([[!tails_ticket 12079]]).
* There is no <span class="guilabel">Read-Only</span> feature for the
persistent volume anymore; it is not clear yet whether it will be
re-introduced in time for Tails 3.0 final ([[!tails_ticket 12093]]).
* If you use the *KeePassX* persistence feature, you need to manually
import your passwords database ([[!tails_ticket 10956]]).
* Some command-line programs (at least *Monkeysign*, *Git*, and
*wget*) display confusing error messages in the *Terminal*,
although they work fine: [[!tails_ticket 11736]],
[[!tails_ticket 12091]], [[!tails_ticket 12205]].
* *I2P* fails to start ([[!tails_ticket 12108]]). Note that *I2P*
will be removed in Tails 2.12 and 3.0.
* [Open tickets for Tails 3.0]()
* [[Longstanding known issues|support/known_issues]] | https://gitlab.com/Tails/tails/blame/f6a05b5c94bf771c83f87302425d0fba3668738a/wiki/src/news/test_3.0-beta2.mdwn | CC-MAIN-2019-35 | refinedweb | 378 | 54.39 |
DSDL definitions in their original form are hard to read by humans which impedes the adoption of UAVCAN and DS-015. Yet DSDL is sufficient for describing behaviors of a distributed computing system (DCS) without the need to resort to additional means of documentation (that would run the risk of divergence). It is therefore desirable to make DSDL specifications more approachable for humans without changing the language or specifications themselves.
To illustrate, suppose that you want to implement the servo network service as defined by the DS-015 standard. You go to the service definition file:
…whereat you see that to fully grasp what’s in there you need to do quite a bit of jumping around the files in the repo that are not even syntax-highlighted. This is a serious obstacle if you are just evaluating whether UAVCAN/DS-015 are the right solutions for you.
We, therefore, need to come up with a better presentation of DSDL definitions. The solution I propose is to define an additional target for Nunavut that yields HTML pages with documentation per DSDL root namespace. But before we get to that, there is one blocker to take care of:
Exposing comments in the AST constructed by PyDSDL
PyDSDL is the DSDL processing front-end used by Nunavut. It accepts a root namespace and yields a well-annotated AST based on that. Currently, PyDSDL discards comments, so we need to change this behavior:
The AST should be extended with two extra entities — composite type documentation and attribute documentation:
# This header comment is the documentation for this composite type. # It may span an arbitrary number of lines and is terminated by the first non-comment line. float64[4] foo # This is an attribute comment for field "foo" bool bar # This is an attribute comment for field "bar". # It spans multiple lines. # This comment is not attached to anything because it follows a blank line, so it is dropped. uavcan.primitive.Empty.1.0 baz # This is for "baz". # And this one is for "baz", too. --- # This comment is attached to the response section. void64 # This comment is for the padding field. int64 MATH_PI = 4 # This is the best known approximation of Pi.
The composite type documentation is to be exposed via new property
doc:str on
pydsdl.CompositeType. A similar property should be added to
pydsdl.Attribute.
Comments can be extracted from the source file by adding a new node handler
visit_comment() to the internal class
pydsdl.parser._ParseTreeProcessor.
The leading
# and the space after it (if present) should be removed.
Once this is done, we can proceed to the second part.
Emitting HTML using Nunavut
Proper templates provided, Nunavut can map a DSDL root namespace to a fully-static website (which may be contained in one or several HTML files, perhaps with additional files for styles, scripts, or other resources; in the interest of portability it might be better to bundle everything into one large file). It is important to rely on a web-compatible format because we can’t require the user to download any artifacts to be able to explore DSDL.
The view should be similar to a directory tree. Take the standard root namespace
uavcan:
- uavcan + diagnostic + file + internet + metatransport + node + pnp + primitive + register + si + time
The user clicks on a namespace and it expands in-place. The same goes for data type definitions, this is important:
- uavcan - diagnostic + Record.1.0 [fixed subject-ID 8184, extent 300 bytes] - Record.1.1 [fixed subject-ID 8184, extent 300 bytes] Generic human-readable text message for logging and displaying purposes. Generally, it should be published at the lowest priority level. + uavcan.time.SynchronizedTimestamp.1.0 timestamp Optional timestamp in the network-synchronized time system; zero if undefined. The timestamp value conveys the exact moment when the reported event took place. + Severity.1.0 severity uint8[<256] text Message text. Normally, messages should be kept as short as possible, especially those of high severity. + Severity.1.0 + file + internet + metatransport + node + pnp + primitive + register + si + time
The text should be syntax-highlighted but it does not need to replicate the source token-by-token (it is not even possible because the AST does not contain the required information). It is easier to re-generate the text by simply invoking
__str__() on each attribute and adding the docs around them:
>>> import pydsdl >>> composites = pydsdl.read_namespace('public_regulated_data_types/uavcan') >>> str(composites[1].attributes[2]) 'saturated uint8[<=112] text'
The user may click any attribute inside a composite type and it would expand in-place in the same manner. Another kind of click (with a modifier key like shift+click or using a dedicated button) should take the user directly to the definition of the attribute’s type instead of unfurling it in-place.
Hovering over a field, type, or namespace should display its contents along with key information like size but without doc comments in a quick pop-up.
PyDSDL provides the offset information per field; it should be displayed next to the field to simplify manual serialization and to keep the user aware of the data footprint.
Many doc comments contain references to other data types. They lack any special formatting but full data type names are sufficiently unique to unambiguously detect them in text as-is. For example:
Notice the reference to
reg.drone.physics.kinematics.translation.Velocity1VarTs. The version number is not given, which means that the latest one is implied (v0.1 in this case). Such references should be automatically highlighted as clickable links. There may also be links to namespaces (with or without the trailing
.*:
This fragment should take the user to the namespace
reg.drone.service.actuator.common.sp.
Due to the fact that Nunavut is unable to process more than one namespace at once, links to foreign root namespaces would necessarily navigate the user to a different generated site. If the generated site is compressed into a single HTML file the navigation would be trivial to implement since we know that an entity like
reg.anything can be reached via URI like
reg.html#anything.
There are special data type definitions that are used to document namespaces. They are named
_ (single low line), one is shown above. Such data types need not be shown in the output but instead, their contents should be expanded directly under the corresponding namespace entry.
I think it is sensible to interpret the text of doc comments as Markdown to allow data type developers to construct more appealing documentation. It would require fixing the formatting across the public regulated data types repository but it is no big deal.
@bbworld1 Would you like to work on this? This is very high-priority right now (above Yukon) because it is perceived to be an adoption blocker.
@scottdixon Did I miss anything important? | https://forum.uavcan.org/t/generate-rich-navigable-html-docs-using-nunavut/1163 | CC-MAIN-2021-25 | refinedweb | 1,138 | 54.02 |
Hi, I am reading from a text file and stored it onto a string variable. Now I want to split the string and I tried looking it up on the web and found the strtok () function but that only works on a char type.
Here is my code snippet:
#include <iostream> #include "Stack.h" #include <string.h> #include <fstream> using namespace std; int main() { Stack list; ifstream ReadFrom; string line; (getline(ReadFrom, line)); split = strtok(line.c_str(),",.( )") // this does not work.;
Can some one give me an idea on how to accomplish this. Note: variable line has to be string type unless there is another way to read whole line from a text file.
thanks. | https://www.daniweb.com/programming/software-development/threads/233008/splitting-a-string-using-strtok | CC-MAIN-2017-47 | refinedweb | 115 | 91.31 |
You may want to search:
500w 36v 15ah samsung lithium battery power electric scooter
US $400-500 / Piece
10 Pieces (Min. Order)
import electric scooters from china powerful 250w adults and kids electric scooters
US $180-350 / Piece
1 Piece (Min. Order)
battery power strong big wheel electric scooter
US $550-680 / Piece
5 Pieces (Min. Order)
HX Electric Bagpack Quadruple folding Eco-Friendly Lithium-Ion Powered Scooter
US $205-230 / Piece
10 Pieces (Min. Order)
1200W Big power Electric motorcycle/Electric scooter --BP8
US $650-700 / Piece
26 Pieces (Min. Order)
Wholesale 3 Wheel elderly folding electric gas power mobility scooter
US $348-485 / Set
1 Set (Min. Order)
2018 best selling powerful 1000w 60v citycoco 2 seat electric mobility scooter
US $280-666 / Piece
1 Piece (Min. Order)
20-40km Range Per Charge and Foldable Electric Drifting Scooter 30km/h solar powered Electrical Scooters with smart APP
US $180-198 / Piece
1 Piece (Min. Order)
2 Wheel Chinese Brand New Battery Powered Adult Electric Balancing Scooter
US $199-499 / Set
1 Set (Min. Order)
48v 20amp lead acid battery powered electric scooter charger for mobility scooter 3 wheel disabled
US $499-599 / Set
2 Sets (Min. Order)
2 Wheel Battery Cool Power Xiaomi Mi Home Smart Electric Scooter
US $265-298 / Piece
10 Pieces (Min. Order)
New cheap atv 200 cc motorcycle shredder gas powered electric scooter for sale
US $4950.0-4950.0 / Yard
1 Yard (Min. Order)
Yes Foldable and 201-500w Power foldable mini electric scooter
US $350-500 / Piece
1 Piece (Min. Order)
Lightweight 8.8Ah battery power motor foldable 2 wheel smart balance electric scooter
US $300-799 / Set
1 Set (Min. Order)
Citycoco / Seev / Woqu 2 Wheel 1000W Electric Powered Go Kart Scooter Ce / FCC / RoHS / UL
US $299-379 / Unit
1 Unit (Min. Order)
120w power mini 24v brushed electric kids scooter shanghai
US $50-70 / Unit
10 Units (Min. Order)
2 wheel green power electric scooter adults kick scooter
US $300.0-330.0 / Piece
1 Piece (Min. Order)
New Version Dual Motor 35 KM High Speed Strong Power Electric Scooter e Scooter
US $1-399 / Unit
10 Units (Min. Order)
TNE electric bike 2 wheels powered unicycle self balance scooter
US $1-800 / Piece
1 Piece (Min. Order)
36V Powerful Ultra-Light Weight Foldable Electric Scooter for with APP
US $239.0-249.0 / Piece
1 Piece (Min. Order)
Powerful High Speed lead-acid battery Citycoco 2000w electric scooter with EEC electric scooter
US $230.0-320.0 / Pieces
10 Pieces (Min. Order)
Shenzhen Best Unicool E Moto Free Shipping Green Power Light Weight Retro Electric Golf Scooter
US $300-388 / Unit
1 Unit (Min. Order)
Wholesale 2000w power big tyre harly electric scooter with 60v20ah lithium battery, citycoco scooter
US $320-420 / Unit
2 Units (Min. Order)
3 wheels Powered Smart Self Balance Electric Scooter drift scooter
US $50-65 / Piece
30 Pieces (Min. Order)
10inch 36V 48V 13AH 18AH light powerful suspension electric scooter with seat without seat
US $277.0-351.0 / Pieces
5 Pieces (Min. Order)
Battery power electric scooter mini cheap scooter electric self balancing scooter
US $180-220 / Set
1 Set (Min. Order)
2017 foldable battery power electric scooter korea
US $39-73 / Piece
100 Pieces (Min. Order)
Green power electric scooter 480w electric aguila ava scooter electric scooter cyprus
US $705.0-735.0 / Sets
100 Sets (Min. Order)
E-TWOW long range with full power Electric Scooter
100 Pieces (Min. Order)
four wheels folding mobility electric power scooter
US $410.0-460.0 / Pieces
10 Pieces (Min. Order)
2018 Popular 60V 2000w 35ah LG battery power electric scooter with front and rear disc brake
US $800.0-1300.0 / Piece
1 Piece (Min. Order)
5.5 inch 2 wheel foldable 250w motor power li-ion battery electric scooter
US $150-190 / Piece
100 Pieces (Min. Order)
Nzita 2018 Free Shipping New Model Big 2 Wheel 1000w Powerful Electric Scooter Wholesale
US $215-550 / Piece
1 Piece (Min. Order)
2018 Hiley Design big power scooter Maxforce standing electric scooter
US $1000-3000 / Piece
1 Piece (Min. Order)
2 Wheels Battery Power Kids 120W Electric Scooter
US $49.5-95 / Piece
1 Piece (Min. Order)
Cheapest mini power 120W Foldable Electric Scooters for kids from China
US $50-53 / Unit
100 Units (Min. Order)
Lithium Battery Electric Powered Blue Electric Scooter Price
US $550-625 / Unit
1 Unit (Min. Order)
2 wheel electric scooter 1 person electric scooter high power electric bike
US $278-298 / Piece
10 Pieces (Min. Order)
- About product and suppliers:
Alibaba.com offers 34,248 electric scooter powerful products. About 25% of these are electric scooters, 1% are physical therapy equipments. A wide variety of electric scooter powerful options are available to you, such as 60v, 48v, and 36v. You can also choose from ce, eec, and ccc. As well as from 501-1000w, 201-500w, and 1001-2000w. And whether electric scooter powerful is no, or yes. There are 34,200 electric scooter powerful suppliers, mainly located in Asia. The top supplying countries are China (Mainland), Taiwan, and South Korea, which supply 99%, 1%, and 1% of electric scooter powerful respectively. Electric scooter powerful products are most popular in North America, Western Europe, and Domestic Market. You can ensure product safety by selecting from certified suppliers, including 6,903 with ISO9001, 3,259 with Other, and 2,857 with ISO14001 certification.
Buying Request Hub
Haven't found the right supplier yet ? Let matching verified suppliers find you. Get Quotation NowFREE
Do you want to show electric scooter powerful or other products of your own company? Display your Products FREE now! | http://www.alibaba.com/showroom/electric-scooter-powerful.html | CC-MAIN-2018-17 | refinedweb | 942 | 66.03 |
06 May 2010 18:16 [Source: ICIS news]
LONDON (ICIS news)--The phenol and acetone market will remain highly volatile in the second half of 2010, Borealis’ chief financial officer (CFO), Daniel Shook, said on Thursday.
The CFO of the Austria-based polyolefin maker added that there was still very little visibility in the market and that it was hard to predict.
“Those markets, as they have proven over the last few years, can turn quite quickly and we have to keep an eye on it,” Shook said.
“Even so, the volumes in the phenol and the acetone business were up quite nicely compared to last year. There was also good margin improvement as well,” Shook added.
Meanwhile, Borealis announced that it had closed its phenol plant at its Porvoo facility in ?xml:namespace>
The phenol plant has a capacity of 195,000 tonne/year, according to the ICIS plants and projects database.
Borealis said that the turnaround was ongoing and that the plant would be started up again during the weekend.
At the end of April, market sources said there were serious concerns in the European phenol market over tight supply amid a declaration of force majeure, a stock allocation and planned turnarounds.
At the same time, the value of European acetone had hit the €1,000/tonne ($1,282/tonne) mark, almost doubling year on year because of supply issues.
( | http://www.icis.com/Articles/2010/05/06/9356978/phenol-acetone-market-volatile-and-hard-to-predict.html | CC-MAIN-2014-41 | refinedweb | 232 | 65.86 |
Blog | About | Contact | Submit Mod | Join Mod DB | Site Map | Media Kit | RSS
10 comments by Wariscool on May 13th, 2014
Hey there,
So other than posting pictures of models( which do look great I must say) I suspect some of you will be wondering what we have been up to. We have a couple of projects on the go at the moment but as I am sure most of you are aware it is that time of year when exams are taking place all over the world at various levels of degrees. Real life does of course come first which is why myself and others from the team are busy studying hard and not doing much else really... :D
However we have not been too idle and once this busy period has passed we should have a nice healthy composition of shiny things for you to sample. This is an exciting period for all of us and as the model production is probably about 80% complete we can focus on other aspects such as rigging, texturing and unique gameplay experience. We are however along way from release so any help is greatly appreciated. This can be in any form, such as a "well done!" to a comprehensive talk on why a problem exists and how to fix it.
Also something which has recently come to my attention is the small amount of people within this community that seem to have a problem with this mod just existing and not being released by now. We work at our own pace and when we see fit to do so. It is not a race nor a comparison with anyone else's mod or add-on. I fully understand that some Members have colourful history with other Members which is understandable but it should not be turned into petty insults or jibes at anyone's work, whatever it maybe. I did not join this project to fight and argue about petty things. If I wanted to do that I would go back to junior school!!
So please, grow up and get on with your lives.
Wariscool
The Shapers - YVaW
UPDATE:
Of course we are still alive lol??
Lots of coding work going on, textures also. We will announce more later but Smallpox has been helping us a bit, we have some wonderful surprises on the way.
Space pirates!
Mandalorian Prototype Fleet
Bonadan Defense Fleet
A special ghost story by kharcov and strobe....
Stay tuned ;)
Is it alive?
Last updated two days ago, so I'm going to say yes.
Awesome haha
I think having the game saying something requsition this ship instead building this ship or something makes more logical sense since technically speaking most fleets in wars are already built not just building up.
Will you be able to build the Republic class cruiser?
This is the KDY class cruiser, buildable at Kuat
I had a nightmare this mod came out and was only compatible with windows 2000.
Nah its Windows ME
Good to know! | http://www.moddb.com/mods/yuuzhan-vong-at-war/page/8 | CC-MAIN-2014-41 | refinedweb | 504 | 76.56 |
I noticed that my branded Standalone IDE has a different looking Welcome Screen that the one displayed when I am using MPS. How can I get my Standalone IDE Welcome Screen to look like the MPS Welcome Screen with my branding (logos, etc). I am using MMPS 2017.2.2
It looks, like you patch MPSPlatformExtensions.xml in your build script and remove/replace this row:
Which switches off flat layout of welcome screen.
Thanks for the response Victor! I have tried to remove that from the mps-workbench.jar!/META-INF/MPSPlatformExtensions.xml in my IDE Build Script by using the replace regex operations as follows:
jar mps-workbench.jar
folder META-INF
file $mps_home/lib/mps-workbench.jar!/META-INF/MPSPlatformExtensions.xml (644)
replace regex ".*com.intellij.openapi.wm.impl.welcomeScreen.FlatWelcomeFrameProvider.*" /<no flags> -> <empty>
replace regex ".*com.intellij.ide.TipOfTheDayManager.*" /<no flags> -> <empty>
import files from mpsStandalone::lib/mps-workbench.jar
exclude META-INF/MPSPlatformExtensions.xml
This removes the TipOfTheDayManager, but I noticed the welcomeScreen is still in the jar file! Is there some other way to remove it?
I even edited the mps-workbench.jar file and removed the welcomeScreen provider, it still showed the Flat Welcome Screen.
Am I doing this right? | https://mps-support.jetbrains.com/hc/en-us/community/posts/115000657330-Standalone-IDE-Welcome-Screen-different-than-MPS-Welcome-Screen | CC-MAIN-2021-49 | refinedweb | 207 | 62.04 |
Keyword arguments (“kwargs”) are handy. A former colleague (thanks!) once warned me about a kwargs gotcha that I didn’t know about. I thought “let’s add the warning to my Django book”, but it is a bit too specialistic and it detracts too much from the main flow of the text. So I’m taking it out and posting it here. (It is also quite hard to explain; I’m still not quite happy with this text. But I’m not going to do more tweaking.)
Watch out with the type of default parameters for keyword arguments!
Technically: immutable types are fine, mutable types not. In plain language it
means that
name='thatcher',
amount=5 and
template=None are all
fine.
But don’t use lists or dictionaries. In Python, a variable name is just a
pointer to a memory address. If you do something like
amount = amount + 4,
amount would start pointing at a new memory adress. Just open a Python
prompt and type in the following piece of code to test it out:
>>> def example(immutable=5, mutable={'amount': 0}): ... print immutable ... print mutable ... mutable['amount'] += 10 ... >>> example() 5 {'amount': 0} >>> example() 5 {'amount': 10} >>> example() 5 {'amount': 20}
We probably wanted to pass in a dictionary with some defaults already in place. However, this backfires because we’re changing values inside the very same dictionary that our keyword points to by default.
The problem is that every variable in Python is only a memory pointer to a value. With an immutable value, a new value means a fresh pointer. The contents of a mutable value, like a list, can be changed in-place without the pointer to the actual list changing; that is the whole point of them.
The solution is straightforward once you’ve seen it. For dictionaries and
lists where you want a default, pick
None as the default value and check
for that like this:
def example(immutable=5, mutable=None): if mutable is None: mutable = {'amount': 0} mutable['amount'] += 10
So there you): | https://reinout.vanrees.org/weblog/2012/04/18/default-parameters.html | CC-MAIN-2021-21 | refinedweb | 341 | 72.87 |
On Wed, Jan 26, 2005 at 01:39:01PM -0000, Simon Marlow wrote: > On 25 January 2005 19:45, Duncan Coutts wrote: > > > On Tue, 2005-01-25 at 19:12 +0000, Ben Rudiak-Gould wrote: > >> My concern here is that someone will actually use the library once it > >> ships, with the following consequences: > >> > >> 1. Programs using the library will have predictable > >> (exploitable) bugs in pathname handling. > >> > >> 2. It will never be possible to change the current weird > >> behavior, because it might break legacy code. The System.FilePath > >> library will > >> have to remain in GHC forever in its current form, enticing > >> programmers with its promise of easy pathname handling and then > >> cruelly breaking its contract. > >> > >> If no one uses it in production code then we can fix it at our > >> leisure, and having it out there with "experimental" status isn't > >> necessarily a bad thing in that case. It just feels like we're > >> playing a dangerous game. > > > > That's a sufficiently persuasive argument for me! > > > > Could we just punt this library for this release. After all we can add > > libraries in a later point release (eg 6.4.1) you just can't change > > existing APIs. > > We can't add libraries in a point release, because there's no way for > code to use conditional compilation to test the patchlevel version > number. > > This seems to be a common misconception, probably brought about by the > fact that the time between major releases of GHC is getting quite long. > Perhaps I should stop writing email and get some work done :) too bad we can't do things like #if exists(module System.Path) import System.Path #else ... #endif I still find it perplexing that there isn't a decent standard haskell preprocessor.... John -- John Meacham - ⑆repetae.net⑆john⑈ | http://www.haskell.org/pipermail/haskell-cafe/2005-January/008835.html | CC-MAIN-2014-35 | refinedweb | 296 | 62.58 |
Your browser does not seem to support JavaScript. As a result, your viewing experience will be diminished, and you have been placed in read-only mode.
Please download a browser that supports JavaScript, or enable it if it's disabled (i.e. NoScript).
Hi,
thanks for the kind words both from Maxon and the community. I am looking forward to my upcoming adventures with the SDK Team and Cinema community.
Cheers,
Ferdinand
as @Cairyn said the problem is unreachable code. I also just saw now that you did assign the same ID to all your buttons in your CreateLayout(). Ressource and dialog element IDs should be unique. I would generally recommend to define your dialogs using a resource, but here is an example on how to do it in code.
CreateLayout()
BUTTON_BASE_ID = 1000
BUTTON_NAMES = ["Button1", "Button2", "Button3", "Button4", "Button5"]
BUTTON_DATA = {BUTTON_BASE_ID + i: name for i, name in enumerate(BUTTON_NAMES)}
class MyDialog(gui.GeDialog):
def CreateLayout(self):
"""
"""
self.GroupBegin(id=1013, flags=c4d.BFH_SCALEFIT, cols=5, rows=4)
for element_id, element_name in BUTTON_DATA.items():
self.AddButton(element_id, c4d.BFV_MASK, initw=100,
name=element_name)
self.GroupEnd()
return True
def Command(self, id, msg):
"""
"""
if id == BUTTON_BASE_ID:
print "First button has been clicked"
elif id == BUTTON_BASE_ID + 1:
print "Second button has been clicked"
# ...
if id in BUTTON_DATA.keys(): # or just if id in BUTTON_DATA
self.Close()
return True
that your script is not working has not anything to do with pseudo decimals, but the fact that you are treating numbers as strings (which is generally a bad idea) in a not very careful manner. When you truncate the string representation of a number which is represented in scientific notation (with an exponent), then you also truncate that exponent and therefor change the value of the number.
pseudo decimals
To truncate a float you can either take the floor of my_float * 10 ** digits and then divide by 10 ** digits again or use the keyword round.
float
floor
my_float * 10 ** digits
10 ** digits
round
data = [0.03659665587738824,
0.00018878623163019122,
1.1076812650509394e-03,
1.3882258325566638e-06]
for n in data:
rounded = round(n, 4)
floored = int(n * 10000) / 10000
print(n, rounded, floored)
0.03659665587738824 0.0366 0.0365
0.00018878623163019122 0.0002 0.0001
0.0011076812650509394 0.0011 0.0011
1.3882258325566637e-06 0.0 0.0
[Finished in 0.1s]
Cheers
zipit
sorry for all the confusion. You have to pass actual instances of objects. The following code does what you want (and this time I actually tried it myself ;)).
import c4d
def main():
"""
"""
bc = doc.GetAllTextures(ar=doc.GetMaterials())
for cid, value in bc:
print cid, value
if __name__=='__main__':
main(): Forgot this, but should have put this first - Implementing a bevel tool is not trivial. Covering the default case is not that hard, but covering all the corner cases is, especially when you want to maintain secondary mesh attributes like texture coordinates. I would not go for reinventing the wheel here. Is there a reason why you do not want to use Cinema's edge bevel? Does it not work with SendModellingCommand?
SendModellingCommand
you use GetActiveDocument() in a NodeData environment. You cannot do this, since nodes are also executed when their document is not the active document (while rendering for example - documents get cloned for rendering).
GetActiveDocument()
NodeData
Hoi,
@blastframe said in Moving a Point Along an Arc by Distance from another Point:
In your code comment you mention spherical interpolation between the arc segment. Is that the same as the quadratic bezier interpolation from the example you provided (code below)? I think that's also called slerp?
Yeah, slerp is a synonym for spherical linear interpolation. While linear interpolation, a.k.a. lerp, gives you points on the line segment spanned by its two input vectors, a spherical linear interpolation gives you points on the arc segment spanned between the unit vectors of the two input vectors. Cinema does not have a slerp for vectors for some reason, only one for quaternions. The whole interpolation and interpolation naming situation in Cinema's classic API is a huge mess IMHO. So you would have to write your own slerp. And slerp is not directly related to splines, only in the sense that you will need it when you want to linearly place points on that spline, as you then have to do the arc-length parametrization of that spline for which you will need slerp.
slerp
lerp
How do I get the points to stop at the limits of the arc? In this case, it would seem to be limiting the angle to 0 and math.pi, but what if the start & end angles are different?
math.pi
Basically the same way you did it in your code. The computed angle is an angle in radians. So to limit it to let's say [0°, 180°] you would have to clamp it to [0 rad,π rad]. There is also the problem that angles work the other way around in relation to the circle functions sine and cosine and your example. They "start on the right side" of the unit circle and then go counter clockwise. Which is why my code flips things around. Getting back to "your setup" should just a matter of inverting the angle, i.e. -theta instead of theta, and then adding π, i.e. 180°.
[0°, 180°]
[0 rad,π rad]
sine
cosine
If I'm not to use c4d.Vector.GetDistance, how would you recommend I get the geodesic norm? I found modules online like geopy, but I'd rather do it with standard Python.
c4d.Vector.GetDistance
Its the same thing in green. You get the Euclidean norm, a.k.a the "length" of your vectors, implying the radius of the circle they are sitting on. Then you can compute the angle spanned between them with the dot product. Finally you put this into the relation shown above, solve for the arc length and you are done. And to be clear, Geodesic norm is just a fancy name for saying the distance between two points on a sphere, while the Euclidean norm is the distance in a plane.
you have to invoke AddUserArea and then attach an instance of your implemented type to it. Something like this:
AddUserArea
my_user_area = MyUserAreaType()
self.AddUserArea(1000,*other_arguments)
self.AttachUserArea(my_user_area, 1000)
I have attached an example which does some things you are trying to do (rows of things, highlighting stuff, etc.). The gadget is meant to display a list of boolean values and the code is over five years old. I had a rather funny idea of what good Python should look like then and my attempts of documentation were also rather questionable. I just wrapped the gadget into a quick example dialog you could run as a script. I did not maintain the code, so there might be newer and better ways to do things now.
Also a warning: GUI stuff is usually a lot of work and very little reward IMHO.
import c4d
import math
import random
from c4d import gui
# Pattern Gadget
IDC_SELECTLOOP_CELLSIZE = [32, 32]
IDC_SELECTLOOP_GADGET_MINW = 400
IDC_SELECTLOOP_GADGET_MINH = 32
class ExampleDialog(gui.GeDialog):
"""
"""
def CreateLayout(self):
"""
"""
self.Pattern = c4d.BaseContainer()
for i in range(10):
self.Pattern[i] = random.choice([True, False])
self.PatternSize = len(self.Pattern)
self.gadget = Patterngadget(host=self)
self.AddUserArea(1000, c4d.BFH_FIT, 400, 32)
self.AttachUserArea(self.gadget, 1000)
return True
class Patterngadget(gui.GeUserArea):
"""
A gui gadget to modify and display boolean patterns.
"""
def __init__(self, host):
"""
:param host: The hosting BaseToolData instance
"""
self.Host = host
self.BorderWidth = None
self.CellPerColumn = None
self.CellWidht = IDC_SELECTLOOP_CELLSIZE[0]
self.CellHeight = IDC_SELECTLOOP_CELLSIZE[1]
self.Columns = None
self.Height = None
self.Width = None
self.MinHeight = IDC_SELECTLOOP_GADGET_MINH
self.MinWidht = IDC_SELECTLOOP_GADGET_MINW
self.MouseX = None
self.MouseY = None
"""------------------------------------------------------------------------
Overridden methods
--------------------------------------------------------------------"""
def Init(self):
"""
Init the gadget.
:return : Bool
"""
self._get_colors()
return True
def GetMinSize(self):
"""
Resize the gadget
:return : int, int
"""
return int(self.MinWidht), int(self.MinHeight)
def Sized(self, w, h):
"""
Get the gadgets height and width
"""
self.Height, self.Width = int(h), int(w)
self._fit_gadget()
def Message(self, msg, result):
"""
Fetch and store mouse over events
:return : bool
"""
if msg.GetId() == c4d.BFM_GETCURSORINFO:
base = self.Local2Screen()
if base:
self.MouseX = msg.GetLong(c4d.BFM_DRAG_SCREENX) - base['x']
self.MouseY = msg.GetLong(c4d.BFM_DRAG_SCREENY) - base['y']
self.Redraw()
self.SetTimer(1000)
return gui.GeUserArea.Message(self, msg, result)
def InputEvent(self, msg):
"""
Fetch and store mouse clicks
:return : bool
"""
if not isinstance(msg, c4d.BaseContainer):
return True
if msg.GetLong(c4d.BFM_INPUT_DEVICE) == c4d.BFM_INPUT_MOUSE:
if msg.GetLong(c4d.BFM_INPUT_CHANNEL) == c4d.BFM_INPUT_MOUSELEFT:
base = self.Local2Global()
if base:
x = msg.GetLong(c4d.BFM_INPUT_X) - base['x']
y = msg.GetLong(c4d.BFM_INPUT_Y) - base['y']
pid = self._get_id(x, y)
if pid <= self.Host.PatternSize:
self.Host.Pattern[pid] = not self.Host.Pattern[pid]
self.Redraw()
return True
def Timer(self, msg):
"""
Timer loop to catch OnMouseExit
"""
base = self.Local2Global()
bc = c4d.BaseContainer()
res = gui.GetInputState(c4d.BFM_INPUT_MOUSE,
c4d.BFM_INPUT_MOUSELEFT, bc)
mx = bc.GetLong(c4d.BFM_INPUT_X) - base['x']
my = bc.GetLong(c4d.BFM_INPUT_Y) - base['y']
if res:
if not (mx >= 0 and mx <= self.Width and
my >= 0 and my <= self.Height):
self.SetTimer(0)
self.Redraw()
def DrawMsg(self, x1, y1, x2, y2, msg):
"""
Draws the gadget
"""
# double buffering
self.OffScreenOn(x1, y1, x2, y2)
# background & border
self.DrawSetPen(self.ColBackground)
self.DrawRectangle(x1, y1, x2, y2)
if self.BorderWidth:
self.DrawBorder(c4d.BORDER_THIN_IN, x1, y1,
self.BorderWidth + 2, y2 - 1)
# draw pattern
for pid, state in self.Host.Pattern:
x, y = self._get_rect(pid)
self._draw_cell(x, y, state, self._is_focus(x, y))
"""------------------------------------------------------------------------
Public methods
--------------------------------------------------------------------"""
def Update(self, cid=None):
"""
Update the gadget.
:param cid: A pattern id to toggle.
"""
if cid and cid < self.Host.PatternSize:
self.Host.Pattern[cid] = not self.Host.Pattern[cid]
self._fit_gadget()
self.Redraw()
"""------------------------------------------------------------------------
Private methods
--------------------------------------------------------------------"""
def _get_colors(self, force=False):
"""
Set the drawing colors.
:return : Bool
"""
self.ColScale = 1.0 / 255.0
if self.IsEnabled() or force:
self.ColBackground = self._get_color_vector(c4d.COLOR_BG)
self.ColCellActive = c4d.GetViewColor(
c4d.VIEWCOLOR_ACTIVEPOINT) * 0.9
self.ColCellFocus = self._get_color_vector(c4d.COLOR_BGFOCUS)
self.ColCellInactive = self._get_color_vector(c4d.COLOR_BGEDIT)
self.ColEdgeDark = self._get_color_vector(c4d.COLOR_EDGEDK)
self.ColEdgeLight = self._get_color_vector(c4d.COLOR_EDGELT)
else:
self.ColBackground = self._get_color_vector(c4d.COLOR_BG)
self.ColCellActive = self._get_color_vector(c4d.COLOR_BG)
self.ColCellFocus = self._get_color_vector(c4d.COLOR_BG)
self.ColCellInactive = self._get_color_vector(c4d.COLOR_BG)
self.ColEdgeDark = self._get_color_vector(c4d.COLOR_EDGEDK)
self.ColEdgeLight = self._get_color_vector(c4d.COLOR_EDGELT)
return True
def _get_cell_pen(self, state, _is_focus):
"""
Get the color for cell depending on its state.
:param state : The state
:param _is_focus : If the cell is hoovered.
:return : c4d.Vector()
"""
if state:
pen = self.ColCellActive
else:
pen = self.ColCellInactive
if self.IsEnabled() and _is_focus:
return (pen + c4d.Vector(2)) * 1/3
else:
return pen
def _draw_cell(self, x, y, state, _is_focus):
"""
Draws a gadget cell.
:param x: local x
:param y: local y
:param state: On/Off
:param _is_focus: MouseOver state
"""
# left and top bright border
self.DrawSetPen(self.ColEdgeLight)
self.DrawLine(x, y, x + self.CellWidht, y)
self.DrawLine(x, y, x, y + self.CellHeight)
# bottom and right dark border
self.DrawSetPen(self.ColEdgeDark)
self.DrawLine(x, y + self.CellHeight - 1, x +
self.CellWidht - 1, y + self.CellHeight - 1)
self.DrawLine(x + self.CellWidht - 1, y, x +
self.CellWidht - 1, y + self.CellHeight - 1)
# cell content
self.DrawSetPen(self._get_cell_pen(state, _is_focus))
self.DrawRectangle(x + 1, y + 1, x + self.CellWidht -
2, y + self.CellHeight - 2)
def _get_rect(self, pid, offset=1):
"""
Get the drawing rect for an array id.
:param pid : the pattern id
:param offset : the pixel border offset
:return : int, int
"""
pid = int(pid)
col = pid / self.CellPerColumn
head = pid % self.CellPerColumn
return self.CellWidht * head + offset, self.CellHeight * col + offset
def _get_id(self, x, y):
"""
Get the array id for a coord within the gadget.
:param x : local x
:param y : local y
:return : int
"""
col = (y - 1) / self.CellHeight
head = (x - 1) / self.CellWidht
return col * self.CellPerColumn + head
def _is_focus(self, x, y):
"""
Test if the cell coords are under the cursor.
:param x : local x
:param y : local y
:return : bool
"""
if (self.MouseX >= x and self.MouseX <= x + self.CellWidht and
self.MouseY >= y and self.MouseY <= y + self.CellHeight):
self.MouseX = c4d.NOTOK
self.MouseY = c4d.NOTOK
return True
else:
return False
def _fit_gadget(self):
"""
Fit the gadget size to the the array
"""
oldHeight = self.MinHeight
self.CellPerColumn = int((self.Width - 2) / self.CellWidht)
self.Columns = math.ceil(
self.Host.PatternSize / self.CellPerColumn) + 1
self.MinHeight = int(IDC_SELECTLOOP_GADGET_MINH * self.Columns) + 3
self.MinWidht = int(IDC_SELECTLOOP_GADGET_MINW)
self.BorderWidth = self.CellWidht * self.CellPerColumn
if oldHeight != self.MinHeight:
self.LayoutChanged()
def _get_color_vector(self, cid):
"""
Get a color vector from a color ID.
:param cid : The color ID
:return : c4d.Vector()
"""
dic = self.GetColorRGB(cid)
if dic:
return c4d.Vector(float(dic['r']) * self.ColScale,
float(dic['g']) * self.ColScale,
float(dic['b']) * self.ColScale)
else:
return c4d.Vector()
if __name__ == "__main__":
dlg = ExampleDialog()
dlg.Open(c4d.DLG_TYPE_ASYNC, defaultw=400, defaulth=400)
Hi @bentraje,
thank you for reaching out to us. As @Cairyn already said, there is little which prevents you from writing your own cloner. Depending on the complexity of the cloner, this can however be quite complex. Just interpolating between two positions or placing objects in a grid array is not too hard. But recreating all the complexity of a MoGraph cloner will be. It is also not necessary. Here are two more straight forward solutions:
file: cube_bend_python.c4d
code:
"""Example for modifying the cache of a node and returning it as the output
of a generator.
This specific example drives the bend strength of bend objects contained in
a Mograph cloner object. The example is designed for a Python generator
object with a specific set of user data values. Please use the provided c4d
file if possible.
Note:
This example makes use of the function `CacheIterator()` for cache
iteration which has been proposed on other threads for the task of walking
a cache, looking for specific nodes. One can pass in one or multiple type
symbols for the node types to be retrieved from the cache. I did not
unpack the topic of caches here any further.
We are aware that robust cache walking can be a complex subject and
already did discuss adding such functionality to the SDK toolset in the
future, but for now users have to do that on their own.
As discussed in:
plugincafe.maxon.net/topic/13275/
"""
import c4d
# The cookie cutter cache iterator template, can be treated as a black-box,
# as it has little to do with the threads subject.
def CacheIterator(op, types=None):
"""An iterator for the elements of a BaseObject cache.
Handles both "normal" and deformed caches and has the capability to
filter by node type.
Args:
op (c4d.BaseObject): The node to walk the cache for.
types (Union[list, tuple, int, None], optional): A collection of type
IDs from one of which a yielded node has to be derived from. Will
yield all node types if None. Defaults to None.
Yields:
c4d.BaseObject: A cache element of op.
Raises:
TypeError: On argument type violations.
"""
if not isinstance(op, c4d.BaseObject):
msg = "Expected a BaseObject or derived class, got: {0}"
raise TypeError(msg.format(op.__class__.__name__))
if isinstance(types, int):
types = (types, )
if not isinstance(types, (tuple, list, type(None))):
msg = "Expected a tuple, list or None, got: {0}"
raise TypeError(msg.format(types.__class__.__name__))
# Try to retrieve the deformed cache of op.
temp = op.GetDeformCache()
if temp is not None:
for obj in CacheIterator(temp, types):
yield obj
# Try to retrieve the cache of op.
temp = op.GetCache()
if temp is not None:
for obj in CacheIterator(temp, types):
yield obj
# If op is not a control object.
if not op.GetBit(c4d.BIT_CONTROLOBJECT):
# Yield op if it is derived from one of the passed type symbols.
if types is None or any([op.IsInstanceOf(t) for t in types]):
yield op
# Walk the hierarchy of the cache.
temp = op.GetDown()
while temp:
for obj in CacheIterator(temp, types):
yield obj
temp = temp.GetNext()
def main():
"""
"""
# The user data.
node = op[c4d.ID_USERDATA, 1]
angle = op[c4d.ID_USERDATA, 2]
fieldList = op[c4d.ID_USERDATA, 3]
# Lazy parameter validation ;)
if None in (node, angle, fieldList):
raise AttributeError("Non-existent or non-populated user data.")
# Get the cache of the node and clone it (so that we have ownership).
cache = node.GetDeformCache() or node.GetCache()
if cache is None:
return c4d.BaseObject(c4d.Onull)
clone = cache.GetClone(c4d.COPYFLAGS_NONE)
# Iterate over all bend objects in the cache ...
for bend in CacheIterator(clone, c4d.Obend):
# ..., sample the field list for the bend object position, ...
fieldInput = c4d.modules.mograph.FieldInput([bend.GetMg().off], 1)
fieldOutput = fieldList.SampleListSimple(op, fieldInput,
c4d.FIELDSAMPLE_FLAG_VALUE)
if (not isinstance(fieldOutput, c4d.modules.mograph.FieldOutput) or
fieldOutput.GetCount() < 1):
raise RuntimeError("Error sampling field input.")
# ... and set the bend strength with that field weight as a multiple
# of the angle defined in the user data.
bend[c4d.DEFORMOBJECT_STRENGTH] = angle * fieldOutput.GetValue(0)
# Return the clone's cache.
return clone
Hello @shetal,
thank you for reaching out to us. The reformulation of your question and the conformance with the forum guidelines on tagging is also much appreciated.
About your question: As stated in the forum guidelines, we cannot provide full solutions for questions, but provide answers for specific questions. Which is why I will not show here any example code, the first step would have to be made by you. I will instead roughly line out the purpose of and workflow around VideoPostData, which I assume is what you are looking for anyway.
VideoPostData
VideoPostData is derived from NodeData, the base class to implement a node for a classic API scene graph. Node means here 'something that lives inside a document and is an addressable entity', examples for such nodes are materials, shaders, objects, tags, ..., and as such 'video post' node. As mentioned in its class description, VideoPostData is a versatile plugin interface which can be used to intervene a rendering process in multiple ways. The most tangible place for VideoPostData in the app is the render settings where video post plugins can be added as effects for a rendering process as shown below with the built-in water mark video post node.
VideoPostData is an effect, meaning that you cannot use it to invoke a rendering process and on its own it also cannot forcibly add itself to a rendering and must be included manually with the RenderData, the render settings of a rendering. However, a user could make a render setting which includes such watermark effect part of his or her default render settings. One could also implement another plugin interface, SceneHookData, to automatically add such effect to every active document. We would not encourage that though, as this could be confusing or infuriating for users. Finally, such VideoPostData plugin would be visible by default like all NodeData plugins, i.e., it would appear as something in menus that the user can add and interact with. To prevent this if desired, one would have to register the plugin with the flag PLUGINFLAG_HIDE suppress it from popping up in the 'Effect ...' button menu. I cannot tell you with certainty if it is possible to hide programmatically added effect nodes from the users view in the effect list of a render settings. There are some flags which can be used to hide instances of nodes, but I would have to test myself if this also applies in this list, it is more likely that this will not be possible.
RenderData
SceneHookData
PLUGINFLAG_HIDE
To implement a VideoPostData plugin interface, one can override multiple methods and take different approaches, the most commonly used one is to override VideoPostData::Execute(Link) which will be called multiple times for each rendered frame. The method follows a flag/message logic which is commonly used in Cinema 4D's classic API, where one gets passed in a flag which signalizes in which context the method is being called. Here the context is at which state of the rendering this call is being made, and the chain is:
VideoPostData::Execute
VIDEOPOSTCALL::FRAMESEQUENCE
VIDEOPOSTCALL::FRAME
VIDEOPOSTCALL::SUBFRAME
VIDEOPOSTCALL::RENDER
VIDEOPOSTCALL::INNER
These flags are accompanied by information if the flags denotes the opening or closing of that 'step' in the rendering process. A developer often then restricts its plugin functionality to a certain flag. I.e., in your case you only want to execute some code when the closing VIDEOPOSTCALL::FRAME is being passed, i.e., after a single frame and all its sub-frames have been rendered. Execute() also passes in a pointer to a VideoPostStruct(Link) which carries information about the ongoing rendering. One of its fields is render, a pointer to a Render(Link). This data structure represents a rendering with multiple buffers and provides the method GetBuffer() which returns a pointer to VPBuffer buffer. In your case you would want to retrieve the RGBA buffer for the rendering by requesting the VPBUFFER_RGBA buffer (Link) with GetBuffer().
VideoPostStruct
render
Render
GetBuffer()
VPBuffer
VPBUFFER_RGBA
This buffer is then finally the pixel buffer, the bitmap data you want to modify. The buffer is being read and written in a line wise fashion with VPBuffer::GetLine() and ::SetLine(). Here you would have to superimpose your watermark information onto the frame. I would do this in a shader like fashion, i.e., write a function which I can query for a texture coordinate for every pixel/fragment in every line and it will then return an RBGA value which I could then combine with the RGBA information which is in the buffer at that coordinate. The details on that depend on what you want to do, e.g.,
VPBuffer::GetLine()
::SetLine()
and the answers to that are mostly algorithmic and not directly connected to our API which limits the amount of support we can provide for them. If this all sounds very confusing to you, it might be helpful to look at our video post examples I did post in the previous thread, e.g., vpreconstructimage.cpp, as this will probably make things less daunting.
If you decide that you do not want to take this route for technical or complexity reasons, you could write a SceneHookData plugin which listens via NodeData::Message for MSG_MULTI_RENDERNOTIFICATION(Link), a message family which is being sent in the context of a rendering. There you would have to evaluate the start field in the RenderNotificationData(Link) accompanying the message, to determine if the call is for the start or end of a rendering. Then you could grab the rendering output file(s) on disk with the help of the render settings from disk and 'manually' superimpose your watermark information. This will come with the drawback that you might have to deal with compressed video files like mpeg or Avi and all the image formats. Some complexity in that can be hidden away with our BaseBitmap type I did mention in my last posting, but not all of it. There is also the fact that you might run into problems when this plugin runs on a render server, where you cannot easily obtain write or even read access to files of the render output.
NodeData::Message
MSG_MULTI_RENDERNOTIFICATION
start
RenderNotificationData
BaseBitmap
I hope this give you some desired guidance,
Ferdinand
Hello @holgerbiebrach,
please excuse the wait. So, this is possible in Python and quite easy to do. This new behavior is just the old dialog folding which has been reworked a little bit. I have provided a simple example at the end of the posting. There is one problem regarding title bars which is sort of an obstacle for plugin developers which want to distribute their plugins, it is explained in the example below.
I hope this helps and cheers,
Ferdinand
The result:
The code:
"""Example for a command plugin with a foldable dialog as provided with the
Asset Browser or Coordinate Manger in Cinema 4D R25.
The core of this is just the old GeDialog folding mechanic which has been
changed slightly with R25 as it will now also hide the title bar of a folded
dialog, i.e., the dialog will be hidden completely.
The structure shown here mimics relatively closely what the Coordinate Manger
does. There is however one caveat: Even our internal implementations do not
hide the title bar of a dialog when unfolded. Instead, this is done via
layouts, i.e., by clicking onto the ≡ icon of the dialog and unchecking the
"Show Window Title" option and then saving such layout. If you would want
to provide a plugin which exactly mimics one of the folding managers, you
would have to either ask your users to take these steps or provide a layout.
Which is not ideal, but I currently do not see a sane way to hide the title
bar of a dialog. What you could do, is open the dialog as an async popup which
would hide the title bar. But that would also remove the ability to dock the
dialog. You could then invoke `GeDialog.AddGadegt(c4d.DIALOG_PIN, SOME_ID)`to
manually add a pin back to your dialog, so that you can dock it. But that is
not how it is done internally by us, as we simply rely on layouts for that.
"""
import c4d
class ExampleDialog (c4d.gui.GeDialog):
"""Example dialog that does nothing.
The dialog itself has nothing to do with the implementation of the
folding.
"""
ID_GADGETS_START = 1000
ID_GADGET_GROUP = 0
ID_GADGET_LABEL = 1
ID_GADGET_TEXT = 2
GADGET_STRIDE = 10
GADEGT_COUNT = 5
def CreateLayout(self) -> bool:
"""Creates a some dummy gadgets.
"""
self.SetTitle("ExampleDialog")
flags = c4d.BFH_SCALEFIT
for i in range(self.GADEGT_COUNT):
gid = self.ID_GADGETS_START + i * self.GADGET_STRIDE
name = f"Item {i}"
self.GroupBegin(gid + self.ID_GADGET_GROUP, flags, cols=2)
self.GroupBorderSpace(5, 5, 5, 5)
self.GroupSpace(2, 2)
self.AddStaticText(gid + self.ID_GADGET_LABEL, flags, name=name)
self.AddEditText(gid + self.ID_GADGET_TEXT, flags)
self.GroupEnd()
return True
class FoldingManagerCommand (c4d.plugins.CommandData):
"""Provides the implementation for a command with a fordable dialog.
"""
ID_PLUGIN = 1058525
REF_DIALOG = None
@property
def Dialog(self) -> ExampleDialog:
"""Returns a class bound ExampleDialog instance.
"""
if FoldingManagerCommand.REF_DIALOG is None:
FoldingManagerCommand.REF_DIALOG = ExampleDialog()
return FoldingManagerCommand.REF_DIALOG
def Execute(self, doc: c4d.documents.BaseDocument) -> bool:
"""Folds or unfolds the dialog.
The core of the folding logic as employed by the Asset Browser
or the Coordinate manager in R25.
"""
# Get the class bound dialog reference.
dlg = self.Dialog
# Fold the dialog, i.e., hide it if it is open and unfolded. In C++
# you would also want to test for the dialog being visible with
# GeDialog::IsVisible, but we cannot do that in Python.
if dlg.IsOpen() and not dlg.GetFolding():
dlg.SetFolding(True)
# Open or unfold the dialog. The trick here is that calling
# GeDialog::Open will also unfold the dialog.
else:
dlg.Open(c4d.DLG_TYPE_ASYNC, FoldingManagerCommand.ID_PLUGIN)
return True
def RestoreLayout(self, secret: any) -> bool:
"""Restores the dialog on layout changes.
"""
return self.Dialog.Restore(FoldingManagerCommand.ID_PLUGIN, secret)
def GetState(self, doc: c4d.documents.BaseDocument) -> int:
"""Sets the command icon state of the plugin.
This is not required, but makes it a bit nicer, as it will indicate
in the command icon when the dialog is folded and when not.
"""
dlg = self.Dialog
result = c4d.CMD_ENABLED
if dlg.IsOpen() and not dlg.GetFolding():
result |= c4d.CMD_VALUE
return result
def RegisterFoldingManagerCommand() -> bool:
"""Registers the example.
"""
return c4d.plugins.RegisterCommandPlugin(
id=FoldingManagerCommand.ID_PLUGIN,
str="FoldingManagerCommand",
info=c4d.PLUGINFLAG_SMALLNODE,
icon=None,
help="FoldingManagerCommand",
dat=FoldingManagerCommand())
if __name__ == '__main__':
if not RegisterFoldingManagerCommand():
raise RuntimeError(
f"Failed to register {FoldingManagerCommand} plugin.")
I cannot reproduce this neither. The interesting question would be what does print random.seed for you and is this reproduceable on your end?
print random.seed
My suspicion would be that someone or something treated random.seed() like a property instead of like a function, which then led to - with all the "Python functions are first class objects" thing - random.seed being an integer. Something like this:
random.seed()
random.seed
>>>
Hi @C4DS and @Motion4D,
thanks you two, that is very kind of you. A happy new year to you too and everyone in the forum.
preference data is often even for native c4d features implemented as a PreferenceData plugin. You have to access that plugin then. To get there you can drag and drop description elements into the python command line, delete the __getitem__() part (the stuff in brackets), and get the __repr__ of the object. With that you can figure out the plugin ID of the corresponding BasePlugin and then access your values.
PreferenceData
__getitem__()
__repr__
BasePlugin
For your case as a script example:
import c4d
from c4d import plugins
# Welcome to the world of Python
def main():
# Search for a plugin ID with a known str __repr__ of a BasePlugin. We got from the console:
# Drag and drop: Plugins[c4d.PREF_PLUGINS_PATHS]
# >>> Plugins
# <c4d.BaseList2D object called 'Plugins/Plugins' with ID 35 at 0x0000028FE8157A50>
pid, op = None, plugins.GetFirstPlugin()
while op:
if 'Plugins/Plugins' in str(op):
pid = op.GetID()
break
op = op.GetNext()
print "pid:", pid
# Use that ID
op = plugins.FindPlugin(pid)
if op:
print op[c4d.PREF_PLUGINS_PATHS] # we know the enum from the console
# Execute main()
if __name__=='__main__':
main()
You can use then that Plugin ID in cpp to do the last part (Use that ID) there.
Use that ID()
Hello,
here are some answers:
CTRacks
BaseList2D.GetUserDataContainer()
DescID
BaseContainer
CTrack
BaseList2D
Here is a script which does what you want. I didn't went overboard with your name matching rules, but the rest should be there.
""" I broke things into two parts:
1. get_matches() deals with building a data structure of matching DescID
elements.
2. add_ctracks() then does the building of CTracks.
You could probably also streamline some stuff here and there, but I tried
to be verbose so that things are clear. The script also only deals with
the currently selected object.
"""
import c4d
def get_matches():
""" Returns a list of tuples of the configuration (source_descid, targets),
where source_descid is a DescID for which there is a CTrack in op, and
targets is a list of DescIDs that match source_descid in type and name,
but there is no CTrack for them in op.
"""
res, user_data_container = [], op.GetUserDataContainer()
""" Step through the user data container of op and find elements (sources)
for which there is a CTrack in op."""
for source_descid, source_bc in user_data_container:
if op.FindCTrack(source_descid) is None:
continue
target_descid_list = []
""" Step through the user data container again and find elements
(targets) for which there is NO CTrack in op and which match the
current source in type and name."""
for target_descid, target_bc in user_data_container:
no_track = op.FindCTrack(target_descid) is None
if not no_track:
continue
match_name = (source_bc[c4d.DESC_NAME][:-2] ==
target_bc[c4d.DESC_NAME][:-2])
match_type = type(op[source_descid]) == type(op[target_descid])
is_new = sum(target_descid in data for _, data in res) == 0
if no_track and match_type and match_name and is_new:
target_descid_list.append(target_descid)
res.append((source_descid, target_descid_list))
return res
def add_ctracks(data):
""" We copy the CTrack for each source DescID the number of target DescID
which are attached to that source times back into op and set the CTrack
DescID each time to the target DescID.
"""
for source_did, target_descid_list in data:
source_ctrack = op.FindCTrack(source_did)
for target_did in target_descid_list:
new_ctrack = source_ctrack.GetClone()
new_ctrack.SetDescriptionID(op, target_did)
op.InsertTrackSorted(new_ctrack)
def main():
if op is not None:
data = get_matches()
add_ctracks(data)
op.Message(c4d.MSG_UPDATE)
c4d.EventAdd()
# Execute main()
if __name__ == '__main__':
main()
I am not familiar with any C++ screen-grab libraries, so I cannot say much.
PIL
pillow
you have to unindent op = op.GetNext() one tab or the while loop condition will always be True unless your whole document consists of spline objects.
op = op.GetNext()
while loop
True
a
b,c, d
b: a, c: a, d: a
class Node(object):
"""A very simple node type for a tree/trie graph.
"""
def __init__(self, **kwargs):
"""The constructor for ``Node``.
Args:
**kwargs: Any non-graph related attributes of the node.
"""
# You might want to encapsulate your attributes in properties, so
# that you can validate / process them, I took the lazy route.
if "name" not in kwargs:
kwargs["name"] = "Node"
self.__dict__.update(kwargs)
self.parent = None
self.children = []
self.prev = None
self.next = None
self.down = None
def __repr__(self):
"""The representation is: class, name, memory location.
"""
msg = "<{} named {} at {}>"
hid = "0x{:0>16X}".format(id(self))
return msg.format(self.__class__.__name__, self.name, hid)
def add(self, nodes):
"""Adds one or multiple nodes to the instance as children.
Args:
nodes (list[Node] or Node): The nodes to add.
Raises:
TypeError: When nodes contains non-Node elements.
"""
nodes = [nodes] if not isinstance(nodes, list) else nodes
# last child of the instance, needed for linked list logic
prev = self.children[-1] if self.children else None
for node in nodes:
if not isinstance(node, Node):
raise TypeError(node)
node.parent = self
node.prev = prev
if prev is not None:
prev.next = node
else:
self.down = node
self.children.append(node)
prev = node
def pretty_print(self, indent=0):
"""Pretty print the instance and its descendants.
Args:
indent (int, optional): Private.
"""
tab="\t" * indent
a = self.prev.name if self.prev else None
b = self.next.name if self.next else None
c = self.down.name if self.down else None
msg = "{tab}{node} (prev: {prev}, next: {next}, down: {down})"
print msg.format(tab=tab, node=self, prev=a, next=b, down=c)
for child in self.children:
child.pretty_print(indent+1)
def build_example_tree():
"""
"""
root = Node(name="root")
node_0 = Node(name="node_0")
node_00 = Node(name="node_00")
node_01 = Node(name="node_01")
node_02 = Node(name="node_02")
node_1 = Node(name="node_1")
node_10 = Node(name="node_10")
root.add(nodes=[node_0, node_1])
node_0.add(nodes=[node_00, node_01, node_02])
node_1.add(nodes=node_10)
return root
root = build_example_tree()
root.pretty_print()).
__file__
os.path.join(os.path.split(__file__)[0], "splinedata")
You can also get the 'important' c4d paths via c4d.storage.GeGetC4DPath().
c4d.storage.GeGetC4DPath() | https://plugincafe.maxon.net/user/ferdinand/best | CC-MAIN-2022-27 | refinedweb | 5,658 | 51.34 |
Using the Request-Acknowledge-Push Pattern to Display Progress of Long Running Tasks
- Posted: Jan 17, 2013 at 11:52 AM
- 2,025 Views
- 10 Comments
Something went wrong getting user information from Channel 9
Something went wrong getting user information from MSDN
Something went wrong getting the Visual Studio Achievements
Right click “Save as…”
Many web sites need to deal with long-running tasks. However long-running tasks don't
play very well with the HTTP request-response paradigm. In this episode we'll
go through a very simple pattern: Request-Acknowledge-Push that enables a
simple, efficient, and scalable way of dealing with long running tasks.
Source code of this episode can be found at:
You'll need to update both Web Role configuration and Worker Role configuraiton to use your own Service Bus namespace.
And the blog article mentioned in the video haishi bai. thanks. most welcome these type of subjects in presentations on channel 9. are you also willing to post the whole solution for download? your referred link to your blog shows detailed code snippets in the mentioned article but being able to play as developer with these implementations in vs2012 would be better in understanding the applied pattern. thx.
@peternl: Thank you for your kind comment. I've added source code link to the post. We'll have more episodes coming very soon, keep tuned
.
Most welcome. Thanks. Only the added link appears not to work now for download:
<Error>
<Code>ResourceNotFound</Code>
The specified resource does not exist. RequestId:1dae8449-d41f-4cc0-92d3-d7f5366c815f Time:2013-01-19T12:49:28.1287046Z
</Error>
I agree : the link is not working (yet)
Source code link is fixed.
Promises like: "We'll have more episodes coming very soon"
are easy to make, but seem hard to deliver..
Your "patients" are asked for a lot of patience !
LOL. Sorry for the typo. Fixed now | http://channel9.msdn.com/Series/Cloud-Patterns/Episode-1-Long-running-tasks-Request-Acknowledge-Push-pattern?format=html5 | CC-MAIN-2013-20 | refinedweb | 317 | 70.02 |
In the previous tutorial, we have discussed about Multithreading in Java. In this tutorial, we will discuss in detail Thread in Java, how to create a java thread and thread lifecycle.
Java Threads
A thread in Java is a lightweight process or the smallest unit of a process. Each thread performs a different task and hence a single process can have multiple threads to perform multiple tasks. Threads use shared resources and hence requires less memory and less resource usage. Threads are an important concept in multithreading. The main method itself is a thread that runs the Java program.
Java thread constructors
Below are the common constructors of the Java Thread class.
Java thread methods
Below are the methods of the Thread class
Create a Java thread
We can create a new Java thread in 2 different ways as described below:
Thread class
We can create a new Thread object by extending the Thread class. To initiate the thread execution, we can use the start() method. Once it executes the start() methods, it automatically calls the run() method that we define while extending the Thread class. We generally create a thread using the Thread class when we want to implement only the Thread functionality
Example
The below example shows how to create a Java thread using the Thread class. Here we create 2 threads and invoke them using the start() method. When the thread executes the start() method, it automatically calls the run() method.
public class ThreadDemo extends Thread { public void run() { System.out.println("Thread " + Thread.currentThread().getId() + " running"); } public static void main(String[] args) { ThreadDemo t = new ThreadDemo(); ThreadDemo t1 = new ThreadDemo(); t.start(); t1.start(); } }
Thread 12 running Thread 13 running
Runnable interface
Another method to create a new thread is by implementing the Runnable interface. This interface has only 1 public run() method that we need to implement in the class that creates a thread. The run() method is executed automatically when we invoke the start() method using the thread object. We normally create a new thread by using the Runnable interface when we want to implement more functionalities other than the thread.
Example
This example shows how to create a Java thread by implementing the Runnable interface where we need to override the run() method.
public class ThreadRunnableDemo implements Runnable{ public static void main(String[] args) { ThreadRunnableDemo tr = new ThreadRunnableDemo(); Thread t = new Thread(tr); t.start(); } @Override public void run() { System.out.println("Thread " + Thread.currentThread().getId() + " running"); } }
Thread 12 running
Advantages of Java thread
- Threads share resource and hence results in less memory usage
- It is a light-weight process
- Context-switching is less expensive
- Inter-thread communication is easier
- Performs asynchronous processing
Thread lifecycle
A Java thread traverses through various stages where at each stage it performs different functionalities. The lifecycle of a thread starts when the thread calls the start() method.
-
Naming a thread
We can provide a name to a thread in Java which helps us to distinguish between different threads that are executing. It is possible to pass the name as a string while creating a new thread either using the Thread class or a Runnable interface. To retrieve the thread name, we can use the method
getName() that belongs to the Thread class.
public class ThreadDemo extends Thread { public static void main(String[] args) { Thread t1 = new Thread("Thread 1") { public void run() { System.out.println("Thread " + Thread.currentThread().getId() + " running"); } }; Thread t2 = new Thread("Thread 2") { public void run() { System.out.println("Thread " + Thread.currentThread().getId() + " running"); } }; t1.start(); System.out.println("Thread name: " + t1.getName()); t2.start(); System.out.println("Thread name: " + t2.getName()); } }
Thread name: Thread 1 Thread 12 running Thread name: Thread 2 Thread 13 running
We can also pass the thread name while creating a thread using the Runnable interface as described below.
public class ThreadRunnableDemo implements Runnable{ public static void main(String[] args) { ThreadRunnableDemo tr = new ThreadRunnableDemo(); Thread t = new Thread(tr, "Thread 1"); t.start(); } @Override public void run() { System.out.println("Thread " + Thread.currentThread().getId() + " running"); System.out.println("Thread Name: " + Thread.currentThread().getName()); } }
Thread 12 running Thread Name: Thread 1
Pause a thread
The Thread class has an inbuilt method
sleep() that makes a thread pause the execution or sleep for a certain amount of specified time. The sleep() method accepts milliseconds as a parameter that makes the thread sleep for the mentioned parameter value. For example, if we mention 10000 milliseconds, it waits for 10 seconds before continuing the thread execution.
import java.util.Date; public class ThreadSleepDemo extends Thread { public void run() { System.out.println("Start time: " + java.util.Calendar.getInstance().getTime()); try { Thread.sleep(10000); } catch (InterruptedException e) { e.printStackTrace(); } System.out.println("Start time: " + java.util.Calendar.getInstance().getTime()); } public static void main(String[] args) { ThreadSleepDemo t = new ThreadSleepDemo(); t.start(); } }
Start time: Sat Feb 06 17:37:34 IST 2021 Start time: Sat Feb 06 17:37:44 IST 2021
Stop a thread execution
The built-in
stop() method of the Thread class is no more available and is deprecated. This is because it does not know in which thread state we are stoping the thread. Hence it can cause the application to stop or fail unexpectedly if other threads were holding the same resource where the current thread was stopped.
We can implement custom
stop() methods to create the actual behavior of stopping a thread. In the below example, we can see how to stop a thread execution using our own method.
We create a custom class ThreadStopDemo that implements the Runnable interface and then create a thread using this Runnable interface instance. We define a custom
stop() method that assigns the boolean stop variable with value as true. Hence whenever we call this stop method, it assigns this boolean value as true. We define another custom method
running() that assigns the same stop variable as false to continue the execution process. Now, in the
run() method, we call the
running() method that executes as long as the stop variable returns false. After executing for a while, we call the
stop() method from the main method that assigns the stop variable as true and hence stops the thread execution since the
running() method now returns false.
public class ThreadStopDemo implements Runnable{ private boolean stop = false; public synchronized void stop() { this.stop = true; System.out.println("Thread stopped"); } private synchronized boolean running() { return this.stop == false; } public static void main(String[] args) { ThreadStopDemo ts = new ThreadStopDemo(); Thread t = new Thread(ts); t.start(); try { Thread.sleep(10000); } catch(InterruptedException e) { e.printStackTrace(); } ts.stop(); } @Override public void run() { while(running()) { System.out.println("Thread running"); try { Thread.sleep(5000); } catch (InterruptedException e) { e.printStackTrace(); } } } }
Thread running Thread running Thread running Thread stopped
Thread.currentThread()
The
currentThread() method of the Thread class returns the reference of the current thread in execution. We can access several properties of this current thread like getId(), getName(), getPriority(), etc
System.out.println("Thread " + Thread.currentThread().getId() + " running");
Thread methods examples
The below example shows how to set and retrieve various thread properties.
public class ThreadExample extends Thread{ public void run() { System.out.println("Thread running"); System.out.println("Thread state: " + Thread.currentThread().getState()); System.out.println("Is thread alive: " + Thread.currentThread().isAlive()); } public static void main(String[] args) { ThreadExample te = new ThreadExample(); Thread t = new Thread(te, "Thread1"); t.start(); System.out.println("Before setName - Thread name: " + t.getName()); System.out.println("Before set priority - Thread priority: " + t.getPriority()); t.setName("Thread demo"); t.setPriority(2); System.out.println("Thread id: " + t.getId()); System.out.println("After setName - Thread name: " + t.getName()); System.out.println("After set priority - Thread priority: " + t.getPriority()); System.out.println("Thread state: " + t.getState()); System.out.println("Is thread alive: " + t.isAlive()); System.out.println("Is it Daemon thread: " + t.isDaemon()); System.out.println("Is thread interrupted: " + t.isInterrupted()); } }
Before setName - Thread name: Thread1 Thread running Before set priority - Thread priority: 5 Thread id: 13 After setName - Thread name: Thread demo Thread state: RUNNABLE After set priority - Thread priority: 2 Is thread alive: true Thread state: RUNNABLE Is thread alive: false Is it Daemon thread: false Is thread interrupted: false
join() method
The
join() method of the Thread class waits for the thread to terminate. The main use of the
join() method is for inter-thread communication. The below example shows the usage of the
join() method.
public class ThreadJoinDemo extends Thread { public void run() { System.out.println("Thread running"); System.out.println("Is thread alive: " + Thread.currentThread().isAlive()); } public static void main(String[] args) throws InterruptedException { Thread t = new ThreadJoinDemo(); System.out.println("Thread created"); t.start(); System.out.println("Joining thread"); t.join(); System.out.println("Is thread alive: " + t.isAlive()); } }
Thread created Joining thread Thread running Is thread alive: true Is thread alive: false
Different types of thread
A thread can be of 2 types:
- User thread: The thread that we create is called a user thread. Each user thread has a specific task to do. The JVM shuts down only after it completes the execution of all the user threads. In other words, it waits for all the user threads to complete before it shuts down.
- Daemon thread: Daemon threads run in the background and are low priority threads. The JVM does not wait for the daemon threads to complete. It automatically shuts down when the user thread execution is complete.
Process vs Thread
Conclusion
This comes to the end of this tutorial where we have learned about threads, how to create a thread, its method, and thread lifecycle. | https://www.tutorialcup.com/java/thread-in-java.htm | CC-MAIN-2021-49 | refinedweb | 1,599 | 57.67 |
When I run this
fabfile.py
from fabric.api import env, run, local, cd
def setenv(foo):
env.hosts = ['myhost']
def mycmd(foo):
setenv(foo)
print(env.hosts)
run('ls')
fab mycmd:bar
['myhost']
No hosts found. Please specify (single) host string for connection:
env.hosts
mycmd
run
hosts
@Chris, the reason you're seeing this behavior is because the host list is constructed before the task function is called. So, even though you're changing
env.hosts inside the function, it is too late for it to have any effect.
Whereas the command
fab setenv:foo mycmd:bar, would have resulted in something you would have expected:
$ fab setenv:foo mycmd:bar [myhost] Executing task 'mycmd' ['myhost'] [myhost] run: ls
This is the same as the accepted answer, but because of the way
setenv is defined, an argument is needed.
Another example:
from fabric.api import env, run, local, cd env.hosts = ['other_host'] def setenv(foo): env.hosts = ['myhost'] def mycmd(foo): setenv(foo) print('env.hosts inside mycmd: %s' % env.hosts) run('ls')
The output of this is:
$ fab mycmd:bar [other_host] Executing task 'mycmd' env.hosts inside mycmd: ['myhost'] [other_host] run: ls Fatal error: Name lookup failed for other_host Underlying exception: (8, 'nodename nor servname provided, or not known') Aborting.
As you can see, the host-list is already set to
['other_host', ] when fabric starts to execute
mycmd. | https://codedump.io/share/6Jw6NxTblTru/1/how-can-i-properly-set-the-envhosts-in-a-function-in-my-python-fabric-fabfilepy | CC-MAIN-2017-09 | refinedweb | 231 | 60.21 |
Hi
> Congratulations to the release! I have played with it a
> little with the intention of using it for RAD prototyping
> of a site I will be working on and it looks promising.
>
> I wonder about one thing though. My need would be to call
> a certain servlet with an additional path that would be
> used as a parameter, I have done this with java servlets,
> and it can be a nice way of hiding the application structure
> from the users.
>
> For example
> /cgi-bin/WebKit.cgi/Bands/Metallica/LovesNapster
>
> is sent to the Bands.py servlet with the rest as a parameter.
> Using mod_rewrite you could even hide the cgi-bin and WebKit.cgi
> parts. I didn't get this to work, did I miss something or shouldn't
> it be possible in WebKit.
>
> Marcus Ahlfors
> mahlfors@...
>
>
>
>
> _______________________________________________
> Webware-discuss mailing list
> Webware-discuss@...
>
>
---------------------------------------------------
WorldPilot - Get Synced -
The Open Source Personal Information Manager Server
At 02:07 PM 6/7/00 +0000, Jay Love wrote:
>Hi
Sounds fine to me. For increased efficiency, you could do it the left to
right instead. This might tie into our Context/Application discussions, so
that an entry in our "directory mapping" dictionary would specify that
everything that started with /Bands went to a particular servlet. I think
this is how Java does it in fact.
-Chuck
Ok, I have investigated a little. In the Java Servlet
Specification page 29, a request is made up of:
requestURI = contextPath+servletPath+pathInfo
Where pathInfo is the extra path after the servlet
name. There is some examples in the specs. I don't
know but I suspect that java servlets all must
reside in one directory. That would of course
simplify the parsing.
If played around a little along Jay's advices, this
is a works-for-me solution. But nothing that should
be used in a final version. I didn´t put that much
effort in it since I don't know all details on what
the function is supposed to do.
def serverSidePathForRequest(self, request, urlPath):
''' Returns what it says. This is a 'private' service method for use by HTTPRequest. '''
+ # Create context path
+ tail = urlPath
+ pathInfo = ""
+ try:
+ prepath = self._Contexts['contextName']
+ except KeyError:
+ prepath = self._Contexts['default']
+ if not os.path.isabs(prepath):
+ prepath = os.path.join(self.serverDir(), prepath)
+
+ # Look for servlets from right to left
+ while tail != '':
+ ssPath = os.path.join(prepath, tail + ".py")
+ if os.path.exists(ssPath):
+ request._fields['pathInfo'] = pathInfo
+ return ssPath
+ tail,head = os.path.split(tail)
+ pathInfo = os.path.join(head,pathInfo)
ssPath = self._serverSidePathCacheByPath.get(urlPath, None)
if ssPath is None: | http://sourceforge.net/p/webware/mailman/webware-discuss/thread/Pine.SOL.3.91.1000608135323.5933B-100000@aton.abo.fi/ | CC-MAIN-2014-23 | refinedweb | 435 | 68.97 |
Good morning, I have a rather large project that I've been working on without any problems on my mac desktop, but I'm having a problem preventing me from building anything when I attempt to work on the project from a windows machine.
My shared project contains code like
` if(Device.RuntimePlatform == Device.iOS)
{
// Create base Alert Controller UIAlertController alertController = UIAlertController.Create("", null, UIAlertControllerStyle.Alert);
...`
Which needs a reference to Xamarin.iOS, but when attempting to add that reference under a clean install, I get an error "'CSHarpAddImportCodeFixProvider' has encountered an error and has been disabled"
System.AggregateException : One or more errors occurred. ---> Unable to find a type of reference that is appropriate for this file: "C:\Program Files (x86)\Microsoft Visual Studio\2017\Enterprise\Common7\IDE\ReferenceAssemblies\Microsoft\Framework\Xamarin.iOS\v1.0\Xamarin.iOS.dll".
Anyone come across this before?
Answers
Did you setup a connection to your Mac in VisualStudio solution?
You should have to setup connection to Mac with Xamarin Studio installed and Xamarin Mac Agent running. Visual Studio running on Windows can't work with iOS projects without that.
Yes, this is while connected to my osx desktop.
I found a workaround by replacing those statements with #if IOS #endif #if__ANDROID__ #endif, and then removing the IOS macro from the args, which causes the statement to act as a comment, and allows me to work on and build the android app on my PC, while ignoring the iOS specific code.
Not sure if it's exactly a smart method, but no problems so far. | https://forums.xamarin.com/discussion/128764/error-when-working-on-project-from-windows-10-laptop | CC-MAIN-2019-22 | refinedweb | 260 | 55.74 |
In this post, we are going to explore how we can deploy a simple Spring Boot application to AWS Elastic Beanstalk. We will explain how to setup an AWS account and provide a step-by-step guide how to deploy to AWS.
1. Introduction
AWS provides numerous services and in the beginning it is difficult to find out where to get started. An easy way to start experimenting with AWS, is to make use of AWS Elastic Beanstalk. Elastic Beanstalk can be seen as an abstraction layer above core AWS services (like Amazon EC2, Amazon Elastic Container Service (Amazon ECS), Auto Scaling, and Elastic Load Balancing). It provisions and operates the infrastructure and manages the application stack for you, in order for you to focus on writing code.
In the next sections, we will show how to setup an AWS account, create a simple Spring Boot application and provide the steps in order to deploy a Spring Boot application to AWS Elastic Beanstalk.
2. Create a Free Tier AWS Account
Before we can deploy or use any of the AWS services, we will need to create an AWS account. Fortunately, we can create a Free Tier account which will allow us to experiment with several AWS services. We click the Create a Free Account button and fill in our email address, a password and an AWS account name.
Next, we fill in our contact information and choose for the Personal account type.
After clicking the Create Account and Continue button, we fill in our credit card details for the identity check and for charging when we use paid services. Do not be afraid, you will not get charged for anything and we will show later on how to set up notifications when a certain budget has been exceeded. This will give you some extra guarantee in order to limit any costs.
We need to verify our phone number and choose to do so by means of a text message.
After clicking the Send SMS button, a verification code is received which needs to be entered.
After clicking the Verify Code button, our identify has been verified successfully.
In the next section, we need to choose a support plan. We choose the Basic Plan because we just want to experiment with AWS services.
In less than a minute, we receive an email indicating our account being activated. After this point, we can sign in as Root user with the email address and password we have used for creating the account.
After logging in, the Management console is shown.
3. Create IAM User
Using the Root user for your daily access and use of AWS services is not a very good idea. It is therefore advised to follow some best practices. Most of these items are shown when you navigate to the IAM Management Console Dashboard. The Security Status shows which actions need to be performed.
Before we start resolving all of the security issues, we first are going to enable billing information for IAM users. This way, it will be possible to grant IAM users access to the billing information. If you do not do so, you will always need to use the Root user for this and this is what we are trying to avoid.
3.1 Enable Billing Information
Go to your account and choose My Account. Scroll down to the section IAM User and Role Access to Billing Information, click Edit and Activate IAM Access.
3.2 Activate MFA on Your Root Account
Back to the Security Status. First thing to resolve, is to activate Multi Factor Authentication on your Root account. We will do so by means of an Authenticator App on our mobile device.
We choose Google Authenticator as Authenticator App for our Android device, but a complete list can be found here.
After scanning a barcode on the Amazon website and entering twice a MFA code, the authentication is set up correctly.
3.3 Create Admin Account
In this section, we are going to set up an Admin account. This way, we do not need to log in anymore with our Root user. Go to My Account and choose My Security Credentials. In the section Access Management, we choose Users and Add a User. Choose a User name and choose AWS Management Console access as Access type.
Because we do not have a Group created, we need to create an administrators group with AdministrationAccess in the next step.
After this, a step is shown where we can add Tags, but we just skipped this step. Next, we are presented a Review page where we can review the settings of the user. In a final step, instructions are shown in order to instruct the newly created user.
Log out as Root user, navigate to the AWS management console URL as being shown in the instruction page (this will automatically fill in your AWS account Id) and log in as Admin user. At first login, you will need to provide a new password.
3.4 Apply IAM Password Policy
Last thing to do is to Apply an IAM Password policy. After this step, your IAM dashboard will look as follows:
You can now create other groups and other users if you want to.
3.5 Set a Budget
As mentioned earlier, we will set up a notification when a certain amount of money is charged. This can give you some extra comfort when you just want to experiment with AWS services and do not want to be confronted with a high bill. Go to My Account and choose My Billing Dashboard and Budgets. Click the Create a Budget button.
Choose Cost Budget as budget type and click the Set your budget button.
Leave the defaults in this page and only change the budget amount to 1$. Do not forget to set a name also, it is mandatory. The button Configure Alerts will not be enabled before you do so and no indication is given why.
In the Configure Alerts screen, you can set a notification alert. Fill in the email contacts and click Add email contact to add the email addres. When finished, click the Confirm Budget button
After reviewing everything at the Review page, the budget is created. You can create many other budgets if you want to, but for now this will be sufficient.
4. Create a Spring Boot App
Now that we have setup our AWS account, we need to create a simple Spring Boot App. We will create a Spring Web MVC application with a
HelloController which just returns a hello message including the IP address.
We go to Spring Initializr, select the Spring Web dependency, use Spring Boot 2.3.4, Java 11 and generate the project. The
HelloController looks as follows:
@RestController public class HelloController { @GetMapping("/hello") public String hello() { String message = "Hello AWS Elastic Beanstalk!"; try { InetAddress ip = InetAddress.getLocalHost(); message += " From host: " + ip; } catch (UnknownHostException e) { e.printStackTrace(); } return message; } }
Run the application:
$ mvn spring-boot:run
And invoke the URL:
$ curl Hello AWS Elastic Beanstalk! From host: your-computer-name/127.0.1.1
The sources of this project can be found at GitHub.
5. Install and Configure EB CLI
Before we can start with deploying our application, we need to install and configure the Elastic Beanstalk CLI tool which will allow us to deploy.
5.1 Install EB CLI
The instructions for installing the EB CLI can be found at GitHub. We describe the instructions for Ubuntu 20.04.
First, we clone the git repository:
$ git clone
Next, run the installer from the directory where you executed the
git clone command:
$ ./aws-elastic-beanstalk-cli-setup/scripts/bundled_installer ============================================== I. Installing Python ============================================== ************************************************************* 1. Determining whether pyenv is already installed and in PATH ************************************************************* - pyenv was not found in PATH. ********************************************************* 2. Determining whether pyenv should be cloned from GitHub ********************************************************* ********************************************************************************* 3. Cloning the pyenv GitHub project located at ********************************************************************************* Cloning into '/home/user/.pyenv-repository'... remote: Enumerating objects: 18348, done. remote: Total 18348 (delta 0), reused 0 (delta 0), pack-reused 18348 Receiving objects: 100% (18348/18348), 3.66 MiB | 3.63 MiB/s, done. Resolving deltas: 100% (12501/12501), done. Switched to a new branch 'rel-1.2.9' ******************************************* 4. Temporarily export necessary pyenv paths ******************************************* **************************************************************************** 5. Checking whether Python can be downloaded (through curl, wget, or aria2c) **************************************************************************** ************************************************************ 6. Installing Python 3.7.2. This step may take a few minutes ************************************************************ Downloading Python-3.7.2.tar.xz... -> Installing Python-3.7.2... BUILD FAILED (Ubuntu 20.04 using python-build 20180424) Inspect or clean up the working tree at /tmp/python-build.20201003121924.12884 Results logged to /tmp/python-build.20201003121924.12884.log Last 10 log lines: File "/tmp/tmpx6tk7g8s/pip-18.1-py2.py3-none-any.whl/pip/_internal/cli/main_parser.py", line 12, in <module> File "/tmp/tmpx6tk7g8s/pip-18.1-py2.py3-none-any.whl/pip/_internal/commands/__init__.py", line 6, in <module> File "/tmp/tmpx6tk7g8s/pip-18.1-py2.py3-none-any.whl/pip/_internal/commands/completion.py", line 6, in <module> File "/tmp/tmpx6tk7g8s/pip-18.1-py2.py3-none-any.whl/pip/_internal/cli/base_command.py", line 18, in <module> File "/tmp/tmpx6tk7g8s/pip-18.1-py2.py3-none-any.whl/pip/_internal/download.py", line 38, in <module> File "/tmp/tmpx6tk7g8s/pip-18.1-py2.py3-none-any.whl/pip/_internal/utils/glibc.py", line 3, in <module> File "/tmp/python-build.20201003121924.12884/Python-3.7.2/Lib/ctypes/__init__.py", line 7, in <module> from _ctypes import Union, Structure, Array ModuleNotFoundError: No module named '_ctypes' make: *** [Makefile:1130: install] Error 1 Exiting due to failure
The build fails. However, paragraph 2.3 Troubleshooting of the installation instructions gives us the answer. Most problems during installation are due to missing libraries. So we execute the following:
$ sudo apt-get install \ > build-essential zlib1g-dev libssl-dev libncurses-dev \ > libffi-dev libsqlite3-dev libreadline-dev libbz2-dev
Then we run the installer again:
$ ./aws-elastic-beanstalk-cli-setup/scripts/bundled_installer
This did the trick, the EB CLI is successfully installed!
Last thing to do is to ensure that the
eb command is available in your path.
$ echo 'export PATH="/home/gunter/.ebcli-virtual-env/executables:$PATH"' >> ~/.bash_profile && source ~/.bash_profile
5.2 Configure the EB CLI
Next step is to configure the EB CLI. Navigate to your project directory and run the
eb initialisation command:
$) 19) eu-south-1 : EU (Milano) 20) ap-east-1 : Asia Pacific (Hong Kong) 21) me-south-1 : Middle East (Bahrain) 22) af-south-1 : Africa (Cape Town) (default is 3): 17
We choose 17 because that is the area we are residing.
In the next step, we need a Security Access key. Go to the AWS Management Console, click My Account – My Security Credentials. We need to create an access key.
Click the Create access key button. In a popup window your Access key ID and Secret access key are shown. This information will be shown only once, so make sure you store it somewhere in a safe place.
Now fill in the Access Key and Secret and continue:
You have not yet set up your credentials or your credentials are incorrect You must provide your credentials. (aws-access-id): <fill in your access id> (aws-secret-key): <fill in your secret key>
Choose an application name,we keep the default and choose Java as platform because we created a Spring Boot jar file.
Enter Application Name (default is "MyElasticBeanstalkPlanet"): Application MyElasticBeanstalkPlanet has been created. Select a platform. 1) .NET Core on Linux 2) .NET on Windows Server 3) Docker 4) GlassFish 5) Go 6) Java 7) Node.js 8) PHP 9) Packer 10) Python 11) Ruby 12) Tomcat (make a selection): 6
Next, we need to select a platform branch. We are using Java 11 so option 1 is the one we need. Amazon Corretto 11 is a no-cost, multiplatform, production-ready distribution of OpenJDK 11.
Select a platform branch. 1) Corretto 11 running on 64bit Amazon Linux 2 2) Corretto 8 running on 64bit Amazon Linux 2 3) Java 8 running on 64bit Amazon Linux 4) Java 7 running on 64bit Amazon Linux (default is 1):
Last two questions are answered with no. CodeCommit will store your code in AWS CodeCommit, this will speed up deployments, but for our small example, this is not necessary. The SSH keys are needed when you want to have acces via SSH.
Do you wish to continue with CodeCommit? (Y/n): n Do you want to set up SSH for your instances? (Y/n): n
In our repository, a
config.yml file is added to directory
.elasticbeanstalk with the following contents:
branch-defaults: master: environment: null group_suffix: null global: application_name: MyElasticBeanstalkPlanet branch: null default_ec2_keyname: null default_platform: Corretto 11 running on 64bit Amazon Linux 2 default_region: eu-west-3 include_git_submodules: true instance_profile: null platform_name: null platform_version: null profile: eb-cli repository: null sc: git workspace_type: Application
The Elastic Beanstalk files are also automatically added to the
.gitignore file.
The configuration of EB CLI is ready!
6. Deploy to AWS
A few things are left to do before we can deploy the Spring Boot App to Elastic Beanstalk. The Elastic Beanstalk environments run an nginx instance on port 80 to proxy the actual application, running on port 5000. Therefore, we need to set the server port to port 5000 in the
applications.properties file.
server.port=5000
Now build the application which will create the jar file
target/MyElasticBeanstalkPlanet-0.0.1-SNAPSHOT.jar:
$ mvn clean install
Next, we need to add the following to the
.elasticbeanstalk/config.yml:
deploy: artifact: target/MyElasticBeanstalkPlanet-0.0.1-SNAPSHOT.jar
Finally, we are going to create our AWS environment. Do so with the
-s option, otherwise a loadbalancer is created which will cost extra. The
-s option will create a single instance. Wait a few minutes and the environment will be available.
$ eb create -s Enter Environment Name (default is MyElasticBeanstalkPlanet-dev): Enter DNS CNAME prefix (default is MyElasticBeanstalkPlanet-dev): Would you like to enable Spot Fleet requests for this environment? (y/N): N 2.0+ Platforms require a service role. We will attempt to create one for you. You can specify your own role using the --service-role option. Type "view" to see the policy, or just press ENTER to continue: Uploading: [##################################################] 100% Done... Environment details for: MyElasticBeanstalkPlanet-dev Application name: MyElasticBeanstalkPlanet Region: eu-west-3 Deployed Version: app-b988-201003_134943 Environment ID: e-ftvkcpf2zr Platform: arn:aws:elasticbeanstalk:eu-west-3::platform/Corretto 11 running on 64bit Amazon Linux 2/3.1.1 Tier: WebServer-Standard-1.0 CNAME: MyElasticBeanstalkPlanet-dev.eu-west-3.elasticbeanstalk.com Updated: 2020-10-03 11:49:50.471000+00:00 Printing Status: 2020-10-03 11:49:49 INFO createEnvironment is starting. 2020-10-03 11:49:50 INFO Using elasticbeanstalk-eu-west-3-093997425909 as Amazon S3 storage bucket for environment data. 2020-10-03 11:50:15 INFO Created security group named: awseb-e-ftvkcpf2zr-stack-AWSEBSecurityGroup-B7WZ79XCRAGR 2020-10-03 11:50:30 INFO Created EIP: 15.236.185.10 2020-10-03 11:51:32 INFO Waiting for EC2 instances to launch. This may take a few minutes. 2020-10-03 11:52:09 INFO Instance deployment successfully generated a 'Procfile'. 2020-10-03 11:52:09 INFO Instance deployment successfully detected a JAR file in your source bundle. 2020-10-03 11:52:12 INFO Instance deployment completed successfully. 2020-10-03 11:52:28 INFO Application available at MyElasticBeanstalkPlanet-dev.eu-west-3.elasticbeanstalk.com. 2020-10-03 11:52:28 INFO Successfully launched environment: MyElasticBeanstalkPlanet-dev
Lets check whether we can access our application (the URL is taken from the last but one log line):
$ curl Hello AWS Elastic Beanstalk! From host: ip-172-31-47-35.eu-west-3.compute.internal/172.31.47.35
With the command
eb console the console of your environment can be viewed. Just try it out and navigate the pages. Quite some information about your running application is available.
What if we want to change something and want to redeploy our application? This is quite simple. Let’s make a small change to the hello message and add the word again to the text to be displayed:
String message = "Hello again AWS Elastic Beanstalk!";
Rebuild the application with
mvn clean install and execute the
eb deploy command in order to deploy the application:
$ eb deploy Uploading: [##################################################] 100% Done... 2020-10-03 12:10:10 INFO Environment update is starting. 2020-10-03 12:10:14 INFO Deploying new version to instance(s). 2020-10-03 12:10:17 INFO Instance deployment successfully detected a JAR file in your source bundle. 2020-10-03 12:10:17 INFO Instance deployment successfully generated a 'Procfile'. 2020-10-03 12:10:20 INFO Instance deployment completed successfully. 2020-10-03 12:10:27 INFO New application version was deployed to running EC2 instances. 2020-10-03 12:10:27 INFO Environment update completed successfully.
Run the
curl command again and the updated message is returned:
$ curl Hello again AWS Elastic Beanstalk! From host: ip-172-31-47-35.eu-west-3.compute.internal/172.31.47.35
7. Terminate the environment
At the end, it is wise to remove the environment. This can be done with the
eb terminate command following the environment name. After a few minutes, the environment is gone.
$ eb terminate MyElasticBeanstalkPlanet-dev The environment "MyElasticBeanstalkPlanet-dev" and all associated instances will be terminated. To confirm, type the environment name: MyElasticBeanstalkPlanet-dev 2020-10-03 12:12:52 INFO terminateEnvironment is starting. 2020-10-03 12:13:09 INFO Waiting for EC2 instances to terminate. This may take a few minutes. 2020-10-03 12:14:40 INFO Deleted security group named: awseb-e-ftvkcpf2zr-stack-AWSEBSecurityGroup-B7WZ79XCRAGR 2020-10-03 12:14:56 INFO Deleted EIP: 15.236.185.10 2020-10-03 12:14:58 INFO Deleting SNS topic for environment MyElasticBeanstalkPlanet-dev. 2020-10-03 12:14:59 INFO terminateEnvironment completed successfully.
8. Conclusion
It takes some time in order to setup your AWS account properly but this is something you need to do only once. After this, creating an environment for deploying your Spring Boot application is pretty easy and (re)deployments can be executed by means of single command.
2 thoughts on “How to Deploy a Spring Boot App to AWS Elastic Beanstalk” | https://mydeveloperplanet.com/2020/10/21/how-to-deploy-a-spring-boot-app-to-aws-elastic-beanstalk/ | CC-MAIN-2020-50 | refinedweb | 3,077 | 57.57 |
Back to: C#.NET Tutorials For Beginners and Professionals
Data Types in C# with Examples
In this article, I am going to discuss the Data Types in C# with examples. Please read our previous article where we discuss the Console Class Methods and Properties in C# before proceeding to this article. As a developer, it is very important to understand the Data Type in C#. This is because you need to decide which data type to use for a specific type of value. As part of this article, we are going to discuss the following pointers related to the C# data type in detail.
- Why do we need data types in C#?
- What is a data type in C#?
- Different types of Data types in C#.
- What is Value Data Type in C#?
- What is Reference Data Type in C#?
- Examples using Built-in Data Types in C#.
- What is Pointer Type?
- Examples using Escape Sequences in C#.
Why do we need data types in C#?
The Datatypes in C# are basically used to store the data temporarily in the computer through a program. In the real world, we have different types of data like integers, floating-point, characters, strings, etc. To store all these different kinds of data in a program to perform business-related operations, we need the data types.
What is a data type in C#?
The Datatypes are something that gives information about
- Size of the memory location.
- The range of data that can be stored inside that memory location
- Possible legal operations that can be performed on that memory location.
- What types of results come out from an expression when these types are used inside that expression.
The keyword which gives all the above information is called the data type.
What are the different types of Data types available in C#?
A data type in C# specifies the type of data that a variable can store such as integer, floating, character, string, etc. The following diagram shows the different types of data types available in C#.
There are 3 types of data types available in the C# language.
- Value Data Type
- Reference Data Type
- Pointer Data Type
Let us discuss each of these data types in detail
What is Value Data Type in C#?
The data type which stores the value directly is called the Value Data Type. They are derived from the class System.ValueType. The examples are int, char, and float which store numbers, alphabets, and floating-point numbers respectively. The value data types in C# again classified into two types are as follows.
- Predefined Data Types – Example includes Integer, Boolean, Float, etc.
- User-defined Data Types – Example includes Structure, Enumerations, etc.
Based on the Operating system (32 or 64-bit), the size of the memory of the data types may change. If you want to know the actual size of a type or a variable on a particular operating system, then you can make use of the sizeof method.
Let’s understand this with an example. The following example gets the size of int type on any platform.
namespace FirstProgram { class Program { static void Main(string[] args) { Console.WriteLine("Size of int: {0}", sizeof(int)); Console.ReadKey(); } } }
When we execute the above code, it gives the following output:
What is Reference Data Type in C#?
The data type which is used to store the reference of a variable is called Reference Data Types. In other words, we can say that the reference types do not store the actual data stored in a variable, rather they store the reference to the variables. We will discuss this concept in a later article.
Again, the Reference Data Types are categorized into 2 types. They are as follows.
- Predefined Types – Examples include Objects, String, and dynamic.
- User-defined Types – Examples include Classes, Interface.
What is Pointer Type?
The pointer in C# language is a variable, it is also known as a locator or indicator that points to an address of the value which means pointer type variables stores the memory address of another type. We will discuss this concept in detail in a later article.
Built-in Data Types in C#
The built-in Data Types in C# are as follows
- Boolean type – Only true or false
- Integral Types – sbyte, byte, short, ushort, int, uint, long, ulong, char
- Floating Types – float and double
- Decimal Types
- String Type
What is Escape Sequence in C#?
Verbatim Literal is a string with an @ symbol prefix, as in @“Hello”. The Verbatim literals make escape sequences translate as normal printable characters to enhance readability.
Without Verbatim Literal: “C:\\Pranaya\\DotNetTutorials\\Csharp” – Less Readable
With Verbatim Literal: @“C:\Pranaya\ DotNetTutorials\Csharp” – Better Readable
Let’s understand this with an example:
namespace FirstProgram { class Program { static void Main(string[] args) { // Displaying double quotes in c# string Name = "\"Dotnettutorials\""; Console.WriteLine(Name); // Displaying new line character in c# Name = "One\nTwo\nThree"; Console.WriteLine(Name); // Displaying new line character in c# Name = "c:\\Pranaya\\Dotnettutorials\\Csharp"; Console.WriteLine(Name); // C# verbatim literal Name = @"c:\Pranaya\Dotnettutorials\Csharp"; Console.WriteLine(Name); Console.ReadKey(); } } }
Output:
That’s it for today. In the next article, I am going to discuss the Literals in C# with examples. Here, in this article, I try to explain the Data Types in C# with some examples. I hope you understood the need and use of data types and I would like to have your feedback about this article.
2 thoughts on “Data Types in C#”
Great article. Thank you so much
Good article however, line Name = “c:\Pranaya\Dotnettutorials\Csharp”; will not compile and will correctly indicate an unrecognised escape sequence, like \P does not exist.
I assume you rather meant Name = “c:\\Pranaya\\Dotnettutorials\\Csharp”;
Thanks,
—– Jean Buchnik | https://dotnettutorials.net/lesson/data-types-in-csharp/ | CC-MAIN-2022-21 | refinedweb | 958 | 57.27 |
Sure here is what i had to make. The script is the one you sent me first, I just renamed it
with best regards
Mike
i found out that it must be because of the script.
because if i use the 2nd script you sent me (this with the scene) in the scene that didnt work before then it works too.
with best regards
mike
Glad that it works now, and thanks for the archive.
Kind regards,
Gökhan
I had same problem at Wikitude + Unity ARFoundation + URP.
And I found a way to fix it.
E WKTD : >ERROR: [Runtime][00:00:00.106.371] > Could not start the camera because no activeCamera has been set. (code 1003)
Unity 2020.3.25
Wikitude 9.6.0
Pixel 5a, Android 12
I think this problem occurs when the component "Wikitude SDK" is executed before the component "AR Foundation Plugin".
By setting the "Script execution order" so that ARFoundationPlugin is executed first, the problem no longer occurs.
If the order is reversed, an error will always occur.
In the sample scene ("AR Foundation - Multiple Trackers" etc.), I assume that ARFoundationPlugin is unintentionally executed first.
I made a mistake in writing
Unity 2020.3.25
Wikitude 9.6.0 9.10.0
Pixel 5a, Android 12
Just to confirm. Is "AR Foundation - Multiple Trackers" not working properly after this error?
Kind regards,
Gökhan
"AR Foundation - Multiple Trackers" works correctly by default.
However, if I intentionally change the script execution order, the application will crash after the error mentioned earlier.
regards.
Ok, understand. Thanks for reporting this! I'll make sure to include this in our documentation!
Kind regards,
Gökhan
I am still having a similar problem.
Unity 2020.3.25f1
Wikitude 9.10.0
Android Galaxy S10
I am working on a simple Object Recognition Scene, which worked perfectly before changing to the URP. Even after trying the solutions here mentioned, I am still having a black screen as background. I tested as well the sample scene: "Instant Tracking - Simple " and it is showing the same problem.
I have another scene in the same project only with AR Foundation and there everything works fine.
Is there any Sample project which already works with Wikitude and the URP?
Best Regards
Hi,
Unfortunately, URP is currently only supported in the Expert Edition Unity SDK.
Kind regards,
Gökhan
Hi Gökhan,
Thanks for your answer! Yes I know the URP is only supported in the Expert Edition SDK, I have a Trial License now, since we are testing the functionality for a project. And as far as I understand the Trial License gives us access to all Wikitude SDK features including the URP support right?
Best Regards
Mike Hoffmann
Hello,
I try to get the sample szene from Wikitude "Simple - Alignment Initialization" To work with the Universal Render Pipeline and ARfoundation.
I try it with different Unity Version
Unity 2020.2.2f1
Unity 2019.4.21f1
Unity 2019.4.2f1
and with Wikitude 9.6.0
the first step i try to get Wikitude with URP to work end i get every time the same error.
if i build it for Android i dont get a AR Background.
if i try to use the Window -> Wikitude - URP Helper
i get this error: Packages\com.wikitude.core\Editor\Helpers\URPHelper.cs(11,29): error CS0234: The type or namespace name 'Universal' does not exist in the namespace 'UnityEngine.Rendering' (are you missing an assembly reference?)
also i try to make it manualy like here
but nothing work i hope anyone can help me
with best regards
Mike | https://support.wikitude.com/support/discussions/topics/5000095954/page/last | CC-MAIN-2022-33 | refinedweb | 598 | 67.35 |
#include <SSL_Context.h>
#include <SSL_Context.h>
Collaboration diagram for ACE_SSL_Context:
This class provides a wrapper for the SSL_CTX data structure. Since most applications have a single SSL_CTX structure, this class can be used as a singleton.
Constructor.
Destructor.
[private]
SSL_FILETYPE_PEM
Set the certificate file.
Get the file name and file format used for the certificate file.
Verify if the context has been initialized or not.
Get the SSL context.
Set and query the default verify mode for this context, it is inherited by all the ACE_SSL objects created using the context. It can be overriden on a per-ACE_SSL object.
Load Diffie-Hellman parameters from file_name. The specified file can be a standalone file containing only DH parameters (e.g., as created by openssl dhparam), or it can be a certificate which has a PEM-encoded set of DH params concatenated on to i.
openssl dhparam
Set the Entropy Gathering Daemon (EGD) UNIX domain socket file to read random seed values from.
Test whether any CA locations have been successfully loaded and return the number of successful attempts.
0 If all CA load attempts have failed.
[static]
The Singleton context, the SSL components use the singleton if nothing else is available.
0
Load the location of the trusted certification authority certificates. Note that CA certificates are stored in PEM format as a sequence of certificates in <ca_file> or as a set of individual certificates in <ca_dir> (or both).
Note this method is called by set_mode() to load the default environment settings for <ca_file> and <ca_dir>, if any. This allows for automatic service configuration (and backward compatibility with previous versions.
Note that the underlying SSL function will add valid file and directory names to the load location lists maintained as part of the SSL_CTX table. (... It therefore dosn't make sense to keep a copy of the file and path name of the most recently added <ca_file> or <ca_path>.
Set the private key file.
Get the file name and file format used for the private key.
Seed the underlying random number generator. This value should have at least 128 bits of entropy.
Print the last SSL error for the current thread.
Print SSL error corresponding to the given error code.
-1
Set the file that contains the random seed value state, and the amount of bytes to read. "-1" bytes causes the entire file to be read.
ACE_SSL_Context::SSLv23
Set the CTX mode. The mode can be set only once, afterwards the function has no effect and returns -1. Once the mode is set the underlying SSL_CTX is initialized and the class can be used. If the mode is not set, then the class automatically initializes itself to the default mode.
1
Note for verification to work correctly there should be a valid CA name list set using load_trusted_ca().
OpenSSL documentation ... set_verify_depth(3) ...
More to document.
@
Verify that the private key is valid.
The SSL_CTX structure.
The default verify mode.
count of successful CA load attempts
Cache the mode so we can answer fast.
The private key, certificate, and Diffie-Hellman paramters files. | https://www.dre.vanderbilt.edu/Doxygen/5.4.9/html/ace/ssl/classACE__SSL__Context.html | CC-MAIN-2022-40 | refinedweb | 512 | 59.09 |
Moisture Sensor using HTTP POST Requests to Channel
This example shows how to post multiple fields of data to a ThingSpeak™ channel from a device that wakes from deep sleep. You read a soil moisture sensor and post the value to a ThingSpeak channel. The HTTP POST request is executed by writing to a communication client without a separate library. Directly writing the HTTP request to the wireless network client can offer increased flexibility and speed over the ThingSpeak Communication Library.
Supported Hardware
ESP8266-12
NodeMCU ESP8266-12
Arduino with Ethernet or wireless connection (with some code adjustments)
In this example, the onboard ADC reads a moisture sensor and posts the value and the elapsed time to two fields of a ThingSpeak channel. You can modify the POST to fill up to eight fields with data.. In between measurements, the whole device is put into deep-sleep mode to save power. Once data is posted to the channel, you can set up reactions to the data. For example, you can set the React app to notify you that the moisture level is low.
Prerequisites
1) Create a ThingSpeak channel, as shown in Collect Data in a New Channel.
2) On the Channel Settings tab, enable field 1. You can provide an informative field name such as
Moisture Value.
3) Note the write API key from the API Keys tab. You need this value in the code used for programming your device. For additional information, see Channel Configurations and Channel Properties.
Required Hardware
ESP8266-based board or Arduino board with inernet connection (NodeMCU ESP8266-12E used for this demonstration)
Soil moisture sensor (for example, the Sparkfun Moisture Sensor)
Jumper wires (at least 4)
USB cable
Schematic and Connections
1) Connect VCC of the moisture sensor to pin D7 on the NodeMCU.
2) Connect the sensor Gnd to the NodeMCU ground.
3) Connect the sensor Sig pin to NodeMCU pin A0.
4) Connect the NodeMCU Rst pin to NodeMCU pin D0, to enable wake up from deep sleep.
Program Your Device
1) Download the latest Arduino® IDE.
2) Add the ESP8266 Board Package.
a) Enter into Additional Board Manager URLs under File > Preferences.
b) Choose Tools > Boards > Board Manager. Search for
ESP8266 in the search bar and install the package.
3) Select the appropriate port and board in the Arduino IDE. The hardware used to generate this example used the
Node MCU 1.0 (ESP 8266–12E) option.
4) Create the application: Open a new window in the Arduino IDE and save the file. Add the code provided in the Code section. Be sure to edit the wireless network information and API key in the code.
5) After you successfully upload your program, you can monitor the output using the serial monitor or your channel view page.
Code
1) Include the
ESP8266WiFi library and initialize variables for hardware and data collection. Edit the network information and write API key in your code.
#include <ESP8266WiFi.h> // Network information. #define WIFI_NAME "YOUR_WIFI_NAME" #define PASSWORD "WIFI_PASSWORD" // Hardware information. #define SENSOR_POWER 13 // Connect the power for the soil sensor here. #define SOIL_PIN A0 // Connect the sensor output pin here. #define TIMEOUT 5000 // Timeout for server response. #define SLEEP_TIME_SECONDS 1800 // ThingSpeak information. #define NUM_FIELDS 2 // To update more fields, increase this number and add a field label below. #define SOIL_MOISTURE_FIELD 1 // ThingSpeak field for soil moisture measurement. #define ELAPSED_TIME_FIELD 2 // ThingSpeak field for elapsed time from startup. #define THING_SPEAK_ADDRESS "api.thingspeak.com" String writeAPIKey="XXXXXXXXXXXXXXXX"; // Change this to the write API key for your channel. // Global variables. int numMeasure = 5; // Number of measurements to average. int ADCValue = 0; // Moisture sensor reading. WiFiClient client;
2) In the
setup function, start the serial monitor, connect to the wireless network, and initialize the device pins that you use.
// Put your setup code here, to run once: void setup() { Serial.begin( 115200 ); // You may need to adjust the speed depending on your hardware. connectWifi(); pinMode( SENSOR_POWER , OUTPUT ); digitalWrite( SENSOR_POWER , LOW ); // Set to LOW so no power is flowing through the sensor. }
3) In the main loop, read the soil monitor and store it in the
data array. POST the data to ThingSpeak, and then put the device in low-power mode.
// Put your main code here, to run repeatedly: void loop() { // Write to successive fields in your channel by filling fieldData with up to 8 values. String fieldData[ NUM_FIELDS ]; // You can write to multiple fields by storing data in the fieldData[] array, and changing numFields. // Write the moisture data to field 1. fieldData[ SOIL_MOISTURE_FIELD ] = String( readSoil( numMeasure ) ); Serial.print( "Soil Moisture = " ); Serial.println( fieldData[ SOIL_MOISTURE_FIELD ] ); // Write the elapsed time from startup to Field 2. fieldData[ ELAPSED_TIME_FIELD ] = String( millis() ); HTTPPost( NUM_FIELDS , fieldData ); delay( 1000 ); Serial.print( "Goodnight for "+String( SLEEP_TIME_SECONDS ) + " Seconds" ); ESP.deepSleep( SLEEP_TIME_SECONDS * 1000000 ); // If you disable sleep mode, add delay so you don't post to ThingSpeak too often. // delay( 20000 ); }
4) Use the
readSoil function to provide power to the sensor, and then read the voltage at the output using the ADC. Turn off the power after measurement.
// This function reads the soil moisture sensor numAve times and returns the average. long readSoil(int numAve) { long ADCValue = 0; for ( int i = 0; i < numAve; i++ ) { digitalWrite( SENSOR_POWER, HIGH ); // Turn power to device on. delay(10); // Wait 10 milliseconds for sensor to settle. ADCValue += analogRead( SOIL_PIN ); // Read the value from sensor. digitalWrite( SENSOR_POWER, LOW ); // Turn power to device off. } ADCValue = ADCValue / numAve; return ADCValue; // Return the moisture value. }
5) Connect your device to the wireless network using the
connectWiFi function.
// Connect to the local Wi-Fi network int connectWifi() { while (WiFi.status() != WL_CONNECTED) { WiFi.begin( WIFI_NAME , PASSWORD ); Serial.println( "Connecting to Wi-Fi" ); delay( 2500 ); } Serial.println( "Connected" ); // Inform the serial monitor. }
6) Build the data string to post to your channel. Connect to ThingSpeak, and use the Wi-Fi client to complete an HTTP POST.
// This function builds the data string for posting to ThingSpeak // and provides the correct format for the wifi client to communicate with ThingSpeak. // It posts numFields worth of data entries, and takes the // data from the fieldData parameter passed to it. int HTTPPost( int numFields , String fieldData[] ){ if (client.connect( THING_SPEAK_ADDRESS , 80 )){ // Build the postData string. // If you have multiple fields, make sure the sting does not exceed 1440 characters. String postData= "api_key=" + writeAPIKey ; for ( int fieldNumber = 1; fieldNumber < numFields+1; fieldNumber++ ){ String fieldName = "field" + String( fieldNumber ); postData += "&" + fieldName + "=" + fieldData[ fieldNumber ]; } // POST data via HTTP. Serial.println( "Connecting to ThingSpeak for update..." ); Serial.println(); client.println( "POST /update HTTP/1.1" ); client.println( "Host: api.thingspeak.com" ); client.println( "Connection: close" ); client.println( "Content-Type: application/x-www-form-urlencoded" ); client.println( "Content-Length: " + String( postData.length() ) ); client.println(); client.println( postData ); Serial.println( postData ); String answer=getResponse(); Serial.println( answer ); } else { Serial.println ( "Connection Failed" ); } }
7) Wait for and receive the response from the server using
getResponse.
// Wait for a response from the server indicating availability, // and then collect the response and build it into a string. String getResponse(){ String response; long startTime = millis(); delay( 200 ); while ( client.available() < 1 && (( millis() - startTime ) < TIMEOUT ) ){ delay( 5 ); } if( client.available() > 0 ){ // Get response from server. char charIn; do { charIn = client.read(); // Read a char from the buffer. response += charIn; // Append the char to the string response. } while ( client.available() > 0 ); } client.stop(); return response; }
You can determine the useful range of values by monitoring your channel over wet and dry cycles. The number read by the ADC and posted to your channel is proportional to the voltage, and thus proportional to the soil moisture. The values vary depending on the temperature, humidity, and soil type. Once you know the values for dry soil, you can use the React app to generate a notification that it is time to water the plant. For more information on setting up React, see React App. | https://in.mathworks.com/help/thingspeak/MoistureMonitor.html | CC-MAIN-2022-33 | refinedweb | 1,306 | 59.4 |
Movies on film are shot at 24 frames per second (fps), and television (in the US) is broadcast at approximately 60 fields per second, or 60i where the i suffix means "interlaced" where one frame shows only the even horizontal scan lines, and the next one shows the odd scan lines. The antonym of "interlaced" is "progressive" which means the whole picture is shown for every frame. Movies on film are progressive. The problem to adapt movies for show on a television is known as Telecine. Film and TV engineers have always known that telecine could introduce jitter. The jitter is not so much due to the difference between progressive and interlaced, but due to the difference in frame rate, since 24 fps does not divide 60 fps.
Frames are supposed to be shown at a constant rate, which means a fixed interval. However, when you adapt one frame rate to another one, a frame would be shown for a longer duration than the next one. In the figure above, the green frame from the 24 fps source spans 3 frames in 60 fps, but the blue frame only spans 2 frames.
This difference causes the motion to appear jerky.
Of course, this problem happens when you perform constant rate sampling and wish to adapt the sampling to another rate. When it comes to a stream of wave-form such as audio, an interpolation scheme tries to "guess" the curve of the wave-form and reapply sampling to the guessed curve. This introduces interpolation error if our guesstimate is off.
How does it relate to computer science? When you have two fixed-rate link layer network and try to connect them, the rate difference causes jitter.
Wed thing { foo = 13, bar = 31, };what would the following program print? The answer may be surprising.
#include <iostream> int main() { thing x = thing(); // invokes enum's default constructor. std::cout << "x = " << x << std::endl; return 0; }It turns out that the code above will print x = 0, which is the integer value of neither foo nor bar. The reason is that enum is considered plain old data (ISO/IEC 14882 2003 section 3.9 clause 10), and all POD type have default initializer that initializes the value to zero (ISO/IEC 14882 2003 section 8.5 clause 5).
When dealing with an enum type value, we must always consider the possibility of 0 even if it was not one of our declared choices.
Posted by Likai Liu at 1:16 PM
Labels: c++, correctness | http://lifecs.likai.org/2010_07_01_archive.html | CC-MAIN-2017-09 | refinedweb | 421 | 71.04 |
Here’s a short-ish introduction to the Lucene search engine which shows you how to use the current API to develop search over a collection of texts. Most of this post is excerpted from Text Processing in Java, Chapter 7, Text Search with Lucene.
Lucene Overview
Apache Lucene is a search library written in Java. It’s popular in both academic and commercial settings due to its performance, configurability, and generous licensing terms. The Lucene home page is.
Lucene provides search over documents. A document is essentially a collection of fields. A field consists of a field name that is a string and one or more field values. Lucene does not in any way constrain document structures. Fields are constrained to store only one kind of data, either binary, numeric, or text data. There are two ways to store text data: string fields store the entire item as one string; text fields store the data as a series of tokens. Lucene provides many ways to break a piece of text into tokens as well as hooks that allow you to write custom tokenizers. Lucene has a highly expressive search API that takes a search query and returns a set of documents ranked by relevancy with documents most similar to the query having the highest score.
The Lucene API consists of a core library and many contributed libraries. The top-level package is
org.apache.lucene, which is abbreviated as
oal in this article. As of Lucene 4, the Lucene distribution contains approximately two dozen package-specific jars, e.g.:
lucene-core-4.7.0.jar,
lucene-analyzers-common-4.7.0.jar,
lucene-misc-4.7.0.jar. This cuts down on the size of an application at a small cost to the complexity of the build file.
A Lucene Index is an Inverted Index
Lucene manages an index over a dynamic collection of documents and provides very rapid updates to the index as documents are added to and deleted from the collection. An index may store a heterogeneous set of documents, with any number of different fields that may vary by document in arbitrary ways. Lucene indexes terms, which means that Lucene search is search over terms. A term combines a field name with a token. The terms created from the non-text fields in the document are pairs consisting of the field name and field value. The terms created from text fields are pairs of field name and token.
The Lucene index provides a mapping from terms to documents. This is called an inverted index because it reverses the usual mapping of a document to the terms it contains. The inverted index provides the mechanism for scoring search results: if a number of search terms all map to the same document, then that document is likely to be relevant.
Here are three entries from an index over part of the The Federalist Papers, a collection of 85 political essays which contains roughly 190,000 word instances over a vocabulary of about 8,000 words. A field called
text holds the contents of each essay, which have been tokenized into words, all lowercase, no punctuation. The inverted index entries for the terms consisting of field name
text and tokens abilities, able, and abolish are:
Note that the document numbers here are Lucene’s internal references to the document. These ids are not stable; Lucene manages the document id as it manages the index and the internal numbering may change as documents are added to and deleted from the index.
Lucene Versions and Version Numbers
The current Apache Lucene Java release is version 4.7, where 4 is the major version number and 7 is the minor version number. The Apache odometer rolled over to 4.6.0 in November, 2013 and just hit 4.7.0 on February 26, 2014. At the time that I wrote the Lucene chapter of Text Processing in Java, the current version was 4.5. Minor version updates maintain backwards compatibility for the given major version therefore, all the example programs in the book compile and run under version 4.7 as well.
The behavior of many Lucene components has changed over time. In particular, the index file format is subject to change from release to release as different methods of indexing and compressing the data are implemented. To address this, the
Enum class
oal.util.Version was introduced in Lucene 3. A
Version instance identifies the major and minor versions of Lucene. For example,
LUCENE_45 identifies version 4.5. The Lucene version is supplied to the constructor of the components in an application. As of Lucene 4.7, older versions have been deprecated, so the compiler issues a warning when older versions are specified. This means that the examples from the book, which specify version 4.5, generate a compiler warning when compiled under version 4.7.
There’s no requirement that all components in an application be of the same version however, for components used for both search and indexing, it is critical that the Lucene version is the same in the code that is called at indexing time and the code that is called at search time. A Lucene index knows what version of Lucene was used to create it (by using the Lucene
Version enum constant). Lucene is backward compatible with respect to searching and maintaining old index versions because it includes classes that can read and write all versions of the index up through the current release.
Lucene Indexes Fields
Conceptually, Lucene provides indexing and search over documents, but implementation-wise, all indexing and search is carried out over fields. A document is a collection of fields. Each field has three parts: name, type, and value. At search time, the supplied field name restricts the search to particular fields.
For example, a MEDLINE citation can be represented as a series of fields: one field for the name of the article, another field for name of the journal in which it was published, another field for the authors of the article, a pub-date field for the date of publication, a field for the text of the article’s abstract, and another field for the list of topic keywords drawn from Medical Subject Headings (MeSH). Each of these fields is given a different name, and at search time, the client could specify that it was searching for authors or titles or both, potentially restricting to a date range and set of journals by constructing search terms for the appropriate fields and values.
The Lucene API for fields has changed across the major versions of Lucene as the functionality and organization of the underlying Lucene index have evolved. Lucene 4 introduces a new interface
oal.index.IndexableField, which is implemented by class
oal.document.Field. Lucene 4 also introduces datatype-specific subclasses of
Field that encapsulate indexing and storage details for common use cases. For example, to index integer values, use class
oal.document.IntField, and to index simple unanalyzed strings (keywords), use
oal.document.StringField. These so-called sugar subclasses are all final subclasses.
The field type is an object that implements
oal.index.IndexableFieldType. Values may be text, binary, or numeric. The value of a field can be indexed for search or stored for retrieval or both. The value of an indexed field is processed into terms that are stored in the inverted index. The raw value of a stored field is stored in the index in a non-inverted manner. Storing the raw values allows you to retrieve them at search time but may consume substantial space.
Indexing and storage options are specified via setter methods on
oal.document.FieldType. These include the method
setIndexed(boolean), which specifies whether or not to index a field, and the method
setTokenized(boolean), which specifies whether or not the value should be tokenized. The method
setOmitNorms(boolean) controls how Lucene computes term frequency. Lucene’s default behavior is to represent term frequency as a proportion by computing the ratio of the number of times a term occurs to the total number of terms in the document, instead of storing a simple term frequency count. To do this calculation it stores a normalizing factor for each field that is indexed. Calling method
setOmitNorms with value
true turns this off and the raw term frequency is used instead.
Some indexing choices are interdependent. Lucene checks the values on a
FieldType object at
Field construction time and throws an
IllegalArgumentException if the
FieldType has inconsistent values. For example, a field must be either indexed or stored or both, so
indexed and/or
stored must be
true. If
indexed is false, then
stored must be true and all other indexing options should be set to false. The following code fragment defines a custom
FieldType and then creates a
Field of this type:
FieldType myFieldType = new FieldType(); myFieldType.setIndexed(true); myFieldType.setOmitNorms(true); myFieldType.setIndexOptions(IndexOptions.DOCS_AND_FREQS); myFieldType.setStored(false); myFieldType.setTokenized(true); myFieldType.freeze(); Field myField = new Field("field name", "field value", myFieldType);
The source code of the subclasses of
Field provides good examples of how to define a
FieldType.files.); } };
Note that.
Indexing Documents
Document indexing consists of first constructing a document that contains the fields to be indexed or stored, then adding that document to the index. The key classes involved in indexing are
oal.index.IndexWriter, which is responsible for adding documents to an index, and
oal.store.Directory, which is the storage abstraction used for the index itself. Directories provide an interface that’s similar to an operating system’s file system. A
Directory contains any number of sub-indexes called segments. Maintaining the index as a set of segments allows Lucene to rapidly update and delete documents from the index.
The following example shows how to create a Lucene index given a directory containing a set of data files. The data in this example is taking from the 20 Newsgroups corpus a set of roughly 20,000 messages posted to 20 different newsgroups. This code is excerpted from the example program in section 7.11 of Text Processing in Java that shows how to do document classification using Lucene. We’ll get to document classification in a later post. Right now we just want to go over how to build the index.
void buildIndex(File indexDir, File dataDir) = dataDir method
buildIndex walks over the training data directory and parses the newsgroup messages into a
NewsPost object, which is a simple domain model of a newsgroup post, consisting of the newsgroup name, subject, body, and filename (a string of numbers). We treat each post as a Lucene document consisting of two fields: a
StringField named
category for the newsgroup name and a
TextField named
text that holds the message subject and message body. Method
buildIndex takes two
java.io.File arguments: the directory for the Lucene index and a directory for one set of newsgroup posts where the directory name is the same as the Usenet newsgroup name, e.g.:
rec.sport.baseball.. and a Lucene
Analyzer named
mAnalyzer. We supply these as arguments to the
IndexWriterConfig constructor. We call the
setOpenMode method with the enum constant
IndexWriterConfig.OpenMode.CREATE, which causes the index writer to create a new index or overwrite an existing one.
A
for loop iterates over all files in the data directory. Each file is first parsed into a Java
NewsPost and a corresponding Lucene document is created and fields are added to the document. This document has two fields:
category and
text. Newsgroup names are stored as simple unanalyzed strings. The field named
text stores both the message subject and message body. A document may have multiple values for a given field. Search over that field will be over all values for that field, however phrase search over that field will not match across different field values. When the
IndexWriter.addDocument(Document) method is called, the document is added to the index and the constituent fields are indexed and stored accordingly.
The final two statements invoke the
IndexWriter‘s commit and close methods, respectively. The
commit method writes all pending changes to the directory and syncs all referenced index files so that the changes are visible to index readers. Lucene now implements a two-phase commit so that if the commit succeeds, the changes to the index will survive a crash or power loss. A commit may be expensive, and part of performance tuning is determining when to commit as well as how often and how aggressively to merge the segments of the index..
Search Queries
Lucene specifies a language in which queries may be expressed. For instance, computer NOT java produces a query that specifies the term computer must appear in the default field and the term java must not appear. Queries may specify fields, as in text:java, which requires the term java to appear in the
text field of a document.
The query syntax includes basic term and field specifications, modifiers for wildcard, fuzzy, proximity, or range searches, and boolean operators for requiring a term to be present, absent, or for combining queries with logical operators. Finally, sub-queries may be boosted by providing numeric values to raise or lower their prominence relative to other parts of the query. The full syntax specification is in the package level javadoc for package
oal.queryparser.classic. A bonus feature of Text Processing in Java is a quick one-page reference guide to Lucene’s search query syntax (see Figure 7.2).
Queries may be constructed programmatically using the dozen or so built-in implementations of the
oal.search.Query abstract base class. The most basic kind of query is a search for a single token on a single field, i.e., a single term. This query is implemented in Lucene’s
oal.search.TermQuery class. A term query is constructed from a
Term object, which is constructed from a field name and text for the term, both specified as strings.
Search Scoring
Lucene’s default search scoring algorithm weights results using TF—IDF, term frequency—inverse document frequency. Term frequency means that high-frequency terms within a document have higher weight than do low-frequency terms. Inverse document frequency means that terms that occur frequently across many documents in a collection of documents are less likely to be meaningful descriptors of any given document in a corpus and are therefore down-weighted. This filters out common words.
As of Lucene 4, the API provides alternative scoring algorithms and a pluggable architecture that allows developers to build their own custom scoring models.
Search Example
The following program illustrates the basic sequence of search operations. This program is a simplified version of the Lucene search example program included with the book. When run from the command line, this program takes three arguments: the path to a Lucene index; a query string; the maximum number of results to return. It runs the specified query against the specified index and prints out the rank, score, and internal document id for all documents that match the query.
import org.apache.lucene.analysis.Analyzer; import org.apache.lucene.analysis.standard.StandardAnalyzer; import org.apache.lucene.index.CorruptIndexException; import org.apache.lucene.index.DirectoryReader; import org.apache.lucene.queryparser.classic.ParseException; import org.apache.lucene.queryparser.classic.QueryParser; import org.apache.lucene.search.IndexSearcher; import org.apache.lucene.search.Query; import org.apache.lucene.search.ScoreDoc; import org.apache.lucene.search.TopDocs; import org.apache.lucene.store.Directory; import org.apache.lucene.store.FSDirectory; import org.apache.lucene.util.Version; import java.io.File; import java.io.IOException; public class LuceneSearch { public static void main(String[] args) throws ParseException, CorruptIndexException, IOException { File indexDir = new File(args[0]); String query = args[1]; int maxHits = Integer.parseInt(args[2]); Directory fsDir = FSDirectory.open(indexDir); DirectoryReader reader = DirectoryReader.open(fsDir); IndexSearcher searcher = new IndexSearcher(reader); Analyzer stdAn = new StandardAnalyzer(Version.LUCENE_45); QueryParser parser = new QueryParser(Version.LUCENE_45,"text",stdAn); Query q= parser.parse(query); TopDocs hits = searcher.search(q,maxHits); ScoreDoc[] scoreDocs = hits.scoreDocs; System.out.println("hits=" + scoreDocs.length); System.out.println("Hits (rank,score,docId)"); for (int n = 0; n < scoreDocs.length; ++n) { ScoreDoc sd = scoreDocs[n]; float score = sd.score; int docId = sd.doc; System.out.printf("%3d %4.2f %d\n", n, score, docId); } reader.close(); } }
The
main method is declared to throw a Lucene corrupt index exception if the index isn’t well formed, a Lucene parse exception if the query isn’t well formed, and a general Java I/O exception if there is a problem reading from or writing to the index directory. It starts off by reading in command-line arguments, then it creates a Lucene directory, index reader, index searcher, and query parser, and then it uses the query parser to parse the query.
Lucene uses instances of the aptly named
IndexReader to read data from an index, in this example, we use an instance of class
oal.index.DirectoryReader. The
oal.search.IndexSearcher class performs the actual search. Every index searcher wraps an index reader to get a handle on the indexed data. Once we have an index searcher, we can supply queries to it and enumerate results in order of their score.
The class
oal.queryparser.classic.QueryParserBase provides methods to parse a query string into a
oal.search.Query object or throw a
oal.queryparser.classic.ParseException if it is not. All tokens are analyzed using the specified
Analyzer. This example is designed to search an index for Lucene version 4.5 that was created using an analyzer of class
oal.analysis.standard.StandardAnalyzer which contains text a field named text.
It is important to use the same analyzer in the query parser as is used in the creation of the index. If they don’t match, queries that should succeed will fail should the different analyzers produce differing tokenizations given the same input. For instance, if we apply stemming to the contents of text field
text during indexing and reduce the word codes to code, but we don’t apply stemming to the query, then a search for the word codes in field
text will fail. The search term is
text:codes but the index contains only the term
text:code. Note that package
oal.queryparser.classic is distributed in the jarfile
lucene-queryparser-4.x.y.jar.
The actual search is done via a call to the
IndexSearcher‘s
search method, which takes two arguments: the query and an upper bound on the number of hits to return. It returns an instance of the Lucene class
oal.search.TopDocs. The
TopDocs result provides access to an array of search results. Each result is an instance of the Lucene class
oal.search.ScoreDoc, which encapsulates a document reference with a floating point score. The array of search results is sorted in decreasing order of score, with higher scores representing better matches. For each
ScoreDoc object, we get the score from the public member variable
score. We get the document reference number (Lucene’s internal identifier for the doc) from the public member variable
doc. The document identifier is used to retrieve the document from the searcher. Here, we’re just using the id as a diagnostic to illustrate the basics of search and so we just print the number, even though it is only an internal reference. Finally, we close the
IndexReader.
Discussion and Further Reading
This post tries to cover the essentials of Lucene 4 in a very short amount of space. In order to do this, this post contains minimal amounts of examples, and links to the Lucene javadocs have been substituted for detailed explanation of how classes and methods behave. Putting these code fragments together into a full application is left as an exercise to the reader.
Back when Lucene was at version 2.4 I wrote my first blog post titled “Lucene 2.4 in 60 Seconds.” A more accurate title would have been “Lucene 2.4 in 60 Seconds or More.” Likewise, a more accurate title for this post would be “The Essential Essentials of Text Search and Indexing with Lucene 4” but that’s just not very snappy. For more on the essentials of search and indexing with Lucene, please check out Chapter Seven of Text Processing in Java, (and all the other chapters as well).
To cover all of Lucene would take an entire book, and there are many good ones out there. My favorite is Lucene in Action, Second Edition, which, alas, only covers version 3. | https://lingpipe-blog.com/tag/apache-lucene/ | CC-MAIN-2020-05 | refinedweb | 3,433 | 56.05 |
$259.00.
Premium members get this course for $99.99.
Premium members get this course for $159.20.
Premium members get this course for $151.20.
Premium members get this course for $329.00.
better give,
set CLASSPATH=.;c:\jdk1.1.6\li
in the autoexec.bat file and save it and run the autoexec.bat file.
And check whether the .class file is available in the same directory which you working currently.
Now try
java root
Still it is not working,then try
java -classpath c:\temp root
assume that you are working temp directory which contains the root.class file.Now it will work.The option
-classpath allows us to specify where to find class for JVM.
I think this will work out.
will create myClass.class
SET CLASSPATH=.;%CLASSPATH%
java myClass
will execute that class
While compiling you have to give
javac name.java,
and while running the program you have to give
java name,
not java name.java.
The other thing is,in the name.java program suppose you have a class with different name like this:
name.java
class hello
{
public static void main(String []args)
{
System.out.println("hello"
}
}
In this case compile the program with
javac name.java which will create hello.class.And if u want to run the program give java hello,it will work out.
But the above case wont work if i am declaring the class hello as public since public class should be decalared in the file which should contain the same name.And ensure that CLASSPATH is set or not.
there is a typing mistake, it should be java name.class & the error message is : can't find class name/class.
In fact, I'm in Java programming. I simply follow the example in the book. So, I'm not really understanding your answer.
Here, below is one of the sample :
class Root {
public static void main(String[] arguments) {
int number = 225;
System.out.println("The square root of "
+ number
+ " is "
+ Math.sqrt(number) );
}
}
I save it as test.java. After javac test.java, root.class created.
According the book, output is : The square root of 225 is 15.0.
But, error message of java root : can't find class root ; error message of java root.class : can't find class root/class.
Here is my autoexec.bat for your reference :
C:\SBCD\DRV\MSCDEX.EXE /D:MSCD001 /V /M:8
SET BLASTER=A220 I5 D1 H5 P330 T6
SET CTSYN=C:\WINDOWS
C:\PROGRA~1\CREATIVE\SBLIV
C:\TVLITE\chgport 2DC
PATH C:\jdk11\bin;
set CLASSPATH=.;%CLASSPATH%;
rem PATH c:\PAGEMGR\IMGFOLIO;C:\PAG
Please let me know where goes wrong?
Thank you very very much. | https://www.experts-exchange.com/questions/10254212/can't-find-class-name-class.html | CC-MAIN-2018-13 | refinedweb | 451 | 70.8 |
SPOPS::Exception - Base class for exceptions in SPOPS
# As a user use SPOPS::Exception; eval { $user->save }; if ( $@ ) { print "Error: $@", "Stack trace: ", $@->trace->as_string, "\n"; } # Get all exceptions (including from subclasses that don't override # throw()) since the stack was last cleared my @errors = SPOPS::Exception->get_stack; print "Errors found:\n"; foreach my $e ( @errors ) { print "ERROR: ", $e->message, "\n"; } # As a developer use SPOPS::Exception; my $rv = eval { $dbh->do( $sql ) }; if ( $@ ) { SPOPS::Exception->throw( $@ ); } # Use the shortcut use SPOPS::Exception qw( spops_error ); my $rv = eval { $dbh->do( $sql ) }; spops_error( $@ ) if ( $@ ); # Throw an exception that subclasses SPOPS::Exception with extra # fields my $rv = eval { $dbh->do( $sql ) }; if ( $@ ) { SPOPS::Exception::DBI->throw( $@, { sql => $sql, action => 'do' } ); } # Throw an exception with a longer message and parameters SPOPS::Exception->throw( "This is a very very very very ", "very long message, even though it ", "doesn't say too much.", { action => 'blah' } ); # Catch an exception, do some cleanup then rethrow it my $rv = eval { $object->important_spops_operation }; if ( $@ ) { my $exception = $@; close_this_resource(); close_that_resource(); SPOPS::Exception->throw( $exception ); }
This class is the base for all exceptions in SPOPS. An exception is generally used to indicate some sort of error condition rather than a situation that might normally be encountered. For instance, you would not throw an exception if you tried to
fetch() a record not in a datastore. But you would throw an exception if the query failed because the database schema was changed and the SQL statement referred to removed fields.
This module replaces
SPOPS::Error and the error handling it used. There is a backwards compatible function in place so that the variables get set in
SPOPS::Error, but this is not permanent. If you use these you should modify your code ASAP.
You can easily create new classes of exceptions if you like, see SUBCLASSING below.
throw( $message, [ $message...], [ \%params ] )
throw( $exception )
This is the main action method and normally the only one you will ever use. The most common use is to create a new exception object with the message consisting of all the parameters concatenated together.
More rarely, you can pass another exception object as the first argument. If you do we just rethrow it -- the original stack trace and all information should be maintained.
Additionally with the common use: if the last argument is a hashref we add the additional information from it to the exception, as supported. (For instance, you can write a custom exception to accept a 'module' parameter which declares which of the plugins to your accounting system generated the error.)
Once we create the exception we then call
die with the object. Before calling
die we first do the following:
\%paramsfor any parameters matching fieldnames returned by
get_fields(), and if found set the field in the object to the parameter.
package,
filename,
line,
method.
traceproperty of the object to a Devel::StackTrace object.
initialize()so that subclasses can do any object initialization/tracking they need to do. (See SUBCLASSING below.)
get_fields()
Returns a list of property names used for this class. If a subclass wants to add properties to the base exception object, the common idiom is:
my @FIELDS = qw( this that ); My::Custom::Exception->mk_accessors( @FIELDS ); sub get_fields { return ( $_[0]->SUPER::get_fields(), @FIELDS ) }
So that all fields are represented. (The
mk_accessors() method is inherited from this class, since it inherits from Class::Accessor.
creation_location
Returns a string with information about where the exception was thrown. It looks like (all on one line):
Created in [%package%] in method [%method%]; at file [%filename%] at line [%line%]
to_string
Return a stringified version of the exception object. The default is probably good enough for most exception objects -- it just returns the message the exception was created with.
However, if the class variable
ShowTrace is set to a true value in the exception class, then we also include the output of the
as_string() method on a Devel::StackTrace object.
fill_error_variables
You normally do not need to call this since it is done from
throw(). This exists only for backward compatibility with
SPOPS::Error. The exception fills up the relevant
SPOPS::Error package variables with its information.
This is the message the exception is created with -- there should be one with every exception. (It is bad form to throw an exception with no message.)
package
The package the exception was thrown from.
filename
The file the exception was thrown from.
line
The line number in
filename the exception was thrown from.
method
The subroutine the exception was thrown from.
trace
Returns a Devel::StackTrace object. If you set a package variable 'ShowTrace' in your exception then the output of
to_string() (along with the stringification output) will include the stack trace output as well as the message.
This output may produce redundant messages in the default
to_string() method -- just override the method in your exception class if you want to create your own output.
It is very easy to create your own SPOPS or application errors:
package My::Custom::Exception; use strict; use base qw( SPOPS::Exception );
Easy! If you want to include different information that can be passed via
new():
package My::Custom::Exception; use strict; use base qw( SPOPS::Exception ); my @FIELDS = qw( this that ); My::Custom::Exception->mk_accessors( @FIELDS ); sub get_fields { return ( $_[0]->SUPER::get_fields(), @FIELDS ) }
And now your custom exception can take extra parameters:
My::Custom::Exception->throw( $@, { this => 'bermuda shorts', that => 'teva sandals' });
If you want to do extra initialization, data checking or whatnot, just create a method
initialize(). It gets called just before the
die is called in
throw(). Example:
package My::Custom::Exception; # ... as above my $COUNT = 0; sub initialize { my ( $self, $params ) = @_; $COUNT++; if ( $COUNT > 5 ) { $self->message( $self->message . "-- More than five errors?! ($COUNT) Whattsamatta?" ); } }
None known.
Nothing known.
Exception::Class for lots of good ideas -- once we get rid of backwards compatibility we will probably switch to using this as a base class.
This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
Chris Winters <chris@cwinters.com> | http://search.cpan.org/~cwinters/SPOPS/SPOPS/Exception.pm | CC-MAIN-2015-48 | refinedweb | 1,009 | 52.29 |
A popular way to display notifications within a mobile app is through Toast notifications. Previously I demonstrated how to display these notifications using Ionic Framework 1, but with Ionic 2 being all the rage, I figured it would make sense to demonstrate how to do this again.
iOS has no true concept of a Toast notification like Android does, but using the great plugin by Eddy Verbruggen, we can make it possible in iOS. This is the same plugin we make use of in the Ionic Framework 1 tutorial.
Lets create a fresh Ionic 2 project using the Command Prompt (Windows) or Terminal (Mac and Linux):
ionic start ExampleProject blank --v2 cd ExampleProject ionic platform add ios ionic platform add android
Two important things to note here. You cannot add and build for the iOS platform unless you’re using a Mac. You must also be using the Ionic CLI that supports building Ionic 2 applications.
The goal here is to make notifications that look like the following:
This project will be using the Apache Cordova Toast plugin by Eddy Verbruggen. To install it, execute the following from the Command Prompt or Terminal:
ionic plugin add cordova-plugin-x-toast
We can start coding now. To keep things simple we’re just going to have a single screen with three buttons. Each button will display the Toast notification in a different part of the screen.
Let’s start by opening the project’s app/pages/home/home.ts and changing it to look like the following:
import {Component} from '@angular/core'; import {NavController, Platform} from 'ionic-angular'; declare var window: any; @Component({ templateUrl: 'build/pages/home/home.html' }) export class HomePage { constructor(private navCtrl: NavController, private platform: Platform) { } showToast(message, position) { this.platform.ready().then(() => { window.plugins.toast.show(message, "short", position); }); } }
A few important things to note here. First we need to include the
Platform dependency and set it in the constructor function.
Since we’re using the vanilla Apache Cordova plugin, it won’t come with TypeScript type definitions. This means we’ll get compiler errors at some point. To get past this, we can add the following line:
declare var window: any;
Then we create a
showToast function that accepts a message and screen position parameter. Because we’re using native plugins we need to make sure the application is ready before trying to use. This is done by making use of the
this.platform.ready().
When the application is ready, we can use the Toast plugin as defined in the official plugin README file.
With the logic file complete, lets shift our sight to the app/pages/home/home.html file for UI. Open that file and change it to look like the following:
<ion-header> <ion-navbar> <ion-title> Ionic Toast Project </ion-title> </ion-navbar> </ion-header> <ion-content padding> <button (click)="showToast('Hello World', 'top')">Show Top</button> <button (click)="showToast('Hello World', 'center')">Show Middle</button> <button (click)="showToast('Hello World', 'bottom')">Show Bottom</button> </ion-content>
It is a simple UI with just three buttons. Nothing fancy at all.
Now let’s say we didn’t want to use the vanilla Apache Cordova plugin. We have an option to use Ionic Native in our project instead. To include Ionic Native, import it like so:
import {Toast} from 'ionic-native';
Finally we can update the
showToast method to reflect appropriately:
showToast(message, position) { Toast.show(message, "short", position).subscribe( toast => { console.log(toast); } ); }
Notice we are now creating the Toast message a bit differently. It is just two different ways you can do things with Ionic 2.
Using the Apache Cordova Toast plugin by Eddy Verbruggen, we were able to show Toast notifications in our Ionic 2 Android and iOS mobile application exactly as we did in Ionic Framework 1. We didn’t do anything special to use the actual plugin, it was more just setup using Angular and the new framework version.
A video version of this article can be seen below. | https://www.thepolyglotdeveloper.com/2016/01/show-native-toast-notifications-in-an-ionic-2-mobile-app/ | CC-MAIN-2018-51 | refinedweb | 671 | 54.42 |
Mar 20, 2012 02:50 PM|dsmyth|LINK
Hi Everyone, really need your help.
I'm running through some basic web page development and have came across something and I don't know how to proceed.
I have two Actions; one that returns a Customer object as JSON, used in a jQuery Ajax call, and an Action that should accept a Customer object to save it. Here is the Customer object (model).
public class Customer { public string Name { get; set; } public Address Address { get; set; } } public class Address { public string Street { get; set; } }
Basic stuff.
Here now are the Actions.
public JsonResult GetCustomer() { Customer customer = new Customer { Name = "Microsoft", Address = new Address { Street = "Redmond" } }; return Json(customer, JsonRequestBehavior.AllowGet); } public void SaveCustomer(Customer customer) { // intentionally left blank }
In the HTML page I call the GetCustomer Action using jQuery.getJSON method, fired by a button click, and display the Customer in input fields.
<%@ Page Customer Records </asp:Content> <asp:Content <script type="text/javascript"> $(document).ready( function() { $('#buttonGet').click(click); }); function click() { $.getJSON("Customer/GetCustomer", null, display); } function display(customer) { $('#textName').val(customer.Name); $('#textStreet').val(customer.Address.Street); } </script> </asp:Content> <asp:Content <input type="text" id="textName" /> <br /> <input type="text" id="textStreet" /> <br /> <input id="buttonGet" type="button" value="Get" /> </asp:Content>
And it all works nice. Now I want to edit the details and submit a jQuery.post call to update the record but the problem is the customer object no longer exists on the client side and it were, as it could be made to, it would be out of date as all edits exist only in the input controls.
Question is how can I take the edits made in the input controls and use them to re-create a Customer object that can then be sent to the SaveCustomer event?
Is there data binding in ASP.MVC?
Any advice? I'm a bit new to this approach to web development and I'm a bit out of my comfort zone; which is awesome.
p.s I know that the customer object can be created using code similar to this
customer {
name = $('#textName').val;
}
and that's fine for this example but what if the Customer object was much more complex; I'd rather not take this approach as eventually I will be working with a very complex object.
Mar 20, 2012 02:56 PM|thinkrajesh|LINK
An example is extracted from my task management application.... the point of interest here is JSON.stringify(object) method which converts a json object into string.
Call ajax from your button click and replace task with customer object.
// Build the JSON object to pass to the controller action var task = { Title: $("#Title").val(), Description: $("#Description").val(), Owner: $("#Owner").val() }; $.ajax({ url: "/task/updatetask", type: "POST", data: JSON.stringify(task), dataType: "json", contentType: "application/json; charset=utf-8", success: function (e) { $("#message").html("Success"); }, error: function (xhr, status, error) { // Show the error $('#message').html(xhr.responseText); } });
Mar 20, 2012 02:59 PM|BrockAllen|LINK
One approach would be to just build the same shaped object in JavaScript, populate it from your input fields, serialize it to JSON and then send it to the server with Ajax.
var name = $("#textName").val();
var street = $("#textStreet").val();
var object = {Name: name, Address : {Street: street }};
$.ajax({
url: 'YourController/SaveCustomer' // server endpoint
type : "POST", // POST HTTP verb since you're doing an update
data : JSON.stringify(object), // serialized JSON -- this should map onto your Customer object with model binding in MVC
contentType:'application/json', // tell the server that you're sedning JSON
dataType : "json", // this is the expected format of the response body
success : function(result) { // success callback -- result will be the deserailized JSON object from the response body
}
});
Mar 20, 2012 03:04 PM|dsmyth|LINK
Hi thinkrajesh,
Thanks for taking the time to reply.
You replied as I updated the original post. I was aware of that approach but the reason I want an alternative is the Customer object could be much more complex.
In fact the idea was to use what I learned from this exercise and apply it to a very very complex object; and I don't really want to build the complex object on the client this way if I can help it.
A lot of code that needs to change if the objects design changes.
I'm kind of looking for a data binding approach where the Customer object would exist on the client and bound to the controls; edits on the controls would update the original object... and then that object could used with the ajax post code above really easy.
Any ideas how that could be done?
Mar 20, 2012 03:07 PM|BrockAllen|LINK
You can simply accept a different object in your Save method -- the only requirement is that the submitted JSON match the shape (as in duck typing).
Mar 20, 2012 03:08 PM|dsmyth|LINK!
Mar 20, 2012 03:12 PM|thinkrajesh|LINK
@dsmyth
-------------
Yes there are alternatives, but it may get too complex for this scenario..
NOTE: The below code is just for reference and you may ignore and wait for other better responses....
Checkout (this has two way binding between view and js model)...
An another easier approach would be to write a simple function which binds the information from form automatically....
like...
(extracted from my sample app)
function bindFormData(formId, jData) { debug(""); $.each(jData, function (fieldName, val) { debug("Field : " + fieldName + "=>" + val); var $formField = $("#" + fieldName); if ($formField.is(":checkbox")) { $formField.attr("checked", val); $formField.attr("value", val); } if ($formField.is(":radio")) { $formField.attr("checked", val); } if ($formField.is(":text") || $formField.is("select") || $formField.is(":hidden")) { if ($formField.attr("data-type") === "date" || $formField.hasClass("date")) { if (val !== null) { $formField.val(val.parseDate() || ""); } else { $formField.val(""); } } else { if (val === undefined || val === null) { $formField.val(""); } else { $formField.val(val); } } } }); }
This is a reusable funtion which could be bound to any json data, given the form id.
You may modify this as required....
Similarly, you can write a getObject function which gets you the required object based on formId parameters.....
I know this is getting too complicated, and I will wait for responses from others on this as well...
Mar 20, 2012 03:13 PM|BrockAllen|LINK
dsmyth!
Ok, so if you're interested and willing to take on a more sophisticated approach the consider using Knockout.js. It's basically a MVVM framework for JavaScript. It's quite cool and I think will be exactly what you're looking for.
And thx for the shout out :)
All-Star
37694 Points
Mar 20, 2012 03:18 PM|bruce (sqlwork.com)|LINK
there are two approaches.
1) mvc object binding to form elements. in the case the name of the field is path to bind to in the server object. use the html helpers ...For to produce the correct name. the use $('form').serialize() to produce a form post that bind to a complex view model.
in your case name street <input name="Address.Street" ...>
2) use a client side MVC pattern. here you use an observable object client side that is a proxy to controler json actions. backbone.js is the one in moe current MVC templates but ther are other toolkits.
Mar 20, 2012 03:19 PM|thinkrajesh|LINK
Try this
This is a form to json library, and may help avoid the tediousness of building json manually.... Do check whether this is useful to you...
Star
13415 Points
Mar 20, 2012 03:30 PM|CodeHobo|LINK
As others have pointed out, Knockout JS is a really good solution for your problem. Basically with knockout JS you would keep the customer object as a view model. Knockout automatically syncs any ui changes with the original object (pretty cool huh). It also syncs changes to the data with the ui. So you don't have to use jquery's .val, just change the value in the javascript object and the ui gets refreshed. In fact, knockout was created to solve exactly your situation where you are concerned about a large object and having to deal with tracking state/changes in complex applications with lots of javascript code.
Check out this example. It's a little more advanced, but it does illustrate how the save method simply takes the object in memory and converts it to json. In a real app you would then send that json to the controller via a jquery post
Mar 20, 2012 04:19 PM|dsmyth|LINK
Thank you everyone;
I have the dummy application working now using the '$('form').serialize();' technique. The customer data is reported back fine. It's a nice approach but, with limited experience, it might not be 100% ideal for the final solution; but it's all good.
The knockout framework might be more along the lines of what I'm thinking about; another experiment is in order. Many thanks for letting me know it exists and for the links.
A small but important victory was won today and it was thanks to all of yous. Nice one.
Hopefully I'll see everyone around. Later.
11 replies
Last post Mar 20, 2012 04:19 PM by dsmyth | http://forums.asp.net/p/1782778/4889816.aspx?Re+Ajax+POST+of+Complex+object+to+MVC+Action | CC-MAIN-2015-22 | refinedweb | 1,532 | 66.03 |
Edit the same script, index.py, like this :
def index(): form = FORM(action="show") form <= INPUT(name="city") form <= INPUT(Type="submit",value="Ok") return HTML(BODY(form)) def show(city): return city
This script defines 2 functions, index() and show(). In index() we build an HTML form, in show() we handle the data entered in the form
In function index() we first create an HTML form with the class FORM, with the parameter "action" set to "show", the name of the function that will handle the data entered by the user
Then we build this form with INPUT instances. For this we "add a child" to the FORM instance with the operator <= (think of it as a left arrow, meaning "add child") : first an input tag with name "city", then a submit tag with value "Ok". Note that the attribute "type" is written Type with an uppercase initial, to avoid confusion with the Python name "type"
When the script is called by, the form is printed in the web browser. Enter a value in the input field, then click on "Ok" : the value in the address bar of your browser will be set to..., and you will see the data you entered
How does this work? The value entered in the field "city" is passed as the argument to the function show() - the "action" attribute of the form. This function just returns this value, so the browser prints it
The name of the arguments of the function that receives the HTML form must be the same as the names in the HTML form. You can also use the usual Python syntax for unspecified arguments :
def show(**kw): return kw['city'] | http://en.m.wikibooks.org/wiki/Karrigell/Write_an_interactive_application_in_a_single_script | CC-MAIN-2014-52 | refinedweb | 281 | 61.29 |
The company I work for hired/cooperated with two other companies on some code for an embedded system. One company coded the operating system, and the other coded a driver that is used by the operating system. Due to FAA requirements, the code must be in C - no features from C++ are allowed. It is now our task to get the driver integrated with the operating system and get applications running on all of it. I don't know what kind of programmers the other companies employed, though - there are functions, structures, and global variables with single character names. Among other name conflicts, the operating system defines a function s(), and the driver has a structure s. These two things appear in thousands of lines of code, and it must all be compiled together to work. They are never both used in the same file, however. Is there some incredibly clever way of avoiding this conflict without spending what could literally be days inserting descriptive variable names in place of 's' and the like? In C++, I'd rely on some combination of namespaces and function overloading to make up for others bad coding practices, but I don't have that liberty this time. | http://cboard.cprogramming.com/c-programming/67472-name-conflicts-c.html | CC-MAIN-2015-27 | refinedweb | 204 | 59.64 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.