In this section, you will learn how to copy the contents of one file into another. The example creates a source file with the write method of the BufferedWriter class, copies it byte by byte using FileInputStream and FileOutputStream, and then prints the copy using BufferedReader.
The example given below will give you a clear idea:
import java.io.*;

public class CopyFile {
    public static void main(String[] args) throws Exception {
        // Create the source file
        BufferedWriter bf = new BufferedWriter(new FileWriter("source.txt"));
        bf.write("This string is copied from one file to another\n");
        bf.close();

        // Copy source.txt to destination.txt byte by byte
        InputStream instm = new FileInputStream(new File("source.txt"));
        OutputStream outstm = new FileOutputStream(new File("destination.txt"));
        byte[] buf = new byte[1024];
        int siz;
        while ((siz = instm.read(buf)) > 0) {
            outstm.write(buf, 0, siz);
        }
        instm.close();
        outstm.close();

        // Print the contents of the copy
        BufferedReader br = new BufferedReader(new FileReader("destination.txt"));
        String str;
        while ((str = br.readLine()) != null) {
            System.out.println(str);
        }
        br.close();
    }
}
The above code will produce the following output after execution:

This string is copied from one file to another
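Since Java 7, the byte-copying loop can also be replaced with a single call to java.nio.file.Files.copy. The sketch below is an alternative, not part of the tutorial above; the file names match the earlier example:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class CopyFileNio {
    // Copy src to dst, overwriting dst if it already exists
    public static void copy(String src, String dst) throws IOException {
        Files.copy(Paths.get(src), Paths.get(dst),
                StandardCopyOption.REPLACE_EXISTING);
    }

    public static void main(String[] args) throws IOException {
        // Create the source file, copy it, and print the copy
        Files.write(Paths.get("source.txt"),
                "This string is copied from one file to another\n".getBytes());
        copy("source.txt", "destination.txt");
        System.out.print(new String(Files.readAllBytes(Paths.get("destination.txt"))));
    }
}
```

Files.copy also handles large files without an explicit buffer, and throws a FileAlreadyExistsException if you omit REPLACE_EXISTING and the target exists.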
On Saturday, 2 of August 2008, Cedric Le Goater wrote:
> Matt Helsley wrote:
> > On Sat, 2008-08-02 at 00:58 +0200, Rafael J. Wysocki wrote:
> >> On Friday, 1 of August 2008, Matt Helsley wrote:
> >>> .
> >>>
> >>> Signed-off-by: Cedric Le Goater <clg@fr.ibm.com>
> >>> Signed-off-by: Matt Helsley <matthltc@us.ibm.com>
> >>> Acked-by: Serge E. Hallyn <serue@us.ibm.com>
> >>> Tested-by: Matt Helsley <matthltc@us.ibm.com>
> >>> ---
> >>>  include/linux/cgroup_freezer.h |   71 ++++++++
> >>>  include/linux/cgroup_subsys.h  |    6
> >>>  include/linux/freezer.h        |   16 +-
> >>>  init/Kconfig                   |    7
> >>>  kernel/Makefile                |    1
> >>>  kernel/cgroup_freezer.c        |  328 +++++++++++++++++++++++++++++++++++++++++
> >>>  6 files changed, 425 insertions(+), 4 deletions(-)
> >>>  create mode 100644 include/linux/cgroup_freezer.h
> >>>  create mode 100644 kernel/cgroup_freezer.c
> >>>
> >>> Index: linux-2.6.27-rc1-mm1/include/linux/cgroup_freezer.h
> >>> ===================================================================
> >>> --- /dev/null
> >>> +++ linux-2.6.27-rc1-mm1/include/linux/cgroup_freezer.h
> >>> @@ -0,0 +1,71 @@
> >>> +#ifndef _LINUX_CGROUP_FREEZER_H
> >>> +#define _LINUX_CGROUP_FREEZER_H
> >>> +/*
> >>> + * cgroup_freezer.h - control group freezer subsystem interface
> >>> + *
> >>> + * Copyright IBM Corporation, 2007
> >>> + *
> >>> + * Author : Cedric Le Goater <clg@fr.
> >>> + */
> >>> +
> >>> +#include <linux/cgroup.h>
> >>> +
> >>> +#ifdef CONFIG_CGROUP_FREEZER
> >>> +
> >>> +enum freezer_state {
> >>> +	STATE_RUNNING = 0,
> >>> +	STATE_FREEZING,
> >>> +	STATE_FROZEN,
> >>> +};
> >>> +
> >>> +struct freezer {
> >>> +	struct cgroup_subsys_state css;
> >>> +	enum freezer_state state;
> >>> +	spinlock_t lock; /* protects _writes_ to state */
> >>> +};
> >>> +
> >>> +static inline struct freezer *cgroup_freezer(
> >>> +		struct cgroup *cgroup)
> >>> +{
> >>> +	return container_of(
> >>> +		cgroup_subsys_state(cgroup, freezer_subsys_id),
> >>> +		struct freezer, css);
> >>> +}
> >>> +
> >>> +static inline struct freezer *task_freezer(struct task_struct *task)
> >>> +{
> >>> +	return container_of(task_subsys_state(task, freezer_subsys_id),
> >>> +			struct freezer, css);
> >>> +}
> >>> +
> >>> +static inline int cgroup_frozen(struct task_struct *task)
> >>> +{
> >>> +	struct freezer *freezer;
> >>> +	enum freezer_state state;
> >>> +
> >>> +	task_lock(task);
> >>> +	freezer = task_freezer(task);
> >>> +	state = freezer->state;
> >>> +	task_unlock(task);
> >>> +
> >>> +	return state == STATE_FROZEN;
> >>> +}
> >>> +
> >>> +#else /* !CONFIG_CGROUP_FREEZER */
> >>> +
> >>> +static inline int cgroup_frozen(struct task_struct *task)
> >>> +{
> >>> +	return 0;
> >>> +}
> >>> +
> >>> +#endif /* !CONFIG_CGROUP_FREEZER */
> >>> +
> >>> +#endif /* _LINUX_CGROUP_FREEZER_H */
> >> Hmm. I wonder if we really need a separate file for this. I'd prefer it to be
> >> in freezer.h, unless there's a good reason not to place it in there.
> >
> > Yeah, it's a pretty small header so combining it with another header
> > would be nice. However if we combine it with freezer.h we'd be including
> > cgroup.h in unrelated filesystem code. An alternative might be to put it
> > into a cgroup header for "small" subsystems (which might just be
> > cgroup.h for now..).
> >
> > Thanks for the review!
>
> I'm not sure the inline is really useful. In that case, we could probably do
> something like the following :
>
> include/linux/freezer.h :
>
> #ifdef CONFIG_CGROUP_FREEZER
>
> extern int cgroup_frozen(struct task_struct *task);
>
> #else /* !CONFIG_CGROUP_FREEZER */
>
> static inline int cgroup_frozen(struct task_struct *task)
> {
> 	return 0;
> }
>
> #endif /* !CONFIG_CGROUP_FREEZER */
>
> and in kernel/cgroup_freezer.c:
>
> int cgroup_frozen(struct task_struct *task)
> {
> 	struct freezer *freezer;
> 	enum freezer_state state;
>
> 	task_lock(task);
> 	freezer = task_freezer(task);
> 	state = freezer->state;
> 	task_unlock(task);
>
> 	return state == STATE_FROZEN;
> }
>
> and kill include/linux/cgroup_freezer.h ?

I'd prefer this.

Thanks,
Rafael
I feel that you are not orthogonalizing the desired feature into a set of fully independent orthogonal primitives.

Compression is one example of this. Compression is a fully orthogonal issue. Why tie it to any other functionality?

Placing the filesystem into user libraries exo-kernel style is another orthogonal issue. I don't want to start an exo-kernel implementation right now, especially not without doing it completely and systematically for all of the filesystem.

Transferring directories and virtual files can be solved in several simple and effective ways, we should pick one, and systematically implement it for all virtual files. I like the one in which when you transfer files you access a special view of the FS: /filters-off-and-portable-format-only-visible/pathname/foo (or rather some such equivalent short named thing) to do the transfer. Tar can use this too. Someday we ought to use it for symlinks and file holes also (they are just virtual files too.)

I think that there should be only one interface to the set primitive, and that streams are hideous for at least the reason that they create a second kind of set with a second kind of interface.

What I understand here is that you are proposing a second kind of set with a second kind of interface, so that albod ignorant programs can transfer an albod without having to understand it. The problem that leaves unsolved is that albod ignorant programs cannot access its components because they don't understand it, and that a second interface is per se bad. I think you feel that albod ignorant programs don't need to access it. Here I think you are wrong. Richard correctly pointed out that there is no reason to change emacs to handle any of this, it can all be done by the FS. He is right, so long as we keep things clean.

You would have us make it feasible to ignorantly transfer a directory/albod, at the cost of us not ignorantly using a directory/albod. I think the use of views makes it feasible to get both.

Finally, I don't see any need for the word application anywhere in the name of any of these orthogonal primitives we are creating. What we are creating has no need to be more application specific than a directory is application specific. But then, maybe you are one of those folks who thinks it is clear what is OS and what is application. I am not one.

In summary, each of the following has merit independent of the others:

overloading names so that directories and files can have the same names

filters to convert directories into various formats, with rdf as one default so that they may be edited as flat files, and tar as another so that they may be transferred. Note that, as Richard pointed out, the format you want to use for editing with emacs is probably not the format you want to use for transferring. The format for editing is especially not tar. With filters come bundled some convention for inserting certain standard filters into the namespace. (dirname/..rdf, and dirname/..tar, and dirname/..cat might be good. Note that dirname/..rdf supports write, but dirname/..cat is a read-only file. Actually, if I am the filter author, or somebody I pay is, then only dirname/..rdf would have support for write, because it would be more work to do more, and I am lazy/cheap. Writing dirname/..rdf would allow specifying a non-random ordering of the elements in the directory, otherwise reading dirname/..rdf defaults to whatever order is convenient to the FS).

file body inheritance

stat data inheritance

symlinks (already implemented)

Note that I am unsure whether filters should also be used to implement file body inheritance, stat data inheritance, and symlinks, and feel vulnerable to good arguments on the topic.

Together, they would allow all the functionality afforded by streams that Jeremy needs. Yet because they are kept fully orthogonal, they will be useful for much more than applications asking for streams functionality.
Especially compression.

Now I think it is time to get this implemented. I am going to try to hire somebody or somebodies for this this week. As it gets implemented a lot of the details will fall into place, I don't really want to design this more before I have a person assigned to do the work. If Acy does filters, I'll focus my guys on the other features, otherwise we'll implement filters too.

Hans

tytso@mit.edu writes:
>
> .)
>
> Requirements analysis
> =====================
>
> So, let's try this as an exercise. Since no one has actually bothered
> to write down a list of requirements before galloping off to a solution,
> let me try to offer some:
>
> 1) "Common" file manipulation operations should treat an "application
> logical bundle of data" (albod) as if it were a single file. (Forgive
> me for inventing a new acronym here, but "application logical bundle of
> data" is too long to type each time, and I don't want to bias people's
> thinking about how it is actually implemented.)
>
> 2) Applications should be able to quickly and efficiently manipulate
> (read, modify, replace, delete, etc.) individual streams of data within
> an albod. This should be done without the file bloat and inefficiencies
> found in MS Office 97 format files.
>
> 3) There should be standard file streams inside the albod whose
> semantics and data format are standardized, so that programs such as
> graphical file managers can determine basic information about an albod,
> such as which icon to use, who created it, which application should be
> invoked when the albod is activated, etc. quickly and easily. (Using
> file(1) on a data file to determine which application can interpret it
> is considered barbaric.)
>
> 4) It should be easy to send these albod's across standard Internet
> protocols using standard, commonly available tools (ftp, http, rcp, scp,
> etc.).
>
> Am I missing any other requirements?
>
> Other solutions
> ===============
>
> ).
>
> My proposed straw-man proposal
> ==============================
>
> .
>
> Problems with this approach
> ===========================
>
> !!
>
> Summary
> =======
>
> .
>
> - Ted
#
# Copyright (c) 2008-2009 Apple Inc. All Rights Reserved.
#

# General project info
Project = libpng
AEP_Version = 1.4.3

# License file must be empty as this is a subproject.
AEP_LicenseFile =
AEP_Patches =

# Local targets that must be defined before including the following
# files to get the dependency order correct
.PHONY: $(GnuAfterInstall)

ifneq ($(_OS_VERSION),10.6)
# Since Environment is passed to BOTH configure AND make (overriding what may have
# been defined by configure), clear the default Environment since it was already saved
# for use with configure by defining Extra_Configure_Environment.
Environment =
endif

install-macosx:
	@echo "Reorganizing install for Mac OS X..."
	$(RMDIR) $(DSTROOT)/usr/local/share
	$(RMDIR) $(DSTROOT)/usr/local/bin
03 September 2012 04:32 [Source: ICIS news]
SINGAPORE (ICIS)--China National Offshore Oil Corp (CNOOC) started construction of a 4m tonne/year liquefied natural gas (LNG) terminal at Shenzhen in south China's Guangdong province.
The terminal will come on stream in 2015, CNOOC said in a statement.
Four LNG tanks – each with a storage capacity of 160,000 cubic metres (cbm) – and a berth with an 80,000-266,000cbm capacity will also be built as supporting facilities at the terminal, it said.
The entire project, which is a 70:30 joint venture between CNOOC Oil & Gas and Shenzhen Energy, will cost about yuan (CNY) 8bn ($1.3bn) to build, the company said.
Mingw
MinGW (historically, MinGW32) is a way to cross-compile Windows binaries on Linux or any other OS. It can also be used natively on Windows to avoid using Visual Studio, etc.
More information on the MinGW project can be found at MinGW.org.
MinGW32 Toolchain
Start with emerging the crossdev tool:
root #
emerge --ask sys-devel/crossdev
This article assumes you want to build a 32-bit toolchain. If you want to compile for a 64-bit target instead, replace the crossdev target i686-pc-mingw32 with x86_64-w64-mingw32.
Now with this tool, emerge the mingw32 toolchain:
root #
crossdev -t i686-pc-mingw32
You may try adding --ex-insight and/or --ex-gcc; these have not been known to build.
--ex-gdb will give you GDB and likely will work, but it is not very useful on Linux because MinGW GCC by default makes PE's (EXE files), not ELF files, and gdb has no idea how to run an EXE file on Linux. A remote debugger (with a Windows target machine) is a possibility but a long shot.
Notes about the toolchain:
- GCJ sources will not compile due to missing makespec files that do not get installed (copying them over from a Windows MinGW installation does not work either)
- OpenMP is forcefully disabled in the ebuild for the time being even if you enable it in your USE flags
Uninstallation
root #
crossdev -C i686-pc-mingw32
If files are left over (such as libraries and things you have added), you will be prompted to remove the /usr/i686-pc-mingw32 directory recursively.
Using Portage
Some things work. Most things do not. Try with USE="-*" after a failed build, then selectively add the USE flags you need. If that does not work, then you probably cannot use Portage to install the desired package for use with MinGW.
Using Portage, you may run into problems such as the following:
- Application wants GDBM (see below)
- Application wants to link with ALSA/OSS/Mesa/other library only useful to X or Linux
Emerging sys-libs/zlib:
root #
i686-pc-mingw32-emerge sys-libs/zlib
GDBM
These are "Standard GNU database libraries" according to Portage. Many libraries and applications depend on them. GDBM has compiled successfully before, but the current version in Portage does not compile. A patch is very much needed.
build.log excerpt

i686-pc-mingw32-gcc -c -I. -I. -march=k8 -msse3 -O2 -pipe gdbmfetch.c -DDLL_EXPORT -DPIC -o .libs/gdbmfetch.lo
gdbmopen.c: In function 'gdbm_open':
gdbmopen.c:171: error: storage size of 'flock' isn't known
gdbmopen.c:171: error: 'F_RDLCK' undeclared (first use in this function)
gdbmopen.c:171: error: (Each undeclared identifier is reported only once
gdbmopen.c:171: error: for each function it appears in.)
gdbmopen.c:171: error: 'F_SETLK' undeclared (first use in this function)
gdbmopen.c:177: error: storage size of 'flock' isn't known
gdbmopen.c:177: error: 'F_WRLCK' undeclared (first use in this function)
To get around this problem for the moment, try building with USE="-*".
Libraries
OpenSSL
SDL Example
Emerge SDL:
root #
i686-pc-mingw32-emerge media-libs/libsdl
Try compiling this source code (save it to test.c).
test.c
#include <SDL/SDL.h>
#include <windows.h>

void cool_wrapper(SDL_Surface **s, int flags)
{
    *s = SDL_SetVideoMode(640, 480, 32, flags);
    return;
}

int main(int argc, char *argv[])
{
    int flags;
    SDL_Surface *s;

    SDL_Init(SDL_INIT_VIDEO);

    flags = SDL_OPENGL;           /* Enable OpenGL */
    flags |= SDL_GL_DOUBLEBUFFER; /* Enable double-buffering */
    flags |= SDL_HWPALETTE;       /* Enable storing palettes in hardware */
    flags |= SDL_RESIZABLE;       /* Enable window resizing */

    cool_wrapper(&s, flags);
    Sleep(5000);

    SDL_FreeSurface(s);
    SDL_Quit();
    return 0;
}
Use the following command to build:
user $
i686-pc-mingw32-gcc -o test.exe test.c `/usr/i686-pc-mingw32/usr/bin/sdl-config --libs`
Test with Wine (requires SDL.dll to be somewhere in Wine's %PATH%, which includes the same directory as the EXE):
user $
cp /usr/i686-pc-mingw32/usr/bin/SDL.dll .
user $
wine test.exe
If you get a window named SDL_app, then it worked. The window will automatically exit after about 5 seconds (the Windows Sleep() function halts execution for 5000 milliseconds).
Porting POSIX Threads to Windows
Windows thread functions seem to work fine with MinGW. The following example code will compile without error:
win32_threads.c
#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

#define NUM_THREADS 5

DWORD print_hello(LPVOID lpdwThreadParam);

int main(int argc, char *argv[])
{
    int i;
    DWORD dw_thread_id;

    for (i = 0; i < NUM_THREADS; i++) {
        if (CreateThread(NULL, /* Default security level */
                0, /* Default stack size */
                (LPTHREAD_START_ROUTINE)&print_hello, /* Routine to execute */
                (LPVOID)&i, /* Thread parameter */
                0, /* Run immediately */
                &dw_thread_id /* Thread ID */
                ) != NULL) {
            printf("In main: Creating thread %d\n", i);
            Sleep(1000);
        } else {
            printf("Error: Failed to create thread %d\n", i);
            exit(EXIT_FAILURE);
        }
    }
    exit(EXIT_SUCCESS);
}

/* Thread routine */
DWORD print_hello(LPVOID lpdwThreadParam)
{
    printf("Thread #%d responding\n", *(int*)lpdwThreadParam);
    return 0;
}
Compile with:
user $
i686-pc-mingw32-gcc -o win32_threads.exe win32_threads.c
(The call to Sleep() will make the thread creation a little closer to POSIX, more in order, and there will not be duplicate runs.)
However, many applications rely upon POSIX threads and do not have code for Windows thread functionality. The POSIX Threads for Win32 project provides a library for using POSIX thread-like features on Windows (rather than relying upon Cygwin). It basically wraps POSIX thread functions to Win32 threading functions (pthread_create() -> CreateThread(), for example). Be aware that not everything is implemented on either end (however, do note that Chrome uses this library for threading on Windows). Regardless, many applications ported to Windows end up using POSIX Threads for Win32 because of convenience. With this library you can get the best of both worlds, as Windows thread functions work fine as shown above.
To get Pthreads for Win32:
- Go to the Sourceware FTP and download the header files to your includes directory for MinGW (for me this is /usr/i686-pc-mingw32/usr/include).
- Go to the Sourceware FTP and download only the .a files to your lib directory for MinGW (for me this is /usr/i686-pc-mingw32/usr/lib).
- At the same directory, get the DLL files (only pthreadGC2.dll and pthreadGCE2.dll; others are for Visual Studio) and place them in the bin directory of your MinGW root (for me this is /usr/i686-pc-mingw32/usr/bin).
Example POSIX threads code:
win32_posix_threads.c
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define NUM_THREADS 5

void *print_hello(void *thread_id)
{
    long tid;

    tid = (long)thread_id;
    printf("Thread #%ld responding.\n", tid);
    pthread_exit(NULL);
    return NULL;
}

int main(int argc, char *argv[])
{
    pthread_t threads[NUM_THREADS];
    pthread_attr_t attr;
    int rc, status;
    long i;

    for (i = 0; i < NUM_THREADS; i++) {
        printf("In main: creating thread %ld\n", i);
        rc = pthread_create(&threads[i], NULL, print_hello, (void *)i);
        if (rc) {
            printf("Error: return code from pthread_create() is %d\n", rc);
            exit(EXIT_FAILURE);
        }
    }

    pthread_attr_destroy(&attr);

    for (i = 0; i < NUM_THREADS; i++) {
        rc = pthread_join(threads[i], (void **)&status);
        if (rc) {
            printf("Error: return code from pthread_join() is %d\n", rc);
            exit(EXIT_FAILURE);
        }
        printf("Completed join with thread %ld, status = %d\n", i, status);
    }

    pthread_exit(NULL);
    exit(EXIT_SUCCESS);
}
Compile with:
user $
i686-pc-mingw32-gcc -o posix_threads.exe -mthreads win32_posix_threads.c -lpthreadGC2
It is VERY important that -lpthreadGC2 or -lpthreadGCE2 is at the END of the command.
With i686-pc-mingw32-objdump -p posix_threads.exe we can see that we need pthreadGC2.dll. If you linked with -lpthreadGCE2 (exception handling POSIX threads), you will need mingwm10.dll, pthreadGCE2.dll, and possibly libgcc_s_sjlj-1.dll (the last one only if you do not compile with the CFLAG -static-libgcc with g++).
Copy the DLL file(s) required to the directory and test with Wine. For example:
user $
cp /usr/i686-pc-mingw32/usr/bin/pthreadGC2.dll .
user $
wine posix_threads.exe
If all goes well, you should see output similar to the following:
In main: creating thread 0 In main: creating thread 1 Thread #0 responding. In main: creating thread 2 Thread #1 responding. In main: creating thread 3 Thread #2 responding. In main: creating thread 4 Thread #3 responding. Thread #4 responding. Completed join with thread 0, status = 0 Completed join with thread 1, status = 0 Completed join with thread 2, status = 0 Completed join with thread 3, status = 0 Completed join with thread 4, status = 0
You will probably always want to include -mthreads in your CFLAGS for any code that relies on thread-safe exception handling. From the manpage:

-mthreads: Support thread-safe exception handling on MinGW 32. Code that relies on thread-safe exception handling must compile and link all code with the -mthreads option. When compiling, -mthreads defines -D_MT; when linking, it links in a special thread helper library, -lmingwthrd, which cleans up per-thread exception handling data.
Wine and %PATH%
Like Windows, Wine supports environment variables. You may specify the path of your DLLs (for example, the MinGW bin directory) in the registry at HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Session Manager\Environment (for me this value would be Z:\usr\i686-pc-mingw32\usr\bin). I recommend against this, as you might forget to distribute DLLs with your application binaries.
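To avoid shipping an EXE without its DLLs, you can mechanically list what a binary imports by filtering the "DLL Name:" lines out of objdump output. This is only a sketch; the helper name extract_dlls is made up here, not a MinGW tool:

```shell
# Extract imported DLL names from `objdump -p` output read on stdin,
# one per line, duplicates removed.
extract_dlls() {
    awk '/DLL Name:/ { print $3 }' | sort -u
}

# Usage with the cross toolchain, e.g. to copy everything the EXE needs:
#   i686-pc-mingw32-objdump -p test.exe | extract_dlls
#   for dll in $(i686-pc-mingw32-objdump -p test.exe | extract_dlls); do
#       [ -f "/usr/i686-pc-mingw32/usr/bin/$dll" ] && \
#           cp "/usr/i686-pc-mingw32/usr/bin/$dll" .
#   done
```

System DLLs such as KERNEL32.dll will also show up in the list; those are provided by Windows (or Wine) and do not need to be shipped.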
No need for -lm
If you #include <math.h> and make use of any of its functions, there is no need to link with the standard C math library using the -lm switch with gcc or g++.
DirectX
DirectX 9 headers and libs are included. Link with -ldx9. For the math functions (such as MatrixMult), unlike Windows, you need to dynamically link with -ld3dx9d and then include d3dx9d.dll (where you get this file SHOULD be from Microsoft's SDK). This is the same for DirectX 8.
There is no support for DirectX 10 or 11 yet. Minimal support for Direct2D has been implemented via a patch (search the official mailing list of MinGW).
IN THE PREVIOUS CHAPTER, you examined the steps taken by the CLR to resolve the location of an externally referenced assembly. Here, you drill deeper into the constitution of a .NET executable host and come to understand the relationship between Win32 processes, application domains, contexts, and threads. In a nutshell, application domains (or simply, AppDomains) are logical subdivisions within a given process, which host a set of related .NET assemblies. As you will see, an application domain is further subdivided into contextual boundaries, which are used to group together like-minded .NET objects. Using the notion of context, the CLR is able to ensure that objects with special needs are handled appropriately. Once you have come to understand the relationship between processes, application domains, and contexts, the remainder of this chapter examines how the .NET platform allows you to manually spawn multiple threads of execution for use by your program within its application domain. Using the types within the System.Threading namespace, the task of creating additional threads of execution has become extremely simple (if not downright trivial). Of course, the complexity of multithreaded development is not in the creation of threads, but in ensuring that your code base is well equipped to handle concurrent access to shared resources. Given this, the chapter closes by examining various synchronization primitives that the .NET Framework provides (which you will see is somewhat richer than raw Win32 threading primitives).
The client proxy base class.
#include <clientproxy.h>
Detailed Description
The client proxy base class.
Member Function Documentation
Request capabilities of device.
Send a message asking if the device supports the given message type and subtype. If it does, the return value will be 1, and 0 otherwise.
Fresh is set to true on each new read.
It is up to the user to set it to false if the data has already been read. This is most useful when used in conjunction with the PlayerMultiClient.
Set a replace rule for this proxy on the server.
If a rule with the same pattern already exists, it will be replaced with the new rule (i.e., its setting to replace will be updated).
There are the following JPA persistence providers:
The Java Enterprise Edition (J2EE) supports JPA and supplies a persistence provider. It uses the following elements to allow persistence management in EJB 3.0.
[Note: The injection of the EntityManager supports the following artifacts]
Query: The Java Persistence API defines JPQL (the Java Persistence Query Language), which is used to select objects from a data source. A JPQL query has an internal namespace that is declared in the from clause of the query. In JPQL, we define arbitrary identifiers to refer to entities. See the following JPQL query.
Query q = em.createQuery ("SELECT s FROM Student s");
In the above query, 's' is the identifier assigned to the entity Student.
[Note: We can use the "as" keyword. This is optional and can be used when declaring an identifier in the "from" clause. Ex: SELECT s FROM Student s and SELECT s FROM Student AS s are synonymous.]
Hi Malcolm,
> What I half-implemented last night does the following:
I do not see it in CVS yet ;-)
BTW, is there any way of getting the head revision out of the SF CVS as a tarball?
> 1) Expand indexes.type to an int field
> 2) Create a declarations table containing declid (int) and declaration
> (char(255))
If you use two ints (one for the language and one for the id) each language can
have its own "namespace" (my original major, minor idea). The impact is small
if using two small ints.
>.
Given the current bindings from filename to language module, it makes sense to
put it into the files table. Your webpage example is pretty scary, but a
special language module should take care of this. Unless the page is
preprocessed in some hairy way, there's a parser going to read it at some time.
> The licence is the number one problem. Cleaning up the code so it runs
> on non-gcc platforms would be good, but it's not essential since gcc is
> so widely available.
I checked up on the Alliance toolkit. The parser is divided into two.
Behavorial and structural, and there's lot of language (like variable) not
supported. It is a parser that is almost but not quite entirely unlike a VHDL
parser...
The good news is that the VAUL parser is derived from the parser i also used,
so all I have to do is to recreate my special parser from the VAUL source and
we're in GPL.
Robin.
--
ASIC Design Engineer
Tellabs Denmark A/S
Direct: +45 4473 2942
robin.theander@...
Hi Malcom,
Malcolm Box wrote:
> I guess you've already looked at them, but a websearch did find some
> other parsers, including a Perl one at
> though it sounds *slow*.
Yup, the language definition is so huge and complex that a single small entity
with a dummy arch takes 2+ minutes to parse.
> There's a VHDL compiler at Alliance,
> which is GPL'ed and thus might have a grammer that you could rip out.
Hmm, "my" parser was actually derived from an Alliance toolkit way back. I'll
have a second look.
> And there's VAUL from which
> claims to be a flex/bison job.
I looked at the VAUL. It's a big thing and clutched togethed from several
projects. I expected the job ripping and cleaning the parser to be bigger than
starting over.
Then I found the current skeleton...
Thanks anyway.
Robin.
--
ASIC Design Engineer
Tellabs Denmark A/S
Direct: +45 4473 2942
robin.theander@...
Hi,
I guess you've already looked at them, but a websearch did find some
other parsers, including a Perl one at though it sounds *slow*.
There's a VHDL compiler at Alliance,
which is GPL'ed and thus might have a grammer that you could rip out.
And there's VAUL from which
claims to be a flex/bison job.
Malcolm
Hi Robin,
Robin Theander wrote:
>Malcolm Box wrote:
>
>>Logically the mappings should be per-language, and ideally Common.pm
>>would not depend directly on the installed languages - ie it would not
>>hold the list of mappings. The problem is that the ident script doesn't
>>know what language each of the returned identifiers is in to display the
>>correct string.
>>
>>Probably the best solution is to create another database table that maps
>>a numeric id to a string, and then have the language modules store the
>>id number where they now store a character. Then each language module
>>can contain the string <-> number mapping, and simply check on
>>initalisation that the strings are in the db, adding them if not.
>>
>
>Is'nt it too much hassle to put the strings in db? They can be in the languange
>module together with all the other language specific stuff.
>
>In my little head it goes like this:
>1) Each identifier has to know what language it is.
>
That's the rub - currently each identifier does not store this
information. It seems like something that could be stored in the
indexes table, and perhaps it would be worth it.
>2) Language types are possibly ints allocated and defined somewhere in each
>language module (or in generic.conf).
>3) Each language module defines its own type to string mapping (as ints instead
>of chars?).
>4) ident looks for the relevant language mapping in the relevant module to
>return the string.
>
This would potentially be pretty slow - assume you have a common
identifier such as "close" which might appear in many different
languages. Each ident result returned would involve creating the
correct language module and then asking it for the type -> string
mapping. You could order the results by language, to reduce the
create/destroy count, but this is unlikely to be the order people
actually want to see the results in. Given I've got a LXR install
running where one common identifier returns over 1000 declarations, I'm
not sure I'd want to take the hit of that.
What I half-implemented last night does the following:
1) Expand indexes.type to an int field
2) Create a declarations table containing declid (int) and declaration
(char(255)).
The only difficulty is initialising the mapping in (3). My current plan
is to have each language hold its own type strings (e.g. "class",
"function definition") and on startup build the mapping string -> int by
searching declarations for the string and use the declid if found, else
insert the string and use the new declid. For languages using ctags,
the initialisation would also build the appropriate ctags char -> declid
mapping.
This should be pretty fast and is a one-time cost for the language
module, which is OK because the Lang modules are only used for genxref &
source, both of which have much bigger overheads than that. Note this
doesn't require ident to know about Lang::* modules..
>This could also make identifiers local to their language, but how should that
>be handled when
>1) Searching from scratch
>2) Displaying the identifier from a link from source (here we know the
>language).
>
Making identifiers carry language info is a good idea - it will help for
the source -> ident -> source jump that happens so often, since we will
be able to filter to identifiers from the same language (and even
possibly order by whether the id is in the same file or directory, which
would make it much faster to navigate). ident would then be extended to
allow selection of a language when searching, defaulting to all
languages as at present.
>I think ident should take an optional language identifier from the URL. This
>could be generated from source.
>
Indeed, that's how I see it working.
>Did I miss out on anything here? I haven't been in every dimly lit corner of the
>code..
>
Don't go there without a light, or the grues will get you...
>And something in the far dark of my mind says that the changes probably should
>be compatible with dbm support.
>
dbm support doesn't work at the moment anyway - there was a discussion
here about dropping it totally soon if no-one is prepared to work on it.
The overhead of getting someone to set up an RDBMS is so low that I
don't see it as a big issue, not to mention the fact that dbm
performance was why 0.3 sucked so much on big repositories.
>>I think it will be OK to add a C module to the distribution, provided it
>>comes with some reasonable way to build it. My guess (correct me if I'm
>>wrong) would be that the parser is pretty much vanilla C with no
>>platform dependencies, so it should be easy to make build. I would
>>suggest creating a lib/LXR/Lang/VHDL subdir to keep the source and build
>>system in. Then those that want VHDL support can build it, and those
>>that don't can just comment out the config in lxr.conf that maps files
>>to VHDL (and in fact won't ever see a problem unless they have files
>>that look like VHDL).
>>
>
>I agree, but... The code skeleton has the following license (the files are from
>'93):
> * This file is intended not to be used for commercial purposes
> * without permission of the University of Twente and permission
> * of the University of Dortmund
>
>I'm going to contact the source to see if it can be GPL'ed or whatever.
>If you know any other GPL'ed VHDL parser that's just the yacc skeleton with
>grammar, I could hack that up instead.
>
I'd be very reluctant to let any non-free code into the main
distribution. I know glimpse isn't free, but there are moves afoot to
replace it (probably with Swish-E2) RSN. I don't know of any other VHDL
parser out there, although perhaps VHDL mode from emacs might have
something useful?
>The parser code, btw, required quite a few hacks. It was in a very old lex
>dialect and gave some trouble with both flex and gcc. It would probably need
>some more cleaning to run on non-gcc platforms. Again, the largest problem is
>probably the license.
>
The licence is the number one problem. Cleaning up the code so it runs
on non-gcc platforms would be good, but it's not essential since gcc is
so widely available.
Cheers,
Malcolm | http://sourceforge.net/p/lxr/mailman/lxr-developer/?viewmonth=200111&viewday=15&style=flat | CC-MAIN-2015-22 | en | refinedweb |
25 May 2010 07:17 [Source: ICIS news]
GUANGZHOU (ICIS news)--PetroChina subsidiary Urumqi Petrochemical is expected to start up a PX unit.
“Construction of the yuan (CNY) 3.9bn ($
Feedstock would be supplied from the firm's 6m tonne/year refinery at the same site, the source added.
“If the refinery can get enough crude to run at full load, we definitely don’t need to outsource feedstock for the PX unit. But we have short supply of crude now,” the source said, adding that the PX unit may not operate at full capacity in the initial stage.
ASF Bugzilla – Bug 41441
Error 20024 on all pages request containing a ":"
Last modified: 2014-02-17 13:43:43 UTC
Win2k3
Apache 2.2.4
PHP 5.2.0 (apache2handler)
Mediawiki 1.9.0
On every MediaWiki URL containing a ":" character, I get the following error
from Apache.
[Tue Jan 23 09:18:05 2007] [error] [client xx.xx.xx.xx] (20024)The given path
misformatted or contained invalid characters: Cannot map GET
/wiki/index.php/Special:Recentchanges HTTP/1.1 to file, referer:
Is it normal?
It's not just Mediawiki that does it, any random page with a colon in it will
cause the error. Furthermore, the url does not have to correspond to any file
or service on the computer. Error messages are as follows:
[Sat Feb 17 12:55:50 2007] [error] [client 127.0.0.1] (20024)The given path
misformatted or contained invalid characters: Cannot map GET
/wiki/Special:Specialpages HTTP/1.1 to file, referer:
[Sat Feb 17 23:39:40 2007] [error] [client 127.0.0.1] (20024)The given path
misformatted or contained invalid characters: Cannot map GET
/testing:colons:and:stuff HTTP/1.1 to file
The underlying idea was to prevent people from referencing DOS-style device
names, e.g. C:, which could create havoc.
Unfortunately, MediaWiki started out life on Linux / Unix platforms, and they
decided to use colons for namespaces.
Can we get an Apache httpd.conf option which allows us to selectively switch
off this warning, maybe according to regex matching of the URL path (or even
just the virtual path prefix) ?
While device names could be a small problem, the more treacherous problem is
NTFS data streams - as described in
These are prone to misuse.
Apache 2.2 (actually the APR test_safe_name() function) intentionally disallows
the ":" character within a URI on Windows.
Also, the Windows FindFirstFile() function will return ERROR_INVALID_NAME
instead of ERROR_FILE_NOT_FOUND for any name attempting data stream access using
the ":" character.
The choice of the ":" character as the namespace separator in MediaWiki was an
unfortunate one for use on Windows.
setup: WinXP, Apache 2.2.4
url: ":" (anything with a colon)
browser: Forbidden You don't have permission to access /: on this server.
error.log: (20024)The given path misformatted or contained invalid characters:
Cannot map GET /: HTTP/1.1 to file
these urls also cause "Forbidden":
url: " /" (any only-space path segment)
url: " ?"
url: " #"
.htaccess:
RewriteEngine On
RewriteRule ^.*$ index.html [L]
..if only the filename safety check didn't come to play before the mod_rewrite..
Crazy question, did you try a [PT] rewrite rule to knock out the offending ':'?
My fix: recompiled libapr and replaced libapr-1.dll in my Apache.
char tmpname[APR_FILE_MAX * 3 + 1];
HANDLE hFind;

if ((rv = test_safe_name(fname)) != APR_SUCCESS) {
    return APR_FROM_OS_ERROR(ERROR_FILE_NOT_FOUND); //rv;
}

hFind = FindFirstFileW(wfname, &FileInfo.w);
if (hFind == INVALID_HANDLE_VALUE)
    return APR_FROM_OS_ERROR(ERROR_FILE_NOT_FOUND); //apr_get_os_error();
FindClose(hFind);

if (unicode_to_utf8_path(tmpname, sizeof(tmpname),
                         FileInfo.w.cFileName)) {
    return APR_ENAMETOOLONG;
}
filename = apr_pstrdup(pool, tmpname);
I think the issue here is that ':' in the original URI always causes the
warning, even when the URI is eventually rewritten to another one without
the ':'.
The warning simply needs to be downgraded to a debug message (which is normally
suppressed) or removed altogether. Or there needs to be a flag to turn it off.
Windows XP, apache 2.2.4,, -- not working
((22)Invalid argument: Cannot map GET /%F0%FF HTTP/1.1 to file)
but,, works
fine
Comment 8 has nothing to do with this report; please don't confuse bugs by
layering on multiple issues.
To the commenter - it fails because httpd on Win32 requires you to specify
non-ASCII filenames in UTF-8, corresponding to the Unicode filesystem of Win32.
Confirming that problem still exists in Apache 2.2.8 (PHP 5.2.5) on Windows 2003 Server.
It would be great to have some way to turn this off so it doesn't spam the error.log....
Confirming that problem still exists in Apache 2.2.11 (PHP 5.2.8) on Windows
2003 Server.
Error log:
[Mon Jan 26 20:53:45 2009] [error] [client 10.x.x.x] (20024)The given path is misformatted or contained invalid characters: Cannot map GET /mwiki/index.php/Special:SpecialPages HTTP/1.1 to file, referer:
(In reply to comment #4)
> RewriteEngine On
> RewriteRule ^.*$ index.html [L]
>
>
> ..if only the filename safety check didn't come to play before the mod_rewrite..
>
Have you tried without .htaccess? It makes your rules run much later, where they can't suppress the "translate_name" phase in the core of Apache.
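A sketch of what that amounts to - untested, with a hypothetical path, and placed in the main server config rather than .htaccess so the rewrite runs before the core tries to map the path to a file. MediaWiki also accepts the query-string form of page names, which moves the colon out of the filesystem path entirely:

```apache
RewriteEngine On
# Turn /wiki/Special:Recentchanges into /wiki/index.php?title=Special:Recentchanges
# so the mapped path itself contains no ':' for test_safe_name() to reject.
RewriteRule ^/wiki/(.*)$ /wiki/index.php?title=$1 [PT,QSA]
```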
Windows 7
Apache 2.2.11
Same problem with the "|" character,
and it needs a third level to overcome the bug: -Forbidden -Forbidden -Norm
On Windows Vista, Apache 2.2.14, these URLs also cause "Forbidden":
Folks, this won't be addressed until httpd learns the concept of "not a file"
resource, à la contextual DocumentRoot per Location/VirtualHost. E.g. a
proxy-only namespace, or something run exclusively through a special handler.
Congratulations for choosing Win32 and we are pleased to provide that port.
However, the same porters would strongly encourage you to choose a system with
fewer potential naming-related security issues if you must work around these
issues. httpd is at the mercy of the underlying filesystem.
WRT comment #14, Zéfling, see AllowEncodedSlashes and AcceptPathInfo; we suspect
yours is a simple misconfiguration or unusual assumptions. However, in httpd
the backslash is not a pathname separator, and the file handler will not
accept it as such. As an argument to your cgi script, it is accepted. | https://bz.apache.org/bugzilla/show_bug.cgi?id=41441 | CC-MAIN-2015-22 | en | refinedweb |
Introduction
Many Maximo business objects, processes and applications record
date and time measurements. Such measurements can be utilized to compute the
total time spent by operators on particular tasks. For example, in the Service
Request application, finding the elapsed time difference between Actual Finish
and Actual Start date and time can help measure the time spent by service desk
agents in completing and closing a ticket. This computed information can then
be utilized in ad hoc reporting to gain insights into the efficiency of the
service desk.
Date and time processing with scripts
I have picked this scenario today to illustrate how Maximo
scripting can be exploited to calculate the elapsed time. My goal is to
calculate elapsed time and present it in the Service Request application user
interface formatted in an easy-to-read manner. For example, if the Actual Start
time for a service request was 9/2/04 12:04 PM and the Actual Finish time was 9/2/04
12:14 PM, then the time spent in resolving the service request is 10 minutes.
Note: For a more complete and conceptual introduction to
Maximo scripting, please follow the Maximo 7.5 scripting link provided in the
Useful Links section of this blog.
Note: In Maximo, time spent on a service request is tracked
in the form of labor transaction records. These records are generated whenever
a service desk agent starts a timer and then stops it thereby capturing time
spent. An agent may also manually enter Actual Start and Finish information
directly into the service request record. Various totals for a service request
can be viewed under different categories such as Total Labor Hours and Total
Costs. My goal with this scripting scenario is not to supplant these
capabilities. My goal is simple: Illustrate script-based time calculations
using Service Request application as the backdrop.
Ingredients to build the scenario
I use the following ingredients to put together the desired
functionality:
TIMESPENT attribute
The purpose of this attribute is to persist the computed
elapsed time in an end-user friendly form in the product database. Using
Database Configuration application, I created the persistent TIMESPENT
attribute against the TICKET business object. The data type is ALN and its
length is 20 characters. Configure the product so that this new attribute takes
effect. Notice that the attribute is inherited by the SR business object. For
the reminder of our discussion, we will work only with the SR business object.
Here is a screen capture of what one would see in the Database Configuration
application.
SR application presentation
Using the Application Designer, I open the SR application
presentation. I locate the Dates section on the Service Request tab. I add a
text box just below the Actual Finish field. I configure the text box by
binding the field with the newly created TIMESPENT attribute of the SR business
object. In the Textbox Properties dialog, I specify a label and select the
SR.TIMESPENT attribute through the Attribute look up. I then save the
presentation. Here is a screen capture of what one would see in the Application
Designer application.
Object Launchpoint
I open the Script with Object Launch Point wizard in the
Automation Script application.
In Step 1, I provide the following values:
In Step 2, I provide the following values:
In Step 3, I provide the following script:
timediff = actualfinish - actualstart
timespent = elapsedtime(msdiff)
In Step 3, I click the Create button to create the object
launch point and save the script.
Dealing with dates
To test this configuration I navigate to the Service Request
application, select a service request record that’s in RESOLVED state, has
actual start and finish date and time, edit the long description and save the
record. The following error is displayed:
BMXAA7837E - An error occured that prevented the CALCELAPSEDTIME script for the CALCELAPSEDTIME launch point from running. TypeError: unsupported operand type(s) for -: 'java.sql.Timestamp' and 'java.sql.Timestamp' in &lt;script&gt; at line number 1
I realize a number of things:
The Tpae scripting framework returns Java Date object
instances for those variables that are bound to business object attributes of
Maximo data type DATE, TIME and DATETIME. With all this information I make the
following changes to the script:
def calcelapsedtime(msdiff):
    secondinmillis = 1000
    minuteinmillis = secondinmillis * 60
    hourinmillis = minuteinmillis * 60
    dayinmillis = hourinmillis * 24
    elapseddays = msdiff / dayinmillis
    elapsedhours = msdiff / hourinmillis
    elapsedminutes = msdiff / minuteinmillis
    if(elapseddays !=0):
        return str(elapseddays) + " days"
    if(elapsedhours !=0):
        return str(elapsedhours) + " hours"
    if(elapsedminutes !=0):
        return str(elapsedminutes) + " minutes"

timediff = actualfinish.getTime() - actualstart.getTime()
timespent = calcelapsedtime(timediff)
I test the revised script by navigating to the Service
Request application and saving the updated description on the same record I
used previously. The following error is displayed:
BMXAA7837E - An error occured that prevented the
CALCELAPSEDTIME script for the CALCELAPSEDTIME launch point from running.
TypeError: unsupported operand type(s) for /: 'java.math.BigInteger' and 'int'
in <script> at line number 20
Line 20 is the invocation of the calcelapsedtime() function. Yet the
error message specifies an ‘unsupported operand type for /’ indicating the
operation being attempted is a division. The problem lies in the first division
operation occurring on line 7 of the script:
elapseddays = msdiff / dayinmillis
The parameter being passed into the calcelapsedtime()
function is not a long value but a java.math.BigInteger object. Jython does not
support automatic conversion of the Java BigInteger object to a long value. So
I revise the script by converting the BigInteger object myself to a long value
and pass the long value to the calcelapsedtime() function. Here’s the final
script:
dayinmillis = hourinmillis * 24
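Putting the described fix together, the whole calculation can be sketched in plain Python 3 (Jython 2's integer `/` floors the way Python 3's `//` does, and `int(...)` below stands in for the `longValue()` conversion; `actualstart`/`actualfinish` are the launch point variables, simulated here with epoch milliseconds):

```python
def calcelapsedtime(msdiff):
    secondinmillis = 1000
    minuteinmillis = secondinmillis * 60
    hourinmillis = minuteinmillis * 60
    dayinmillis = hourinmillis * 24
    elapseddays = msdiff // dayinmillis
    elapsedhours = msdiff // hourinmillis
    elapsedminutes = msdiff // minuteinmillis
    # Report the largest whole unit that is non-zero.
    if elapseddays != 0:
        return str(elapseddays) + " days"
    if elapsedhours != 0:
        return str(elapsedhours) + " hours"
    if elapsedminutes != 0:
        return str(elapsedminutes) + " minutes"

# In the launch point script the difference comes from the two Date objects:
#   timediff = actualfinish.getTime() - actualstart.getTime()
#   timespent = calcelapsedtime(timediff.longValue())
# Here, a 10-minute gap expressed in milliseconds:
timespent = calcelapsedtime(int(10 * 60 * 1000))   # -> "10 minutes"
```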
I test the final script against the service request record
and, this time, the elapsed time is successfully written into the Time Spent
field:
A test on a different service request record yields this
output into the Time Spent field:
Let us take a quick look at the script. There are two parts
to it:
Summary
Dates-based calculations in Maximo scripting are not
difficult to author. It is important to understand what Maximo data type you
are working with and how the script framework passes a representation to the
script. With that insight, useful scripts can be authored that deliver
application enhancements rapidly to end users. You can find a copy of the script as a ZIP file at this link.
Useful Links
Java Date class and methods
Java Timestamp class and methods
Java BigInteger class and methods
Jython operators
Jython functions
Working with Java Date objects – good introduction with examples. | https://www.ibm.com/developerworks/community/blogs/a9ba1efe-b731-4317-9724-a181d6155e3a/entry/maximo_scripting_date_dizziness_part_i22?lang=en_us | CC-MAIN-2015-22 | en | refinedweb |
Paraphrased, the bug reports an information disclosure vulnerability in Firefox that apparently caused a couple to break up after she discovered a list of web sites he was visiting while surfing the net.
It's not clear from the bug if it was an actual information disclosure bug, or if it was a user error, but the thing I found fascinating was the comments that were left in the bug:.
And:'.
The list of colorful comments goes on and on, only a couple of which have anything to do with the actual bug.
We don't tend to have anything NEARLY as interesting in our internal bug reporting databases (although there have been some quite "interesting" comments in the comments on the threads). I'm not sure if this is a good thing or not :)
BTW: To cut off any discussion about whether or not it would be a good thing for there to be a public Windows bug database, I've not made up my mind. The bottom line is that I'm not sure if public bug databases can scale to a project the size of Windows. :)
PS: Before everyone asks, yes, I did get all the emails on Wednesday along with everyone else, it was just funnier to wait for the /. article.
So the answer to my bonus question was too easy, "hippietim" figured it out in the first comment. The problem is that the loop:
for (LONG i = 0 ; i < imageCount ; i += 1)
{
    hr = images->item(CComVariant(), CComVariant(i), &image);
    if (FAILED(hr))
    {
        exit(hr);
    }
}
Leaks all the "image" object references except the last one. If you're lucky enough to be running a debug version of the runtime library, the code will assert, but that doesn't always happen.
The fix, of course is to move the "image" variable to the correct scope.
for (LONG i = 0 ; i < imageCount ; i += 1)
{
    CComPtr<IDispatch> image;
    hr = images->item(CComVariant(), CComVariant(i), &image);
    if (FAILED(hr))
    {
        exit(hr);
    }
}
Other errors pointed out in the comments (mea culpas): Several people (Aaron, Vladimir, Miral) pointed out that instead of exit()ing, I should have used throw hr;. They're right - I was mixing metaphors.
throw hr;
There's one other thing that came up in the comments, it's worth its own post (to increase visibility) so I'll post that tomorrow.
patria found the answer on the 4th comment, and I think that Mike Dimmick put it best:
Well, if you're going to use the Resource Acquisition Is Initialization idiom, use it consistently:
CComPtr (an autoptr for COM objects that auto-addref's and releases the object) uses RAII, but the code didn't consistently use RAII - instead it pretended that it wasn't using RAII. Since CComPtr never throws, it's easy to treat it as a super pointer, but Mike's right - you have to be careful about lifetime issues, and that's exactly what went wrong in this example.
The problem is that when you call CoUninitialize, you need to ensure that you've released all references to any COM objects you might hold, if you don't, the DLL that hosts the COM objects is almost certainly going to have been uninitialized.
So let's present a "fixed" version of the code, using Mike's example of adding an RAII style object to work around the lifetime issue:
class CCoInitializer
{
public:
    CCoInitializer( DWORD dwCoInit )
    {
        HRESULT hr;
        hr = CoInitializeEx( NULL, dwCoInit );
        if (FAILED(hr))
        {
            throw hr;
        }
    }
    ~CCoInitializer() throw()
    {
        CoUninitialize();
    }
};

int _tmain(int argc, _TCHAR* argv[])
{
    HRESULT hr;
    CCoInitializer coInitializer(COINIT_APARTMENTTHREADED);
    CComPtr<IHTMLDocument2> document;
    CComPtr<IHTMLElementCollection> images;
    CComPtr<IDispatch> image;
    LONG imageCount;
    hr = document.CoCreateInstance(CLSID_HTMLDocument);
    if (FAILED(hr))
    {
        exit(hr);
    }
    :
    :
    hr = document->get_images(&images);
    if (FAILED(hr))
    {
        exit(hr);
    }
    hr = images->get_length(&imageCount);
    if (FAILED(hr))
    {
        exit(hr);
    }
    for (LONG i = 0 ; i < imageCount ; i += 1)
    {
        hr = images->item(CComVariant(), CComVariant(i), &image);
        if (FAILED(hr))
        {
            exit(hr);
        }
    }
    return 0;
}
While I was fixing the code, I added a bit of additional stuff. Unfortunately, the new code introduced yet another bug.
Btw, for those playing along at home, I know that this doesn't actually work, code to load up the HTML document is in the omitted section :).
In addition, the absence of code to check for the coInitializer object throwing is NOT a bug. There's no way of recovering from this exception, and the exception handling paradigm states that if you don't know how to handle an exception, you let your caller handle it.
This may be the shortest "Bad Code" I've ever done, but it keeps on surprising me how many times I see this problem (people asked me questions about it twice in the past week).
// BadCode18.cpp : Defines the entry point for the console application.
//
#include "stdafx.h"
#include <windows.h>
#include <tchar.h>
#include <wininet.h>
#include <urlmon.h>
#include <mshtml.h>

int _tmain(int argc, _TCHAR* argv[])
{
    HRESULT hr;
    CComPtr<IHTMLDocument2> document;
    hr = CoInitialize(0);
    if (FAILED(hr))
    {
        exit(hr);
    }
    hr = document.CoCreateInstance(CLSID_HTMLDocument);
    if (FAILED(hr))
    {
        exit(hr);
    }
    CoUninitialize();
    return 0;
}
That's all it takes, I consciously chose not to add stuff to obfuscate the problem.
Btw, for those who've been reading this blog for a while, I covered this exact same issue in a different form a while ago.
I still can't quite believe it's been two years and over 500 posts (ok, it's only 501, but that's still over 500 :)). My posting rate's dropped as Vista's getting closer to shipping, I keep letting other things get in the way, but...
Some of my favorite posts (aka a trip through memory lane):
One in a million is next Tuesday - an oldie, but a goodie.
What are these "Threading Models" and why do I care? - a brief introduction to one of the most confusing aspects of COM programming.
Larry's Rules of Software Engineering #1: Every software engineer should know roughly what assembly language their code generates.
Larry's Rules of Software Engineering #2: Measuring Testers by test Metrics Doesn't. - this one made it into a book :)
Me Too! - Bedlam DL3
How do I divide fractions? - One of the first posts inspired by Valorie, which generated some of the largest number of comments.
A Parable - Another Valorie inspired post.
It was 20 years ago today - my 20th anniversary post.
What does Style Look Like - the last post in my series on programming style - it includes links to the other articles.
Concurrency - My other major series from last year, which again includes pointers to the other articles in the series.
How do you play a CD - this is the last in a series of posts I made back in April and May last year where I showed a number of different ways to play the contents of a CD.
Moving Offices - Again - It's just funny :)
Remember the Blibbet - Actually I learned the origin of this badge just yesterday.
What I did on the 4th of July - proof that Larry makes stupid mistakes.
Larry goes to Layer Court - a peek into some of the quality processes in Windows.
Early Easter Eggs and Why no Easter Eggs
Anyway, that's enough memories :)
I've enjoyed the past two years, and once again, thanks for putting up with me :)
It seems they noticed a number of events coming from the "bowser" event source, and they were convinced that it had to be a typo.
Well, it's not :)...
"Silence on the Wire" describes itself as "a Field Guide to Passive Reconnaissance and Indirect Attacks" (I know that because it's on the front cover of the book). In it, Michal discusses Information Disclosure vulnerabilities and the various ways that information can leak out from a system, even when that system is protected by a firewall. He also discusses (although not in as much detail) ways that you can mount indirect attacks against a host.
I finished it a while ago, and found it "interesting". Overall, it was a reasonably enjoyable read, but I have to be honest and say that I'm not really sure that the book actually met the discription on the cover. There were also several mysterious (to me) diversions during the course of the book.
For instance, Chapter 2 starts with a huge discussion about how von Neumann computers work, including how memory gates are assembled, etc. The end of the chapter discusses a way of using detailed timing analysis as a covert channel to detect information leaking from sensitive calculations. The hardware discussion was interesting stuff, but I'm not sure why it needed to be in a book on passive analysis (and realistically, Charles Petzold did a better job of it in his book "Code").
There are similar digressions throughout the book (although none as notable as this one).
One of my favorite portions of the book was the one with the pretty pictures ;). In it he discusses a fascinating analysis of the pseudo random number generator that's used to generate TCP/IP sequence numbers. He showed a series of pictures and some analysis for a series of operating systems, ranging from good to not so good. I do wish he had used more up-to-date operating systems in his analysis - the book was printed in 2005, but he uses examples from Mac OS 9, Win98 and NT4, and none from Win2K3 or OS X.
Some of my problems with the book are:.
Overall, I enjoyed reading the book, I found much of the information presented to be fascinating (and a bit scary).
Back in January, I wrote about the OOBE of my iRiver H10 player, and I've got another horrid first run story today.
Daniel's been pestering us to get DnD Online, and yesterday it arrived. I figured I'd install it for him (to save him the trouble).
Man, talk about hideous first run experiences. First off, the CD installed SLOWLY. Now the machine I'm running this on isn't the fastest on the planet, but I can rip CDs in way less time than the game installed. My guess is that they were decompressing data on the fly or something. But slow installations aren't a huge issue, I know how hard it can be to copy tons of data onto a machine.
My biggest complaints came when I launched the application.
It popped up a pretty splash screen, and some status text flashed on the screen about checking for web sites, etc. Then it hung. I waited for about 5 minutes and no progress, it just hung. What was worse is that the app didn't show up in the taskbar, so I had to find dndlauncher.exe in taskmgr and kill it manually.
So I restarted. This time it started and made it through the initial UI, and presented a new loader screen. The loader started downloading two versions of the client executable. That was weird, the game's only been online for 7 days and there are already 2 new versions of the client available? No big deal. One thing I noticed was that the download was SLOW - 10KB per second according to the progress meter. Looking at the network traffic in taskmgr, it wasn't receiving any data, the client was just slow.
And then it hung downloading the client executable. This time it DID have an entry in the taskbar, but I couldn't right click on it to stop it, I had to go back to the task manager to kill it.
Third try, this time it got through downloading the client programs, and it started patching game data. There were 50(!) patches available for the game. Again, this is a game that's been online for all of 7 days, and there were ALREADY 50 patches for game data? And once again, the launcher hung downloading the patches. And I'm still getting 10KB/second download speeds.
Fourth try (I'm getting pretty annoyed at this point), and it starts downloading more patches. This time, the patches came in quickly - 75KB/second. My guess is that their load balancing solution on their patch servers doesn't work, and some of the patch machines were overloaded.
And again the game hung after downloading all the patches.
The game also installs a notification area icon, this time I clicked on it. A menu flashed on the screen really quickly, and then disappeared. So back to taskmgr to kill the launcher app.
On the 5th time, I was finally allowed to log in and start the game, but still.... 4 hangs of the client app that required taskmgr intervention to recover? 10KB/sec download speeds?
And then there's the notification area icon. By default, the game installs itself into the notification area, and it's set to download game patches every 4 hours.
Every 4 hours? They patch this game frequently enough that you need to check for patches EVERY FOUR HOURS?!!
Mindboggling.
I've not played the game beyond racing through the character creation mechanism, this is Daniel's game to play, I have absolutely no opinions about the relative quality of the game (although it seemed to be very pretty for the 2 minutes I played it)
I know this is a major new game in its first week or so of retail release, so it's expected that things may be overloaded - there were 10 new characters in the entry area when I logged in, so the game servers are clearly being hammered, but still...
So I've talked a bit about some of the details of the Vista audio architecture, but I figure a picture's worth a bunch of text, so here's a simple version of the audio architecture:
This picture is for "shared" mode, I'll talk about exclusive mode in a future post.
The picture looks complicated, but in reality it isn't. There are a boatload of new constructs to discuss here, so bear with me a bit.
The flow of audio samples through the audio engine is represented by the arrows - data flows from the application, to the right in this example.
The first thing to notice is that once the audio leaves the application, it flows through a very simple graph - the topology is quite straightforward, but it's a graph nonetheless, and I tend to refer to samples as moving through the graph.
Starting from the left, the audio system introduces the concept of an "audio session". An audio session is essentially a container for audio streams; in general there is only one session per process, although this isn't strictly true.
Next, we have the application that's playing audio. The application (using WASAPI) renders audio to a "Cross Process Transport". The CPT's job is to get the audio samples to the audio engine running in the Windows Audio service.
In general, the terminal nodes in the graph are transports. There are three transports that ship with Vista: the cross process transport I mentioned above, a "Kernel Streaming" transport (used for rendering audio to a local audio adapter), and an "RDP Transport" (used for rendering audio over a Remote Desktop Connection).
As the audio samples flow from the cross process transport to the kernel streaming transport, they pass through a series of Audio Processing Objects, or APOs. APOs are used to provide DSP on the audio samples - for example, the per-session volume and metering processing in the Vista engine is implemented as APOs.
All of the code above runs in user mode except for the audio driver at the very end.
At some point, about 5 or 6 years ago, Valorie decided that it was time for her to go back to school to get her teaching certificate. It turns out that her college courses didn't quite meet the entrance requirements for the local schools that offer Masters in Teaching (MiT) programs, so about 6 years ago she started taking an almost full time course load at various local schools. In addition to working six to eight hours a day in our kids classroom, she also took two or three classes per semester filling in the gaps in her previous degree.
Two years ago, Valorie started in the MiT program at CityU taking a full time masters course load while continuing to work in the 5/6 classroom (this time as a teacher's aide).
She's now about 3 months away from graduation, and today she started the final major step in finally receiving her degree - today's her first day as a student teacher.
I know it's been a long hard 6 years for her, I've seen how hard she's worked achieving one of her lifelong goals.
So, if you'll excuse the potentially inappropriate paraphrase:
"So here's to you Mrs. Osterman"
Congratulations sweetheart, it's been a long road but the end is finally in sight.
This one isn't really that hideous, but I ran into this construct the other day while working on some stuff and it just flat-out annoys me :) (the code's been heavily sanitized to protect the innocent).
I almost don't even know where to begin on this one. It's three lines of code (and 2 lines of variable declarations), chock full of badness.
But the thing that really got my goat (and the thing that caused me to write this post) was the use of XOR when != would just as well. By using XOR, the author of the code guaranteed that whoever was looking at the code would have to sit and think about what the code was doing - for some reasons, the logic table for XOR isn't sitting at the front of my short-term memory.
And then there's the variable names. I don't know about y'all, but I just HATE trying to wrap my head around negative Boolean variables, especially when they're used as double negatives (!fDontDoSomething). They always make me need to think twice when I see them.
Wouldn't it have been SO much better if the code had been:
static BOOL fWasDoingSomething;
BOOL fDoSomething, fOldValue;

fDoSomething = DecideIfWeShouldDoSomething();
fOldValue = InterlockedExchange(&fWasDoingSomething, fDoSomething);
if (fOldValue != fWasDoingSomething)
{
    :
    :
}
?
Ah, I feel MUCH better now :) Venting always helps :)
There's a variant of Psychic Debugging called "Psychic Perf Analysis". It works like this:
I get an IM from Ryan, one of the perf guys.
Ryan: "Hey Larry, we just found a great perf bug that caused a 3 second slowdown in Windows boot time"
Me: "Let me guess, they were calling RegFlushKey in a service startup path."
<long pause>
Ryan: "Who told you?"
One of the things people don't realize about RegFlushKey is that it actually flushes the data that backs the registry key (doh!). Well, flushing the data means that you need to write it to disk, and the semantics of RegFlushKey ensure that the data's actually been committed - in other words, the RegFlushKey API is going to block until all the disk writes needed to ensure that the data backing the key is physically on the disk. This can take hundreds and hundreds of milliseconds.
In Ryan's case, it was complicated because the service was calling RegFlushKey from a DllMain function (Doh!) which held the loader lock, which meant that all the other services in that process were blocked, and there were other services that depended on those services, and... You get the picture, it REALLY wasn't pretty.
The documentation for RegFlushKey explicitly says that "In general, RegFlushKey rarely, if ever, need be used", and it's right.
Why did I know that this was a problem? Well, when we first deployed the new audio stack into Vista, we were blocked from RI'ing into winmain because the audio service degraded the boot time of Windows by 3/4 of a second (yes, we measure boot time performance to the millisecond, and changes that degrade the system boot performance aren't allowed in). When I looked at the perf logs of the boot process, I noticed a significant number of writes occurring during the start of the audiosrv service. I chased it down further, and realized that the writes correlated almost perfectly with some code that was modifying the registry. I dug deeper and discovered a call to RegFlushKey that we had mistakenly added. Removing the call to RegFlushKey fixed the problem.
Over that time, I've developed a bag of tricks for working with services, I mentioned one of them here. Here's another.
One of the most annoying things to have to debug is a problem that occurs during service startup. The problem is that you can't attach a debugger to the service until it's started, but if the service is failing during startup, that's hard.
It's possible to put a Sleep(10000) to cause your service startup to delay for 10 seconds (which gives you time to attach the debugger during start), that usually works, but sometimes service startup failures only happen on boot (for autostart services).
First off, before you start, you need to have a kernel debugger attached to your computer, and you need the debugging tools for windows (this gets you the command line debuggers). I'm going to assume the debuggers are installed into "C:\Debuggers", obviously you need to adjust this for your local machine.
One thing to keep in mind: As far as I know, you need to have the kernel debugger hooked up to debug service startup issues (you might be able to use ntsd.exe hooked up for remote debugging, but I'm not sure if that will work).
This of course begs the next question: "The kernel debugger? Why on earth do I need a kernel debugger when I'm debugging user mode code?". You're completely right. But in this case, you're not actually using the kernel debugger. Instead, you're running using a user mode debugger (ntsd.exe in my examples) that's running over the serial port using facilities that are enabled by the kernel debugger. It's not quite the same thing.
There are multiple reasons for using a debugger that's redirected to a kernel debugger. First off, if your service is an autostart service, it's highly likely that it starts long before a user logs on, so an interactive debugger won't really be able to debug the application. Secondly, services by default can't interact with the desktop (heck, they often run in a different TS session from the user (this is especially true in Vista, but it's also true on XP with Fast User Switching), so they CAN'T interact with the desktop). That means that when the debugger attempts to interact with the user, it flat-out can't, because the desktop is sitting in a different TS session.
There are a couple of variants of this trick, all of which should work.
Lets start with the simplest:
If your service runs with a specific binary name, you can use the Image File Execution Options registry key (documented here) to launch your executable under the debugger. The article linked shows how to launch using Visual Studio, for a service, you want to use the kernel debugger, so instead of using "devenv /debugexe" for the value, use "C:\Debuggers\NTSD.EXE -D", that will redirect the output to the kernel debugger.
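For example, the registry value would look something like this (the executable name is obviously a placeholder for your service's binary):

```
Key:   HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\<your service exe name>
Value: Debugger (REG_SZ) = "C:\Debuggers\ntsd.exe -d"
```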
Now for a somewhat more complicated version - You can ask the service controller to launch the debugger for you. This is useful if your service is a shared service, or if it lives in an executable that's used for other purposes (if you use a specific -service command line switch to launch your exe as a service, for example).
This one's almost easier than the first.
From the command line, simply type:
sc config <your service short name> binpath= "c:\debuggers\ntsd.exe -d <path to your service executable> <your service executable options>"
Now restart your service and it should pick up the change.
I suspect it's possible to use the ntsd.exe as a host process for remote debugging, I've never done that (I prefer assembly language debugging when I'm using the kernel debugger), so I don't feel comfortable describing how to set it up :(
Edit: Answered Purplet's question in the comments (answered it in the post because it was something important that I left out of the article).
Edit2: Thanks Ryan. s/audiosrv/<your service>/ | http://blogs.msdn.com/b/larryosterman/archive/2006/03.aspx?Redirected=true | CC-MAIN-2015-22 | en | refinedweb |
Introduction
Software and Hardware Requirements
Terminology
Static Interface/Static vNIC or vHBA
Dynamic Interface/Dynamic vNIC
Fixed veth Interface
Floating veth Interface
Cisco UCS P81E Capabilities
Provisioning Model for vethernet Interfaces
Communication Between Switchport and Network Adapter
Static and Dynamic Provisioning
Port-Profiles
vNIC Configuration Example
Dynamically Provisioned veth
Statically Provisioned veth
Monitoring vethernet Interfaces
Role of vPC in A-FEX
Traffic Forwarding in A-FEX
Provisioning Model for vFC Interfaces
Topology Choice
A-FEX Connectivity without vPC
The Benefit of Using vPC
vPC with FEX Straight-Through
vPC Orphan Port Considerations with Layer 2 Uplinks
SAN Connectivity with vPC and FEX Straight-Through Mode
vPC with FEX Active/Active
SAN Connectivity with FEX Active/Active
Routing Considerations
Sample Configuration Steps
Verify Licensing Requirements
Configure vPC
Configure Fabric Extenders (if needed)
Enable the A-FEX Feature
Configure All Switchports Connected to VIC-Capable Adapters
Configure VIC Adapters to Operate in A-FEX Mode
Configure Port-Profiles on Both vPC Peers
Configure the FC Connectivity to the SAN with Unified Ports
Configure the FCoE Connectivity from the A-FEX Adapter
Configure Zoning
Sample Configurations
vPC and FEX Straight-Through with Routed Access
Cisco Nexus 5500 Switch 1
Cisco Nexus 5500 Switch 2
vPC and FEX Active/Active with Routed Access
Cisco Nexus 5500 Switch 1
Cisco Nexus 5500 Switch 2
Introduction
This guide describes how to best design networks with virtualized adapters such as the Cisco Unified Computing System™ (Cisco UCS™) P81E Virtual Interface Card.
Virtualized adapters that implement the prestandard VN-Tag technology can be connected to a Cisco Nexus® 5500 Switch with Cisco® NX-OS Software Release 5.1(3)N1(1) or later and be remotely operated and configured from the switch itself (which is referred to as an adapter-fabric extender or A-FEX), as described in the Cisco NX-OS Adapter-FEX configuration documentation.
As Figure 1 illustrates, a server with a virtualized adapter can offer the operating system a number of virtual adapters (called vNICs), and with A-FEX technology they are presented to the Cisco Nexus 5500 platform as directly connected interfaces. All the switching between vNICs occurs on the upstream Cisco Nexus 5500, just as if they were interfaces of a remote linecard or fabric extender. In addition to this, all features from access control lists (ACLs) to private VLANs, quality of service (QoS), and so on, are available on the remote interfaces.
The redundancy or teaming configuration is not required on the operating system anymore since it is implemented in hardware and controlled by the Cisco Nexus 5500 platform.
The provisioning model allows the network administrator to define profiles with specific network definitions (mode access or trunk, VLAN and so on). The server administrator has the choice of how many vNICs to define and which profile to put them on.
This guide includes design recommendations for the use of the Cisco UCS P81E network adapter cards in Cisco UCS C-Series Rack-Mount Servers in conjunction with Cisco Nexus 5500 Switches and Cisco Nexus 2232PP Fabric Extenders (FEX).
At the time of this writing, the Cisco Nexus 5000 Series family includes the following products that support A-FEX technology:
● Cisco Nexus 5548P Switch
● Cisco Nexus 5548UP Switch
● Cisco Nexus 5596UP Switch
At the time of this writing, the Cisco Nexus 2000 Series includes the following product which supports A-FEX:
● Cisco Nexus 2232PP 10 Gigabit Ethernet Fabric Extender: This product has 32 1/10-Gbps Small Form-Factor Pluggable Plus (SFP+) host interfaces and eight 10-Gbps SFP+ fabric interfaces.
Software and Hardware Requirements
Adapter-FEX requires the use of the Cisco Nexus 5500 Switch with or without FEX 2232PP running NX-OS Software Release 5.1(3)N1(1) or later.
On the server side, a virtual network tag (VN-Tag)-capable network adapter is required, such as the Cisco UCS P81E Virtual Interface Card (VIC).
If using the Cisco UCS C-Series servers with the Cisco P81E, the following versions of software are required:
● For Cisco Integrated Management Controller firmware, you need a minimum of Version 1.4(1).
● For the C-Series BIOS, you need a minimum of Version 1.4(1).
● For the P81E firmware, you need a minimum of Version 1.6(1).
Terminology
In the context of this document, the following terminology applies:
● vNICs: The hardware instantiations of virtual adapter within a given network adapter.
● vethernet (veth for short): This term refers to a virtual network adapter "port" (vNIC) as seen by the controlling bridge (that is, by the Cisco Nexus 5500 Switch).
Figure 2 illustrates the association of these two elements. As you can see, the blue box representing the physical network adapter is virtualized in two different instances (vNICs) represented by the grey box. These instances are then using a VN-Tag/channel, which is specific to each wire connecting to the upstream fabric extender (or the Cisco Nexus 5500). For instance, vNIC1 may be using channel 1 on Port 0 as the primary port, and vNIC2 may be configured to use Port 1 with channel 2.
A single vNIC may be able to use both physical ports (Port 0 and Port 1) with a VN-Tag on either port. For instance, in Figure 2, vNIC1 is configured to use both Port 0 and Port 1.
The veth box on the Cisco Nexus 5500 Switch represents the instantiation of the vNIC on the Cisco Nexus 5500. All the operations performed by the network administrator, such as looking at counters, shut/no shut, and so on, are performed on the vethernet (veth) interface.
In addition to this terminology, it’s also important to distinguish between static and dynamic veth and between fixed and floating veth, as described next.
Static Interface/Static vNIC or vHBA
A static interface is an interface with parameters configured manually by the administrator. A static virtual adapter can be a virtual NIC or a virtual host adapter bus (HBA). A static interface can be a veth or a virtual Fibre Channel (vFC) interface.
For a statically created fixed and floating veth, it is possible for a network administrator to associate configuration to the veth before it is brought up. When creating a static veth, the network administrator specifies which “channel” (for simplicity, you can consider this equivalent to a VN-Tag number) a given veth uses. The server administrator must be sure to define a vNIC on the adapter with the same channel number.
Dynamic Interface/Dynamic vNIC
A dynamic interface is a veth interface that gets configured automatically as a result of adapter and switch communications. The provisioning model of a dynamic interface consists of configuring a port-profile on the Cisco Nexus 5500, which then appears on the network adapter; the server administrator completes the process by associating the port-profile with a vNIC.
A dynamic virtual adapter can be a virtual NIC but not a virtual HBA. A dynamic virtual interface can be a veth interface. Dynamic interfaces have support for hardware-based failover in the Cisco UCS P81E VIC.
Fixed veth Interface
A fixed veth interface is a virtual interface that does not support migration across physical interfaces. When talking about adapter-FEX, the scope is always on fixed veth because adapter-FEX refers to the use of network virtualization by a single (that is, nonvirtualized) operating system.
For fixed veth (static or dynamic), administrators can change configurations, including shut/no shut or create/delete, anytime. The veth-number-to-channel-number binding is persistent unless the administrator changes it.
Floating veth Interface
When the Cisco UCS P81E network adapter is used in a hypervisor environment, each vNIC on the network adapter is associated with one virtual machine (VM). VMs can migrate from one physical server to another. A virtual interface that "migrates" along with a VM and virtual network link is called a floating virtual interface.
For a floating (static or dynamic) veth, a configuration change, including shut/no shut, is allowed anytime, regardless of the attached state, except for a binding configuration. Changing the binding configuration is not allowed while a veth is attached. Binding configurations can only be changed if there is no veth associated.
Cisco UCS P81E Capabilities
The Cisco UCS P81E VIC is a PCI Express (PCIe) 2.0 x 8 10-Gbps adapter designed for use with Cisco UCS C-Series Rack-Mount Servers.
The Cisco UCS P81E can also be configured for virtual HBAs.
The total number of virtual adapters that can be provisioned on the P81E card is 112. As Figure 3 illustrates, this maximum is shared between fixed vNICs and floating vNICs as follows:
● Up to a maximum of 16 fixed vNICs (used for A-FEX purposes) (which leaves space for 96 floating vNICs).
● Up to 96 floating vNICs (used for floating veth interfaces in hypervisor environments).
The configuration of the vNICs is performed via the Cisco Integrated Management Controller (CIMC) interface on the UCS C-Series servers. The CIMC is a GUI that provides remote KVM console capabilities and the ability to power up and down the server. In addition, you use the CIMC to configure the network adapter that is installed in the server.
Please refer to the following document to access, configure, and manage the server using the CIMC.
Figure 4 shows the vNIC tab configuration with the list of vNIC adapters (up to 16). The vNIC tab refers to the adapter-FEX configuration.
Each vNIC created gets its own MAC address as shown in Figure 5.
As Figure 6 shows, each vNIC can be configured to use a specific “channel” (that is, VN-Tag) on one of the two physical ports (uplink ports - that is, the adapter ports), and it can be configured for “adapter failover,” called “Uplink Failover” in the CIMC interface.
Figure 7 illustrates the concept of adapter failover. If adapter failover is enabled, each vNIC can be associated with both adapters (0 and 1), and on each adapter, it is going to use a VN-Tag/channel (which is referred to as “A” on both ports). Hence, when adapter failover is enabled, the channel number of a given vNIC is automatically reserved on both ports: port 0 and port 1.
From an operating system perspective, adapter failover is different from regular NIC teaming in that the operating system is not controlling the failover of the adapter; instead, the network adapter card itself is controlling the failover.
The P81E card offers a number of features, including TCP offload and so on. The P81E card is also a converged network adapter, so as Figure 8 shows, there are virtual HBAs.
Provisioning Model for vethernet Interfaces
The provisioning model of A-FEX is based on the concept of port-profiles (of type vethernet). Port-profiles are configured on the Cisco Nexus 5500 Switch and communicated to the network adapter. From the network adapter management tool (CIMC), the server administrator can associate port-profiles with vNICs, which in turn triggers the creation of the veth on the Cisco Nexus 5500.
Communication Between Switchport and Network Adapter
Before veths can be created, the physical interface connected to the host must be configured in VN-Tag mode, and the Data Center Bridging Exchange (DCBX) Protocol must run between the switchport and the host.
In order to enable VN-Tag communication between the switchport and the adapter, you configure the port to operate in VN-Tag mode.
Assume, for instance, that the server P81E card connects to a FEX 2232PP. The configuration would look like this:
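A minimal sketch of this configuration (the FEX host interface number is an example only):

```
interface Ethernet101/1/1
  switchport mode vntag
```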
If the server P81E card connects directly to a Cisco Nexus 5500 Switch, the configuration would look like this:
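A minimal sketch (the interface number is an example only):

```
interface Ethernet1/10
  switchport mode vntag
```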
The network adapter must be configured for network interface virtualization (NIV) or adapter-FEX.
At this point the host starts to include the NIV capability type-length-value (TLV) fields in its DCBX advertisement. Until then, the network adapter operates in Classical Ethernet (CE) mode. While it operates in CE mode, the host has full connectivity to the network via the physical interface connected to the switch.
You can and should verify the network adapter ports mode of operation from CIMC. The “Encap” field should show “NIV” (Figure 9). If not, verify the configuration of the adapter, and if you are changing from mode CE to mode NIV, be sure to reboot the server (hence the adapter within the server).
Static and Dynamic Provisioning
veth can be either statically configured or dynamically created by the association of a port-profile with a vNIC. The preferred configuration method is the dynamic one.
In this provisioning model, there is no configuration on the switch side that represents the veth. This model depends on the port-profile information that is provided to the switch via the VIC protocol.
For ease of configuration, the switch advertises via the VIC control channel (VIF_SET of 0) the list of port-profile names configured on the switch. The NIV adapter provides a way for the server administrator to select from a list of port-profiles to be attached to each vNIC.
These are the protocol steps used by the adapter and the switch to bring up a fixed veth:
1. The NIV adapter sends a VIF_CREATE message with channel number and optional port-profile name.
2. The switch first matches the channel number against its list of static fixed veth configuration. If there is a matching channel number, that veth number is brought up.
3. If there is no static fixed veth with a matching channel number, a dynamic fixed veth is created.
4. If a port-profile name is in the VIF_CREATE message and the fixed veth is not already configured with a port‑profile in the switch configuration, the port-profile parameters within the VIF_CREATE message are configured for the veth.
Port-Profiles
Port-profiles are not a functionality that is specific to A-FEX. However, A-FEX uses a particular type of port-profile to interface with the server admin provisioning tool (CIMC in the case of the P81E adapter). Port-profiles provide a template configuration that can then be applied identically to multiple interfaces. The Ethernet port-profile by default is Layer 2 on the Cisco Nexus 5000 Series.
The following example illustrates the configuration template provided by port-profiles:
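A sketch of an Ethernet port-profile (the profile name and VLANs here are examples only):

```
port-profile type ethernet uplink-example
  switchport mode trunk
  switchport trunk allowed vlan 50,60
  state enabled
```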
Whenever you configure shut or no shut from the port-profile, this gets propagated to the port. Also, state enabled must be configured for the port-profile configuration to take effect.
Note: Read here for more information about port-profiles.
In the context of A-FEX, the port-profiles are of type vethernet (instead of type Ethernet).
An example of port-profile that is used for A-FEX is as follows:
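For instance, a vethernet port-profile along these lines matches the NIC-VLAN50 profile referenced later in this guide:

```
port-profile type vethernet NIC-VLAN50
  switchport mode access
  switchport access vlan 50
  state enabled
```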
When the user creates such a port-profile, it appears in the network adapter management tool as one of the available options for vNICs.
vNIC Configuration Example
Figure 10 illustrates the configuration choice for the P81E as a result of the creation of the above port-profile.
By default, the P81E comes with two predefined vNICs: eth0 and eth1. You can see in Figure 10 that you can select which physical port you want the eth0 interface to use by default (in this example, it's interface 0). The port-profile NIC-VLAN50 appears in the list. You can also see that the vNIC will try to negotiate using channel 1 for the veth-to-vNIC mapping.
You can also see the option “Enable Uplink Failover.” If this box is checked, the server will use port0, for instance, as the primary one, and port1 as the backup (or vice versa if you select port 1 as the “Uplink Port”). In most deployments, you would want to enable uplink failover.
After you add a vNIC, change the channel, or choose the port-profile, the server might have to be rebooted for the configuration to take effect.
When configuring multiple vNICs in a redundant topology, you may want to make sure that the vNICs alternate in using Uplink Port 0 and Uplink Port 1 in order to maximize traffic distribution across both devices that they are connected to.
Dynamically Provisioned veth
The configuration of the adapter results in the creation of a veth on the Cisco Nexus 5500. If you are using the 5500 in redundancy mode (that is, with vPC), the veth appears on both vPC peers, but the MAC address table configuration appears only on one of the two vPC peers.
For instance, the above configuration results in the following automatic configuration:
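A sketch of the kind of veth that gets created (the veth number is an example; dynamic veth numbers start above 32768, as noted below):

```
interface Vethernet32769
  inherit port-profile NIC-VLAN50
  bind interface Ethernet101/1/1 channel 1
```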
In addition, the second vNIC has been configured to use VLAN 60 with a different port-profile:
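A sketch of the corresponding configuration (the profile name matches the one used later in this guide; veth number, interface, and channel are examples only):

```
port-profile type vethernet NIC-VLAN60
  switchport mode access
  switchport access vlan 60
  state enabled

interface Vethernet32770
  inherit port-profile NIC-VLAN60
  bind interface Ethernet101/1/1 channel 2
```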
The veth numbering for dynamic fixed veth is always above 32768. This is to allow the users to configure static fixed veth in the range below without having to worry about some veth numbers having been taken by dynamic veth.
When you are using a redundant configuration (that is, when using vPC), the veth is going to appear on both vPC peer devices and it will be up on both.
The MAC address will appear only on one of the two vPC peers, as follows:
If you save the configuration ("copy run startup"), the above veth configuration is saved, and upon reboot of the Cisco Nexus 5500, the association of the veth to the interface and the channel will already be present.
Statically Provisioned veth
In the static fixed model, the administrator of the switch needs to carefully map the veth being configured on the switch to the channel number of the vNIC that it represents. Since this requires the switch administrator to get the channel information from the server/adapter administrator and this process is prone to errors, the static configuration is considered to be less desirable.
For each static fixed vNIC provisioned in an adapter, a corresponding veth must be created and bound to a {Ethernet interface, channel-number} where Ethernet interface is the physical interface which the adapter is connected to and channel-number is the vNIC instance number provisioned in the adapter.
For instance, if we were to configure a veth for the previously configured vNIC and we were not using the port-profile, a valid configuration would look as follows:
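A sketch of such a static veth (the veth number and interface are examples only; note the number is in the static range, below 32768):

```
interface vethernet 10
  bind interface ethernet 101/1/1 channel 1
  switchport mode access
  switchport access vlan 50
  no shutdown
```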
Each channel number must be unique on a given physical interface.
Also, if the server administrator changes the channel number on the vNIC interface, the server has to be rebooted (for the adapter to be rebooted) in order for the new channel number to be applied to the configuration of the server.
Monitoring vethernet Interfaces
This section provides some useful commands to monitor the vethernet interfaces that have been provisioned on the Cisco Nexus 5500.
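For instance, commands along these lines are useful (the veth number is an example; the exact command set varies by NX-OS release):

```
show interface vethernet 32769
show interface vethernet 32769 brief
show running-config interface vethernet 32769
show interface virtual summary
show interface virtual status
```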
Role of vPC in A-FEX
Virtual PortChannels (vPCs) play an important role in A-FEX deployments because the infrastructure provided by vPC ensures uniqueness of veth numbering and synchronization between the vPC peers.
It is fundamental to understand that using vPC with A-FEX doesn’t mean that you will be constructing virtual port‑channels out of A-FEX veth interfaces. vPC provides the infrastructure to synchronize the database where the information about veth interfaces is stored.
This is useful because a vNIC configured with adapter failover can “attach” to either Cisco Nexus 5500 Switch, depending on the preferred physical port or which of the physical links is up or down. Because of this, the user must configure the same port-profiles on both switches.
The vPC infrastructure makes sure that the veth numbering for the same vNIC is identical on both 5500 switches.
In addition to this, if you configure static fixed veth and you are using a number that is not used on the local Cisco Nexus 5500, vPC will verify whether this number is present in the database. If the number is used on one of the two 5500 switches as part of the same vPC domain, the number won’t be allowed.
The vPC primary is the device that decides which veth can be brought up. The vPC secondary, upon receiving a VIF_CREATE message, verifies with the vPC primary which veth number it can use.
Traffic Forwarding in A-FEX
This section describes how traffic forwarding works in a topology that uses A-FEX technology.
With A-FEX, even if vNICs end up sharing the same physical link, they are really equivalent to two separate physical interfaces, as Figure 11 illustrates. For instance, each one of them can be shut down independently from the upstream Cisco Nexus 5500 Switch and be configured for different security policies.
To the operating system, each vNIC appears as a separate network adapter. The operating system doesn’t see whether adapter failover is enabled or not. What’s more, there is no requirement to run teaming software on the operating system because adapter failover is implemented in hardware.
In the example shown in Figure 12, the physical adapter is virtualized into two vNICs, each using both ports (uplinks):
● vNIC eth0: uses adapter 0 as the primary one (and adapter 1 as the standby) and is attached to port-profile NIC-VLAN50.
● vNIC eth1: uses adapter 1 as the primary one (and adapter 0 as the standby) and is attached to port-profile NIC-VLAN60.
Figure 13 illustrates the overall connectivity from the operating system to the upstream switches.
The operating system sees two network adapters (eth0 and eth1), which are in reality vNICs, and each one of them is using one specific physical port to connect to the upstream network infrastructure (FEX and the Cisco Nexus 5500). Each virtual network adapter also has a backup path via the other physical port, in case the primary path fails.
As Figure 14 illustrates, all traffic that is sent by a server over a vNIC is switched at the Cisco Nexus 5500 Switch layer.
Moreover, if one of the physical links fails, adapter failover will move the traffic to the remaining link without any involvement from the operating system, as depicted in Figure 15.
Provisioning Model for vFC Interfaces
With A-FEX technology, the benefits of using a network adapter such as the P81E include the possibility to carry Fibre Channel traffic onto the same interface as the LAN traffic. This is because the Cisco UCS P81E card is at the same time a network interface virtualization (NIV) adapter and a converged network adapter.
The Fibre Channel interfaces on the P81E card are called virtual HBAs. FCoE connectivity between the vHBAs and the upstream Cisco Nexus 5500 is configured by defining a virtual Fibre Channel interface. In contrast to veth interfaces, vFC interfaces are configured with static binding; there’s no dynamic binding configuration possible with vFCs.
The configuration is achieved by manually associating a veth on the Cisco Nexus 5500 Switch with the vHBA channel and then binding the vFC to the veth, as described in the code sample and in Figure 16. From the CIMC, you can check the channel number used by the vHBAs (Figure 16). You should also ensure that each vHBA uses a different "uplink" - that is, a different network adapter port.
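A sketch of this FCoE binding (veth and vfc numbers, the interface, the channel, and the VLAN-to-VSAN mapping are examples only; note that the veth is a trunk, for the reasons explained below):

```
vlan 11
  fcoe vsan 11

interface vethernet 100
  switchport mode trunk
  switchport trunk allowed vlan 1,11
  bind interface ethernet 101/1/3 channel 3
  no shutdown

interface vfc 11
  bind interface vethernet 100
  no shutdown

vsan database
  vsan 11 interface vfc 11
```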
If you don't know which of the adapter ports is port 0 and which is port 1, or which FEX or Cisco Nexus 5500 Switch each connects to, you may want to define the veth interface first with one channel and then with the second one. If the veth doesn't come up in the first few seconds, you know that you may have to use the other channel number.
Notice that the veth used for FCoE must be a trunk because of the way FCoE works. Initially in the FCoE Initialization Protocol (FIP) VLAN discovery, FCoE uses the native VLAN, but subsequently FCoE uses the VLAN associated with the VSAN. Because of this, you want to configure the veth to be a trunk and to carry specifically the VLAN used for VSAN purposes as well as the native VLAN.
Figure 17 illustrates how FCoE connectivity works in an adapter-FEX environment. Each adapter has two virtual HBAs: fc0 and fc1, which are mapped to one of the two physical ports (port 0 and 1). The veth binds to the FEX port and to the channel that is used by the adapter, and the vFC binds to the veth. From the operating system perspective, there are two Fibre Channel storage adapters.
The vFC interfaces on the Cisco Nexus 5500 are equivalent to regular Fibre Channel interfaces. So they can be shut down and this will shut down the vHBA on the server without affecting the virtual NICs. Also all properties of zoning - and more properties in general of Fibre Channel fabric configurations - apply to vFCs just as they apply to physical interfaces.
Figure 18 (taken from Cisco Fabric Manager) illustrates how the Cisco Nexus 5500 and the vHBAs appear from a SAN management perspective:
As you can see the vHBA (Node World Wide Name 10:00:58:8d:09:0e:f8:5d) appears as if it is directly connected to the Cisco Nexus 5500, even if it is just a virtual interface in the P81E card.
In Figure 18, the zone defined for the storage access is highlighted in yellow. The World Wide Name (WWN) of the vHBA can easily be found using the normal Fibre Channel commands: show flogi database vsan 11 where vfc11 behaves exactly as any regular Fibre Channel port:
And the zone associated with it is configured as it would be with normal zoning with regular HBAs:
From an operating system perspective, the logical unit number (LUN) targets appear as if they were connected via regular HBAs. For instance, in the case of a Windows 2008 server, the disk management utility would see the screens shown in Figure 19:
The LUN is visible via two different paths, and as a result you may see a warning message from the operating system indicating that this LUN is reachable via multiple paths (which is why the disk shows as offline in this screen capture).
A look at the disk array (Figure 20) confirms that this is the correct LUN (check the LUN number and compare with the properties from the disk manager):
Multipath I/O software would operate exactly in the same way as with physical HBAs.
Topology Choice
Servers equipped with an NIV-capable adapter can connect to a Cisco Nexus 5500 system in different ways:
● Directly to the Cisco Nexus 5500 Switch, which is logically equivalent to connecting to a Fabric Extender 2232 in Straight-Through mode (hence this case is not covered).
● To FEX 2232 in FEX Straight-Through mode without vPC. In this topology, each fabric extender module is single-attached to a Cisco Nexus 5500 Switch.
● To FEX 2232 in FEX Straight-Through mode with vPC. In this topology, each fabric extender module is single-attached to a Cisco Nexus 5500 Switch, and the Cisco Nexus 5500 is configured for vPC.
● To FEX 2232 in FEX Active/Active mode with vPC. In this topology, each fabric extender is dual-connected to Cisco Nexus 5500 Switches.
All the topologies - Straight-Through as well as Active/Active - are able to implement a dual fabric topology for the purpose of using Fibre Channel starting from Cisco NX-OS Software Release 5.1(3)N1(1).
Topologies vary also based on the Cisco Nexus 5500 Switch’s connectivity to the aggregation layer. This connectivity can be Layer 2 or Layer 3, and in the Layer 2 case it can be based on Spanning Tree Protocol, vPC, or Cisco FabricPath.
This document covers Layer 3 connectivity to the aggregation layer and vPC connectivity to the aggregation layer.
Note: For simplicity the following examples use the numbering veth1, veth2, and so on even if for dynamic veth the numbering starts from veth32768.
A-FEX Connectivity without vPC
The fabric extender can connect to the Cisco Nexus 5000 Series Switch with a single-homed topology, often referred to as a straight-through connection without vPC, as shown in Figure 21. With the single-homed topology, each fabric extender is attached to a single Cisco Nexus 5000 Series Switch.
With this topology, the network adapter issues a VIF create message from each vNIC (1 or 2) to the upstream Cisco Nexus 5500. The VIF message creates a veth on the Cisco Nexus 5500 Switches along both the active and the standby path. The port-profile that is used by vNIC1 in this example is port-profile A, and the one used by vNIC2 is port-profile B. Because of redundancy, you need to make sure that both port-profiles are present on both Cisco Nexus 5500 Switches.
With this model, there is no synchronization of the veth namespace, and the failover of MAC addresses would not be automatically triggered but would instead require waiting for the server to send traffic.
For instance, the vNIC1 may be normally active on the left Cisco Nexus 5500. As a result, the MAC table on the left switch would look like this:
No MAC address entry would appear on the other Cisco Nexus 5500 for this vNIC.
In case of failure, the vNIC would start using the alternate path, as depicted in the example shown in Figure 22:
When this happens, there’s no automatic reprogramming of the MAC address table. For veth5, traffic flow triggers the appearance of the vNIC MAC in the MAC table. Notice that the upstream Layer 2 domain must provide Layer 2 adjacency between the vNICs ports - that is, between port 0 and port 1.
The Benefit of Using vPC
When using vPC in the context of A-FEX, there is no requirement to use the vPC feature to create port-channels. In the context of A-FEX, vPC is used to synchronize the veth database and to optimize the failover behavior of vNICs.
The vPC configuration allows synching the veth numbers to make sure that the same vNIC uses the same veth numbers on both the switches. For instance, in the previous example, vNIC1 was creating two veths on the Cisco Nexus 5500, veth1 and veth5. With vPC, the vNIC would appear to both switches with the same veth number.
Using vPC also allows syncing the MAC addresses on both the switches, thereby making sure that a failover will cause the switches to still populate the MAC on the new active rather than wait for the new active to send traffic to learn those MACs.
vPC topologies that involve FEX are categorized as:
● FEX Straight-Through
● FEX Active/Active
vPC with FEX Straight-Through
Figure 23 illustrates the characteristics of an adapter-FEX topology with FEX Straight-Through mode. The main difference from the non-vPC topology is the configuration of the peer-link and the presence of the vPC domain configuration.
With a vPC design, you would still need to configure the port-profiles type vethernet on both Cisco Nexus 5500 Switches, but the veth that is instantiated by a given vNIC would have the same number on both vPC peers.
The vNIC would still operate in Active/Standby mode with adapter failover. The use of vPC does not enable the vNICs of an A-FEX adapter to create a “port-channel.” For load distribution the port 0 and port 1 uplinks must be utilized by vNICs in a round-robin fashion: that is, half of the vNICs would use port 0 as the primary path, and the other half of the vNICs would use port 1 as their primary path.
The configuration on the left Cisco Nexus 5500 for the above topology would look like this:
The configuration on the right Cisco Nexus 5500 for the above topology would look like this:
The MAC address table will show the MAC address of the vNIC and the veth interface associated with it only on the 5500 where the vNIC has the active path. Should the active path fail, the MAC address would then be programmed on the other vPC peer.
vPC Orphan Port Considerations with Layer 2 Uplinks
In case that you have orphan ports connected to this topology, you need to consider the use of the command vpc orphan-ports suspend. This command applies to non-veth ports only. For veth ports the “vpc orphan-ports” feature is transparently enabled.
As Figure 24 illustrates, when the peer-link is lost, the veth will failover automatically from the vPC secondary to the vPC primary.
Servers that are single-homed to a FEX port or to the Cisco Nexus 5500 that is vPC secondary should instead be configured with the following option:
The vpc orphan-port suspend option should be configured on all ports that are considered vPC orphan ports (minus the veth interfaces) on both the vPC primary and vPC secondary. This feature ensures that when the peer-link goes down, the vPC secondary shuts down these orphan ports, thus forcing the servers to use the path via the vPC primary.
SAN Connectivity with vPC and FEX Straight-Through Mode
The configuration to support FCoE in FEX Straight-Through mode doesn’t require any particular explanation. Each Cisco Nexus 5500 Switch would be configured with a veth binding to the appropriate vHBA channel and a vFC binding to the veth.
From a SAN connectivity perspective, with FEX Straight-Through you need to define a static fixed veth on each Cisco Nexus 5500 and bind a vFC to the static fixed veth as shown in Figure 25.
The relevant configuration for Cisco Nexus 5500-1 would look like this:
The relevant configuration for Cisco Nexus 5500-2 would look like this:
vPC with FEX Active/Active
Figure 26 illustrates the topology for adapter-FEX in a FEX Active/Active topology:
The port-profiles type vethernet would have to be programmed on both vPC peers.
The FEX interfaces would have to be configured on both Cisco Nexus 5500 Switches to be in VN-Tag mode. For instance, interface Eth105/1/2 would be configured on both Cisco Nexus 5500-1 and Nexus 5500-2.
The associated configuration for this topology would look like this and would appear on both Cisco Nexus 5500 Switches:
Given that the vNIC operates in Active/Standby mode, only one of the two bindings would be active at any given time.
In order to understand which interface is binding where, you can use the command show interface vethernet <number> detail and pay attention to the status “active” or “standby”:
Differently from the FEX Straight-Through topology, the MAC address of the vNIC would be present in both Cisco Nexus 5500 MAC address tables and it will be associated with the same vethernet interface.
SAN Connectivity with FEX Active/Active
The FCoE configuration with FEX Active/Active requires some additional explanation. Each FEX in this topology is attached to both Cisco Nexus 5500 Switches, and so it is potentially connected to both SAN fabrics.
In order to ensure separation of SAN fabrics, each FEX belongs to just one of the two fabrics. This is achieved by typing fcoe under the FEX configuration on one of the two Cisco Nexus 5500 Switches (Figure 27).
For instance, in the example shown in Figure 27:
● You can assign FEX 105 to Fabric 1 via Nexus 5500-1 by typing fcoe under fex 105 in the configuration for Cisco Nexus 5500-1.
●‑2. Similarly, if Nexus 5500-2 uses VLAN 12 for FCoE, this VLAN must also be created on Nexus 5500-1.
As an example the relevant configuration on Cisco Nexus 5500-1 is as follows:
The relevant configuration on Cisco Nexus 5500-2 would look like this:
Routing Considerations
If you are using the Cisco Nexus 5500 system for routing, the default gateway for the server is going to be an SVI on the Cisco Nexus 5500. Usual best practices for vPC and SVIs apply. You will configure HSRP gateway as follows:
Do not forget to install the BASE license; otherwise routing won’t work:
In addition, you have to remember to configure the peer-gateway in the vPC: configuration as follows:
Finally, for all orphan ports (on the Cisco Nexus 5500 or on the FEX) that are not veth, you need to configure the following:
This configuration applies to both the vPC primary and the vPC secondary.
Figure 28 illustrates the fact that when the peer-link goes down, the vPC secondary brings down the SVI. This assumes that all of the above configurations are in place (license, peer-gateway, and vpc orphan-port suspend on non-veth orphan ports). As a result, all vNICs will failover to the port that leads to the vPC primary device.
Sample Configuration Steps
The following configurations illustrate how to configure Adapter-FEX on the Cisco Nexus 5500 Switch.
Verify Licensing Requirements
No A-FEX license is required for A-FEX to function.
If using FC or FCoE, make sure the appropriate license is installed.
If you are using the Layer 3 card, make sure to install the BASE license.
Configure vPC
Remember that in A-FEX configurations you need to use vPC for the purpose of synchronizing the veth database between the Cisco Nexus 5500 Switches. You don’t have to create vPC port-channels, but the vPC infrastructure still is needed for A-FEX.
Figure 29 shows the components of a Cisco Nexus 5000 Series vPC deployment.
The following list provides a summary of vPC configuration best practices:
●.
● The peer-keepalive traffic should never be carried in a VLAN over the peer link; such a configuration would make the peer keepalive useless.
● Make sure to use the peer-gateway command.
● Configure vpc orphan-port suspend on host orphan ports on both the vPC primary and secondary except for the veth interfaces.
Configure Fabric Extenders (if needed)
When configuring fabric extenders with the Cisco Nexus 5500, you should use FEX pre-provisioning. FEX preprovisioning is a feature that was introduced in Cisco NX-OS Software Release 5.0(2)N1(1).
In FEX Active/Active mode, for SAN connectivity purposes you need to decide which FEX belongs to which SAN fabric. So if FEX 105 should send SAN traffic to the Cisco Nexus 5500 Switch that you are configuring, you need to type fcoe under the FEX configuration:
Enable the A-FEX Feature
Configure All Switchports Connected to VIC-Capable Adapters
If using FEX Active/Active mode, remember that the above configurations need to be repeated on both Cisco Nexus 5500 Switches.
Configure VIC Adapters to Operate in A-FEX Mode
From CIMC, go to Inventory>Network Adapters>Modify Adapter Properties.
Select Enable NIV Mode as shown in Figure 30:
After configuring NIV mode, reload the server (hence the adapter too) for this change to take effect.
Configure Port-Profiles on Both vPC Peers
For redundancy purposes, you need to configure the same port-profile name on both vPC peers:
From the server, you should associate the vNIC with the port-profile, as shown in Figure 31:
After completing the configuration, you need to reload the server in order for the operating system to recognize the new PCI device and for the network adapter to send a VIF create message to the Cisco Nexus 5500.
So shut down the server then power it up (Figure 32):
Now, on the Cisco Nexus 5500, this configuration will automatically appear:
Configure the FC Connectivity to the SAN with Unified Ports
Enable Fibre Channel features:
If you use Unified Ports you need to define the range of ports that operate as Fibre Channel. For instance in the setup that was used to validate this design guide:
After changing the ports from Ethernet to Fibre Channel, module 2 must be reloaded:
VSAN and Port VSAN configuration:
Connect the Fibre Channel interface to the Fibre Channel core:
Configure the FCoE connectivity from the A-FEX Adapter
Fibre Channel connectivity from the host vHBAs to the SAN is defined via FCoE. For this reason, you should configure FCoE policy-maps to implement priority flow control and proper bandwidth allocation for the Fibre Channel traffic type.
The following policy-maps already exist, but they may not be mapped:
So in order to enable FCoE, you can simply map them to the system-qos (unless they already are):
Define VLAN-to-VSAN mapping:
Bind veth to the vHBA channel and vFC to veth:
Notice that FIP VLAN discovery is not supported by Linux or ESX servers, so you need to configure the FCoE VLAN information on the vHBA from CIMC itself.
If you are using FEX Active/Active mode, remember that you need to assign each FEX to either fabric by using the fcoe keyword under the fex <number> configuration within the Cisco Nexus 5500 that connects to a given fabric.
Configure Zoning
The following steps in defining zoning are no different from regular Fibre Channel deployments:
1. Add a zone that includes the Port WWN defined above.
2. Activate the zoneset.
3. Optionally, you can copy the zoneset to the local configuration file: zone copy active-zoneset full-zoneset vsan <number>
You can configure zoning and LUN masking on the disk arrays just as if the vHBA were a physical HBA.
First locate the WWN:
Normally, the default policy for a zone is a deny; if it isn’t, you can change it from permit to deny (do not forget to commit after the change when running enhanced zoning):
Verify which zones are active:
The zoning can be defined in any of the Fibre Channel switches, and it will be propagated. Initially, you may want to get the existing zoneset database if there’s already an existing one:
You may then want to check the fcns database to find the WWN of the target:
You would then add a zone to allow the vHBA to see the remote target:
After adding this zone to the zoneset, you need to activate the zoneset:
After doing this, you need to check the zone status:
Make sure that after making changes, you commit the configuration; otherwise, configuration sessions will be locked on other switches in the fabric. The session field tells you which device is changing the zone configuration. Finally, the Active Zoning Database tells you which zone is active.
Sample Configurations
vPC and FEX Straight-Through with Routed Access
Cisco Nexus 5500 Switch 1
Cisco Nexus 5500 Switch 2
vPC and FEX Active/Active with Routed Access | http://www.cisco.com/c/en/us/products/collateral/switches/nexus-5000-series-switches/guide_c07-690080.html | CC-MAIN-2015-22 | en | refinedweb |
@example tag
This tag inserts a piece of code from source code file in the document. The code may be put in a frame with colored syntax depending on output format.
It is possible to select some code file parts by adding anchor comments in the source file ( /* anchor name */ on a single line) and specifying an anchor name pattern. All lines following a matching anchor line will be retained and other lines will be ignored. A /* ... */ comment or /* ... name */ comment in ignored file part will insert an ellipsis line.
Flags available for the @code tag can be used. Moreover, the P flag can be used to insert a code comment with path to the example file in inline code block. This behavior may be enforced for all examples by using the show_examples_path option.
See also code_path section and @scope tag section.
Syntax
@example <path to source file>[:<label>] [<flags>] [{<scope>}] <end of line>
Examples
Consider the following C source file:
/*
Long license header
*/
/* anchor global */
#include <stdio.h>
FILE *file;
int main()
{
/* anchor open */
file = fopen("foo,h", "rw");
if (file == NULL)
return 1;
/* anchor write */
fputs("test", file);
/* ... */
/* anchor close */
fclose(file);
/* anchor global */
return 0;
}
To insert the whole file with line numbers:
@example path/to/file.c N
If you want to skip the header:
@example path/to/file.c:global|open|write|close
If you just want to show how to open and close a file:
@example path/to/file.c:open|close
This will result in the following code being inserted:
file = fopen("foo,h", "rw");
if (file == NULL)
return 1;
[ ... ]
fclose(file); | http://www.nongnu.org/mkdoc/_example_tag.html | CC-MAIN-2015-22 | en | refinedweb |
Zwave To MQTT

Fully configurable Zwave to MQTT Gateway and Control Panel.

THIS PROJECT IS UNMAINTAINED. PLEASE CONSIDER MOVING TO Zwavejs.
- Backend: NodeJS, Express, socket.io, Mqttjs, openzwave-shared, Webpack
- Frontend: Vue, socket.io, Vuetify
!! ATTENTION !!
After a discussion with the Openzwave maintainer, all issues related to OZW 1.4 will be ignored and automatically closed as it isn't supported anymore; please use OZW 1.6+
📖 Table of contents

- Zwave To MQTT
- !! ATTENTION
- 📖 Table of contents
- 🔌 Installation
- 🤓 Development
- 🔧 Usage
- 📁 Nodes Management
- ⭐ Features
- 🤖 Home Assistant integration (BETA)
- 🎁 MQTT APIs
- 📷 Screenshots
- Health check endpoints
- Environment variables
- ❓ FAQ
- 🙏 Thanks
- 📝 TODOs
- Author
🔌 Installation
DOCKER 🎉 way

```shell
# Using volumes as persistence
docker run --rm -it -p 8091:8091 --device=/dev/ttyACM0 --mount source=zwave2mqtt,target=/usr/src/app/store robertslando/zwave2mqtt:latest

# Using local folder as persistence
mkdir store
docker run --rm -it -p 8091:8091 --device=/dev/ttyACM0 -v $(pwd)/store:/usr/src/app/store robertslando/zwave2mqtt:latest

# As a service
wget
docker-compose up
```
Replace `/dev/ttyACM0` with your serial device.
For more info about docker check here
Kubernetes way
```shell
kubectl apply -k
```
You will almost certainly need to instead use this as a base, and then layer on top patches or resource customizations to your needs or just copy all the resources from the kubernetes resources directory of this repo
NodeJS or PKG version
Firstly you need to install the Open-Zwave library on your system.
```shell
cd ~
git clone
cd open-zwave && make && sudo make install
sudo ldconfig
export LD_LIBRARY_PATH=/usr/local/lib64
sudo sed -i '$a LD_LIBRARY_PATH=/usr/local/lib64' /etc/environment
```
Test the library: go to the openzwave directory (`cd openzwave-*`) and run the command `MinOZW /dev/ttyACM0`, replacing `/dev/ttyACM0` with the USB port where your controller is connected.
Now you can use the packaged version (you don't need NodeJS/npm installed) or clone this repo and build the project:
For the packaged version:
```shell
cd ~
mkdir Zwave2Mqtt
cd Zwave2Mqtt
# download latest version
curl -s \
  | grep "browser_download_url.*zip" \
  | cut -d : -f 2,3 \
  | tr -d \" \
  | wget -i -
unzip zwave2mqtt-v*.zip
./zwave2mqtt
```
If you want to compile last code from github:
```shell
git clone
cd Zwave2Mqtt
npm install
npm run build
npm start
```
Open the browser
Reverse Proxy Setup

If you need to set up ZWave To MQTT behind a reverse proxy that needs a subpath to work, take a look at the reverse proxy configuration docs.
🤓 Development
Developers who want to debug the application have to open 2 terminals.

In the first terminal run `npm run dev` to start webpack-dev for front-end developing and hot reloading (THE PORT FOR DEVELOPING IS 8092).

In the second terminal run `npm run dev:server` to start the backend server with inspect and auto restart features (if you don't have nodemon installed: `npm install -g nodemon`).

To package the application run the `npm run pkg` command and follow the steps.
Developing against a different backend
By default, running `npm run dev:server` will proxy the requests to a backend listening on localhost on port 8091.
If you want to run the development frontend against a different backend you have the following environment variables that you can use to redirect to a different backend:
- SERVER_HOST: [Default: 'localhost'] the hostname or IP of the backend server you want to use;
- SERVER_PORT: [Default: '8091'] the port of the backend server you want to use;
- SERVER_SSL: [Default: undefined] if set to a value it will use https/wss to connect to the backend;
- SERVER_URL: [Default: use the other variables] the full URL for the backend API, IE:
- SERVER_WS_URL: [Default: use the other variables] the full URL for the backend Socket, IE: `wss://zwavetomqtt.home.net:8443/`
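As an illustration of how these variables combine, the effective backend URL could be computed like this (a hypothetical sketch, not the actual dev-server code; `resolveBackendUrl` is an assumed helper name):

```javascript
// Sketch of how the dev proxy target could be derived from the
// environment variables above (hypothetical helper, not the real code).
function resolveBackendUrl(env) {
  // SERVER_URL, when set, wins over every other variable
  if (env.SERVER_URL) return env.SERVER_URL;
  const host = env.SERVER_HOST || 'localhost';
  const port = env.SERVER_PORT || '8091';
  // SERVER_SSL set to any value switches to https
  const proto = env.SERVER_SSL ? 'https' : 'http';
  return `${proto}://${host}:${port}`;
}

console.log(resolveBackendUrl(process.env));
```

With no variables set, this yields the default `http://localhost:8091` target described above.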
🔧 Usage
Firstly you need to open the browser at the link and edit the settings for Zwave, MQTT and the Gateway.
Zwave
Zwave settings:
- Serial port: The serial port where your controller is connected
- Network key (Optional): Zwave network key if security is enabled. The correct format is `"0xCA,0xFE,0xBA,0xBE,...."` (16 bytes total)
- Logging: Enable/Disable Openzwave Library logging
- Save configuration: Store zwave configuration in `zwcfg_<homeHex>.xml` and `zwscene.xml` files; this is needed for persistent node information like node name and location
- Poll interval: Interval in milliseconds between polls (should not be less than 1s per device)
- Commands timeout: Seconds to wait before automatically stop inclusion/exclusion
- Configuration Path: The path to Openzwave devices config db
- Assume Awake: Assume Devices that support the Wakeup Class are awake when starting up OZW
- Auto Update Config File: Auto update Zwave devices database
- Hidden settings: advanced settings not visible in the user interface; you can edit these directly in `settings.json`
  - `zwave.plugin` defines a js script that will be included with the `this` context of the zwave client. For example, you could set this to `hack` and include a `hack.js` in the root of the app with `module.exports = zw => { zw.client.on("scan complete", () => console.log("scan complete")) }`
  - `zwave.options` overrides options passed to the zwave client; see `IConstructorParameters` in the open-zwave docs. For example, set `zwave.options.options.EnforceSecureReception=true` to drop insecure messages from devices that should be secure.
MQTT
Mqtt settings:
- Name: A unique name that identify the Gateway.
- Host: The url of the broker. Insert here the protocol if present, example: `tls://localhost`. Mqtt supports these protocols: `mqtt`, `mqtts`, `tcp`, `tls`, `ws` and `wss`
- Port: Broker port
- Reconnect period: Milliseconds between two reconnection tries
- Prefix: The prefix where all values are published
- QoS: Quality Of Service (check MQTT specs) of outgoing packets
- Retain: The retain flag of outgoing packets
- Clean: Sets the clean flag when connecting to the broker
- Store: Enable/Disable persistent storage of packets (QoS > 0). If disabled in memory storage will be used but all packets stored in memory are lost in case of shutdowns or unexpected errors.
- Allow self signed certs: When using encrypted protocols, set this to true to allow self signed certificates (WARNING this could expose you to man in the middle attacks)
- Ca Cert and Key: Certificate Authority, Client Key and Client Certificate files required for secured connections (these fields can be left empty if the broker does not require valid certificates)
- Auth: Enable this if broker requires auth. If so you need to enter also a valid username and password.
Gateway
Gateway settings:
Gateway type: This setting specifies the logic used to publish Zwave Nodes Values in MQTT topics. At the moment there are 3 possible configurations; two are automatic (all values are published to a specific topic) and one requires you to manually configure which values you want to publish to MQTT and what topic to use. For every gateway type you can set custom topic values; if the gateway is not in 'configure manually' mode you can omit the topic of the values (the topic will depend on the gateway type) and use the table to set the values you want to `poll` or scale using a `post operation`.
ValueId Topics: Automatically configured. The topic where zwave values are published will be:
<mqtt_prefix>/<?node_location>/<node_id>/<class_id>/<instance>/<index>
- mqtt_prefix: the prefix set in Mqtt Settings
- node_location: location of the Zwave Node (optional, if not present will not be added to the topic)
- node_id: the unique numerical id of the node in the Zwave network
- class_id: the numerical class id of the value
- instance: the numerical value of the value instance
- index: the numerical index of the value
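The resulting topic can be sketched with a small helper (an illustration only; `valueIdTopic` is a hypothetical name, not the gateway's actual implementation):

```javascript
// Sketch: assemble the ValueId-style topic for a zwave value.
// The optional node location is skipped when empty, as described above.
function valueIdTopic(prefix, location, value) {
  const parts = [prefix];
  if (location) parts.push(location);
  parts.push(value.node_id, value.class_id, value.instance, value.index);
  return parts.join('/');
}

const mode = { node_id: 3, class_id: 64, instance: 1, index: 0 };
console.log(valueIdTopic('zwave', '', mode));       // zwave/3/64/1/0
console.log(valueIdTopic('zwave', 'office', mode)); // zwave/office/3/64/1/0
```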
Named Topics: Automatically configured. DEPRECATED: After a discussion with the Openzwave library author we discourage users from using this configuration, as we cannot ensure that value labels will stay the same; they could change in future versions (and they also depend on the localization added in OZW 1.6). You can find more info HERE
The topic where zwave values are published will be:
<mqtt_prefix>/<?node_location>/<node_name>/<class_name>/<?instance>/<value_label>
- mqtt_prefix: the prefix set in Mqtt Settings
- node_location: location of the Zwave Node (optional, if not present will not be added to the topic)
- node_name: name of the node, if not set it will be `nodeID_<node_id>`
- class_name: the node class name corresponding to the given class id, or `unknownClass_<class_id>` if the class name is not found
- ?instance: used just with multi-instance devices. The main instance (1) will not have this part in the topic, but other instances will have `instance_<instance_index>`
- value_label: the zwave value label (lower case and spaces are replaced with `_`)
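The label normalization described in the last point can be sketched as follows (hypothetical helper, not the gateway's actual code):

```javascript
// Sketch: normalize a zwave value label for use in a Named topic
// (lower case, spaces replaced with underscores).
function sanitizeLabel(label) {
  return label.toLowerCase().split(' ').join('_');
}

console.log(sanitizeLabel('Battery Level')); // battery_level
```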
Configured Manually: Needs configuration. The topic where zwave values are published will be:
<mqtt_prefix>/<?node_location>/<node_name>/<value_topic>
- mqtt_prefix: the prefix set in Mqtt Settings
- node_location: location of the Zwave Node (optional, if not present will not be added to the topic)
- node_name: name of the node, if not set it will be `nodeID_<node_id>`
- value_topic: the topic you want to use for that value (taken from the gateway values table).
Payload type: The content of the payload when an update is published:
JSON Time-Value: The payload will be a JSON object like:

```json
{ "time": 1548683523859, "value": 10 }
```
Entire Zwave value Object: The payload will contain all info of a value from the Zwave network:

```json
{
  "value_id": "3-64-1-0",
  "node_id": 3,
  "class_id": 64,
  "type": "list",
  "genre": "user",
  "instance": 1,
  "index": 0,
  "label": "Mode",
  "units": "",
  "help": "",
  "read_only": false,
  "write_only": false,
  "min": 0,
  "max": 0,
  "is_polled": false,
  "values": ["Off", "Heat (Default)", "Cool", "Energy Heat"],
  "value": "Off"
}
```
Just value: The payload will contain only the raw Numeric/String value
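A subscriber could handle all three payload types with a sketch like this (an illustration, assuming the payload arrives as a string or Buffer from an MQTT client; `extractValue` is a hypothetical helper):

```javascript
// Sketch: extract the raw reading from the three possible payload types.
function extractValue(payload) {
  try {
    const parsed = JSON.parse(payload);
    if (parsed !== null && typeof parsed === 'object') {
      // "JSON Time-Value" and "Entire Zwave value Object" both
      // expose the raw reading under the `value` key.
      return parsed.value;
    }
    return parsed; // "Just value" with a numeric payload
  } catch (e) {
    return payload.toString(); // "Just value" with a plain string payload
  }
}

console.log(extractValue('{"time":1548683523859,"value":10}')); // 10
console.log(extractValue('21.5')); // 21.5
console.log(extractValue('Off')); // Off
```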
Ignore status updates: Enable this to prevent the gateway from sending an MQTT message when a node changes its status (dead/sleep == false, alive == true)
Ignore location: Enable this to remove the node location from topics
Send Zwave Events: Enable this to send all Zwave client events to MQTT. More info here
Send 'list' as integer: Zwave 'list' values are sent as list index instead of string values
Use nodes name instead of numeric nodeIDs: When gateway type is `ValueId`, use this flag to force the use of node names instead of node ids in the topic.
⭐Hass discovery ⭐: Enable this to automatically create entities on Hass using MQTT autodiscovery (more about this here)
Discovery Prefix: The prefix to use to send MQTT discovery messages to HASS
Once finished press `SAVE` and the gateway will start the Zwave Network Scan, then go to the 'Control Panel' section and wait until the scan is completed to check discovered devices and manage them.
Settings, scenes and Zwave configuration are stored in `JSON`/`xml` files under the project `store` folder, which you can easily import/export for backup purposes.
Special topics
- Node status (`true` if node is ready, `false` otherwise) will be published in: `<mqtt_prefix>/<?node_location>/<node_name>/status`
- Node events (value will be the event code) will be published in: `<mqtt_prefix>/<?node_location>/<node_name>/event`
- Scene events will be published in:
  - OZW 1.4: `<mqtt_prefix>/<?node_location>/<node_name>/scene/event` (value will be the scene event code)
  - OZW 1.6: In OZW 1.6 scenes are treated like a valueID (so the topic depends on the gateway configuration). For example, if the command class is `91` (`central_scene`) and the gateway uses valueid topics: `<mqtt_prefix>/<?node_location>/<node_name>/91/1/1` (the value published in the payload will depend on the gateway payload type)
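For example, a consumer that tracks node availability could derive the status topic and decode its payload like this (a sketch with assumed helper names, not part of the gateway itself):

```javascript
// Sketch: build the status topic for a node and decode its payload.
// The optional location is skipped when empty, as in the topics above.
function statusTopic(prefix, location, nodeName) {
  return [prefix, location, nodeName, 'status'].filter(Boolean).join('/');
}

function isAlive(payload) {
  // "true" means the node is ready, "false" means dead/sleeping
  return payload.toString() === 'true';
}

console.log(statusTopic('zwave', 'office', 'multisensor')); // zwave/office/multisensor/status
console.log(isAlive('true')); // true
```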
Gateway values table
The Gateway values table can be used with all gateway types to customize specific values topic for each device type found in the network and do some operations with them. Each value has this properties:
- Device: The device type. Once the scan is complete, the gateway creates an array with all device types found in the network. A device has a `device_id` that is unique; it is composed of these node properties: `<manufacturerid>-<productid>-<producttype>`.
- Value: The value you want to customize
- Device Class: If the value is a multilevel sensor, a binary sensor or a meter you can set a custom `device_class` to use with home assistant discovery. Check sensor and binary sensor
- Topic: The topic to use for this value. It is the topic added after the topic prefix, node name and location. If the gateway type is different from `Manual`, this can be left blank and the value topic will be the one based on the chosen gateway configuration.
- Post operation: If you want to convert your value (eg. '/10' '/100' '*10' '*100')
- Poll: Enable this to set the value's `enablePoll` flag
- Verify Changes: Used to verify changes of this value
- Parse Send: Enable this to allow users to specify a custom `function(value)` to parse the value sent to MQTT. The function must be sync.
- Parse Receive: Enable this to allow users to specify a custom `function(value)` to parse the value received via MQTT. The function must be sync.
📁 Nodes Management
Add a node
To add a node using the UI, select the controller action Add Node (inclusion) and click send. Controller status will be `waiting` when inclusion has been successfully enabled on the controller and `completed` when the node has been successfully added. Wait a few seconds and your node will be visible in the table once ready.
Remove a node
To remove a node using the UI, select the controller action Remove Node (exclusion) and click send. Controller status will be `waiting` when exclusion has been successfully enabled on the controller and `completed` when the node has been successfully removed. Wait a few seconds and your node will be removed from the table.
Replace failed node
To replace a failed node from the UI, use the command Replace Failed Node. If everything is ok the controller will start inclusion mode and status will be `Waiting`; now enable inclusion on your device to add it to the network by replacing the failed one.
Remove a failed node
If a node is missing or marked as dead, there is a way to clean up the controller by executing Remove Failed Node. This will forcibly delete the node from the controller.
It can only succeed if the node:
- has first been marked as failed using Has node failed
- is marked as Dead by the controller
Alive and Sleeping nodes cannot be deleted.
⭐ Features
- Nodes name and location: `name` and `location` are read from the zwave controller's `.xml` config file (if present) to create the new `nodes.json` file based on that. This file can be imported/exported from the UI control panel with the import/export buttons placed on the top of the nodes table, on the right of the controller actions select.
- Groups associations: create associations between nodes (also supports multi-instance associations; requires the latest version of openzwave-shared)
- Custom scenes management: OpenZwave-Shared scenes management currently has some bugs and is limited, so I have made a custom scenes implementation that uses the same APIs but stores values in a JSON file that can be imported/exported and also allows setting a timeout for a value in a scene
- Log debug in UI
- Mesh graph showing devices neighbors
🤖 Home Assistant integration (BETA)
At least Home Assistant >= 0.84 is required!
The easiest way to integrate Zwave2Mqtt with Home Assistant is by using MQTT discovery. This allows Zwave2Mqtt to automatically add devices to Home Assistant. To enable this feature remember to set the flag Hass Discovery in Gateway settings configuration.
ATTENTION: Hass updates often break Zwave2Mqtt device discovery. For this reason Zwave2Mqtt tries to always be compatible with the latest hass version. Check the changelog before updating!
To achieve the best possible integration (including MQTT discovery):
- In your Zwave2Mqtt gateway settings enable the `Homeassistant discovery` flag and enable MQTT retain too. The retain flag for MQTT is suggested to be sure that, once discovered, each device gets the last value published (otherwise you have to wait for a value change).
NB: Starting from version `4.0.0` the default Birth/Will topic is `homeassistant/status` in order to reflect the default birth/will of Hass `0.113`
- In your Home Assistant `configuration.yaml`:
mqtt: discovery: true discovery_prefix: <your_discovery_prefix> broker: [YOUR MQTT BROKER] # Remove if you want to use builtin-in MQTT broker birth_message: topic: 'hass/status' # or homeassistant/status if z2m version >= 4.0.0 payload: 'online' will_message: topic: 'hass/status' # or homeassistant/status if z2m version >= 4.0.0 payload: 'offline'
Mind you that if you want to use the embedded broker of Home Assistant you have to follow this guide.
Zwave2Mqtt expects Home Assistant to send its birth/will messages to `hass/status` (or `homeassistant/status` if z2m version >= 4.0.0). Be sure to add this to your `configuration.yaml` if you want Zwave2Mqtt to resend the cached values when Home Assistant restarts.
Zwave2Mqtt tries to do its best to guess how to map devices from Zwave to HASS. At the moment it guesses the device to generate based on zwave value command classes, index and units of the value. When the discovered device doesn't fit your needs you can set a custom `device_class` for values using the Gateway value table.
Components management
To see the components that have been discovered by Zwave2Mqtt, go to the Control Panel UI, select a node from the Nodes table, then select the Node tab from the tabs menu at the bottom of the Nodes table. At the bottom of the page, after the Node values section, you can find a new section called `Home Assistant - Devices`. Here you will see a table with all devices created for the selected node.
ATTENTION
Once edited, devices will lose all their customizations after a restart. To prevent this you can store the node hassDevices by pressing the STORE button at the top of the hass devices table. By pressing it the hassDevices will be stored in the `nodes.json` file that can be easily imported/exported from the control panel UI at the top of the nodes table.
Rediscover Node
If you update a node's name/location you also have to rediscover the values of this node as they may have wrong topics. To do this press the REDISCOVER NODE green button on top of the Home Assistant - Devices table (check the previous picture).
Edit existing component
If you select a device, its configuration will be displayed as a JSON object on the right. With the selected device you can edit it and send some actions:
Update: Update in-memory hass device configuration
Rediscover: Re-discover this device using the `discoveryTopic` and `discovery_payload` of the configuration
Delete: Delete the device from Hass entities of selected node
Add new component
If no device is selected you can manually insert a device JSON configuration. If the configuration is valid you can press the Add button to add it to devices. If the process completes successfully the device will be added to the Hass Devices table and you can now select it from the table and press Rediscover to discover your custom device.
Custom Components
At the moment auto discovery just creates components like `sensor`, `cover`, `binary_sensor` and `switch`. For more complex components like `climate` and `fan` you need to provide a configuration. Components configurations are stored in the `hass/devices.js` file. Here are contained all components that Zwave2MQTT needs to create for each Zwave device type. The key is the Zwave device unique id (`<manufacturerid>-<productid>-<producttype>`), the value is an array with all HASS components to create for that Zwave device.
UPDATE: Starting from version 2.0.7 you can specify your custom devices configuration inside the `store/customDevices(.js|.json)` file. This allows users that use Docker to create their custom hass devices configuration without the need to build a new container. If using the `.json` format, Zwave2Mqtt will watch for file changes and automatically load new components at runtime without the need to restart the application.
ONCE YOU SUCCESSFULLY INTEGRATE NEW COMPONENTS PLEASE SEND A PR!
Identify the Device id
Starting from version 2.2.0 the device id is shown on the node tab of the control panel, before the inputs used to update the node name and location.
Before version 2.2.0 you can get the device id in these ways:
The first (and easier) option is to add a random value in the gateway values table for the desired device; the device id will be visible in the first column of the table (`Devices`) between square brackets: `[<deviceID>] Device Name`.
The second option is to retrieve it from here. Each device has a manufacturer id, a product id and a product type in HEX format that need to be converted to decimal:
<Manufacturer id="019b" name="ThermoFloor AS">
  <Product config="thermofloor/heatit021.xml" id="0001" name="Heatit Thermostat TF 021" type="0001"/>
  <Product config="thermofloor/heatit056.xml" id="0202" name="Heatit Thermostat TF 056" type="0003"/>
  <Product config="thermofloor/heatit-zdim.xml" id="2200" name="Heatit ZDim" type="0003"/>
</Manufacturer>
In this example, if we choose `Heatit Thermostat TF 056`:
- Manufacturer id: `19b` --> `411`
- Product id: `202` --> `514`
- Product type: `3` --> `3`
So in decimal format it becomes `411-514-3`. This is the device id of `Heatit Thermostat TF 056`.
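This hex-to-decimal conversion is easy to script. A minimal Python sketch (the `device_id` helper is hypothetical, not part of Zwave2Mqtt):

```python
def device_id(manufacturer_id_hex, product_id_hex, product_type_hex):
    """Build the decimal device id string <manufacturerid>-<productid>-<producttype>
    from the HEX values found in the OpenZwave device database."""
    parts = (manufacturer_id_hex, product_id_hex, product_type_hex)
    return "-".join(str(int(p, 16)) for p in parts)

# Heatit Thermostat TF 056 from the example above
print(device_id("019b", "0202", "0003"))  # 411-514-3
```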
Thermostats
{ "411-1-1":[ { // Heatit Thermostat TF 021 (ThermoFloor AS) "type": "climate", "object_id": "thermostat", "values": ["64-1-0", "49-1-1", "67-1-1", "67-1-2"], "mode_map": {"off": "Off", "heat": "Heat (Default)", "cool": "Cool"}, "setpoint_topic": { "Heat (Default)": "67-1-1", "Cool": "67-1-2" }, "default_setpoint": "67-1-1", "discovery_payload": { "min_temp": 15, "max_temp": 30, "modes": ["off", "heat", "cool"], "mode_state_topic": "64-1-0", "mode_command_topic": true, "current_temperature_topic": "49-1-1", "current_temperature_template": "{{ value_json.value }}", "temperature_state_template": "{{ value_json.value }}", "temperature_command_topic": true } } ] }
- type: The hass MQTT component type
- object_id: The unique id of this object (must be unique for the device)
- values: Array of values used by this component
- mode_map: Key-Value object where keys are MQTT Climate modes and values are the matching thermostat modes values
- setpoint_topic: Key-Value object where keys are the modes of the Zwave thermostat and values are the matching setpoint `value_id` (use this if your thermostat has more than one setpoint)
- default_setpoint: The default thermostat setpoint.
- discovery_payload: The payload sent to hass to discover this device. Check here for a list with all supported options
- min_temp/max_temp: Min/Max temperature of the thermostat
- modes: Array of Hass Climate supported modes. Allowed values are `["auto", "off", "cool", "heat", "dry", "fan_only"]`
- mode_state_topic: `value_id` of the mode value
- current_temperature_topic: `value_id` of the current temperature value
- current_temperature_template/temperature_state_template: Template used to fetch the value from the MQTT payload
- temperature_command_topic/mode_command_topic: If true, these values are subscribed to command topics so Hass can change them
Thermostats are the most complex components to create; in this device example the setpoint topic changes based on the selected mode. Zwave2Mqtt handles mode changes by updating the device discovery payload to match the correct setpoint based on the selected mode.
Fans
{ // GE 1724 Dimmer "type": "fan", "object_id": "dimmer", "values": ["38-1-0"], "discovery_payload": { "command_topic": "38-1-0", "speed_command_topic": "38-1-0", "speed_state_topic": "38-1-0", "state_topic": "38-1-0", "speeds": ["off", "low", "medium", "high"], "payload_low_speed": 24, "payload_medium_speed": 50, "payload_high_speed": 99, "payload_off": 0, "payload_on": 99, "state_value_template": "{% if (value_json.value | int) == 0 %} 0 {% else %} 99 {% endif %}", "speed_value_template": "{% if (value_json.value | int) == 25 %} 24 {% elif (value_json.value | int) == 51 %} 50 {% elif (value_json.value | int) == 99 %} 99 {% else %} 0 {% endif %}" } }
- type: The hass MQTT component type
- object_id: The unique id of this object (must be unique for the device)
- values: Array of values used by this component
- discovery_payload: The payload sent to hass to discover this device. Check here for a list with all supported options
- command_topic: The topic to send commands
- state_topic: The topic to receive state updates
- speed_command_topic: The topic used to send speed commands
- state_value_template: The template used to set the value ON/OFF based on the payload received
- speed_value_template: The template used to map the payload received to one of the speeds `["off", "low", "medium", "high"]`
Thermostats with Fans
The main template is like the thermostat template. The things to add are:
{ // GoControl GC-TBZ48 (Linear Nortek Security Control LLC) "type": "climate", "object_id": "thermostat", "values": [ "49-1-1", "64-1-0", "66-1-0", // <-- add fan values "67-1-1", "67-1-2", "68-1-0" // <-- add fan values ], "fan_mode_map": { // <-- add fan modes map "on": "On", "auto": "Auto" }, "mode_map": { "off": "Off", "heat": "Heat", "cool": "Cool", "auto": "Auto" }, "setpoint_topic": { "Heat": "67-1-1", "Cool": "67-1-2" }, "default_setpoint": "67-1-1", "discovery_payload": { "min_temp": 60, "max_temp": 85, "modes": [ "off", "heat", "cool", "auto" ], "fan_modes": [ // <-- add fan supported modes "on", "auto" ], "action_topic": "66-1-0", "mode_state_topic": "64-1-0", "mode_command_topic": true, "current_temperature_topic": "49-1-1", "current_temperature_template": "{{ value_json.value }}", "temperature_state_template": "{{ value_json.value }}", "temperature_low_command_topic": true, "temperature_low_state_template": "{{ value_json.value }}", "temperature_high_command_topic": true, "temperature_high_state_template": "{{ value_json.value }}", "fan_mode_command_topic": true, "fan_mode_state_topic": "68-1-0" // <-- add fan state topic } }
🎁 MQTT APIs
You have full access to all Openzwave-Shared APIs (and more) by simply using MQTT.
Zwave Events
If the Send Zwave Events flag of the Gateway settings section is enabled, all Zwave events are published to MQTT. Here you can find a list with all available events.
Topic
<mqtt_prefix>/_EVENTS_/ZWAVE_GATEWAY-<mqtt_name>/<event name>
Payload
{ "data": [ "1.4.3319" ] // an array containing all args in order }
Example
Topic
zwave2mqtt/_EVENTS/ZWAVE_GATEWAY-z2m/node_ready
Payload
{ "data": [ 1, { "manufacturer": "AEON Labs", "manufacturerid": "0x0086", "product": "ZW090 Z-Stick Gen5 EU", "producttype": "0x0001", "productid": "0x005a", "type": "Static PC Controller", "name": "", "loc": "" } ] }
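As an illustration, decoding such an event payload is plain JSON parsing; any MQTT client library can deliver the raw message. The snippet below only parses a trimmed copy of the example payload shown above:

```python
import json

# The node_ready payload from the example above (trimmed to a few fields)
raw = '{"data": [1, {"manufacturer": "AEON Labs", "product": "ZW090 Z-Stick Gen5 EU", "name": "", "loc": ""}]}'

event = json.loads(raw)
node_id, node_info = event["data"]  # args are delivered in order
print(node_id, node_info["product"])  # 1 ZW090 Z-Stick Gen5 EU
```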
Zwave APIs
To call a Zwave API you just need to publish a JSON object like:
{ "args": [2, 1] }
Where `args` is an array with the args used to call the api. The topic is:
<mqtt_prefix>/_CLIENTS/ZWAVE_GATEWAY-<mqtt_name>/api/<api_name>/set
The result will be published on the same topic without `/set`.
Example: If I publish the previous json object to the topic
zwave/_CLIENTS/ZWAVE_GATEWAY-office/api/getAssociations/set
I will get this response (on the same topic without the `/set` suffix):
{ "success": true, "message": "Success zwave api call", "result": [1] }
`result` will contain the value returned from the API. In this example I will get an array with all node IDs that are associated to group 1 (lifeline) of node 2.
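A hedged Python sketch of building the call topic and payload (pure string/JSON helpers with hypothetical names; the actual publish would be done with any MQTT client, e.g. paho-mqtt, and is not shown):

```python
import json

def api_call_topic(prefix, gateway_name, api_name):
    """Topic to publish to in order to invoke a Zwave API via MQTT."""
    return "{}/_CLIENTS/ZWAVE_GATEWAY-{}/api/{}/set".format(prefix, gateway_name, api_name)

def api_result_topic(call_topic):
    """The result is published on the same topic without the /set suffix."""
    return call_topic[:-len("/set")]

topic = api_call_topic("zwave", "office", "getAssociations")
payload = json.dumps({"args": [2, 1]})  # node 2, group 1 (lifeline)
print(topic)                    # zwave/_CLIENTS/ZWAVE_GATEWAY-office/api/getAssociations/set
print(api_result_topic(topic))  # zwave/_CLIENTS/ZWAVE_GATEWAY-office/api/getAssociations
```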
Custom APIs
There are some custom apis that can be called that are not part of the Zwave client:
- All Zwave client scenes management methods, if preceded by a `_`, will use the internal scenes management instead of OZW scenes:
_createScene
_removeScene
_setScenes
_getScenes
_sceneGetValues
_addSceneValue
_removeSceneValue
_activateScene
- `_setNodeName` and `_setNodeLocation` will use the internal nodes store to save node names/locations in a json file
refreshNeighborns: Returns an array where the index is the nodeId and each value is an array with all of that node's neighbors
getNodes: Returns an array with all nodes in the network (and their info/valueids)
getInfo: Returns an object with:
homeid: homeId
name: homeId Hex
version: OpenZwave version
uptime: Seconds since the app process started. It's the result of `process.uptime()`
lastUpdate: Timestamp of latest event received from OZW
status: Client status. Could be: 'driverReady', 'connected', 'scanDone', 'driverFailed', 'closed'
cntStatus: Controller status received from the ozw notifications controller command. If inclusion/exclusion is running it would be `Waiting`
Set values
To write a value using MQTT you just need to send the value to set to the same topic where value updates are published, adding the suffix `/set` (READONLY VALUES CANNOT BE WRITTEN).
Example with a gateway configured with `named` topics:
If I publish the value `25.5` (a payload with a JSON object with the value in a `value` property is also accepted) to the topic:
zwave/office/nodeID_4/thermostat_setpoint/heating/set
I will set the heating setpoint of the node with id `4` located in the `office` to `25.5`. To check if the value has been successfully written, just check when the value changes on the topic:
zwave/office/nodeID_4/thermostat_setpoint/heating
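In code, the write topic is simply the value topic plus the `/set` suffix. A small illustrative sketch (the helper name is hypothetical):

```python
import json

def set_topic(value_topic):
    """Topic used to write a value: the read topic plus the /set suffix."""
    return value_topic + "/set"

read_topic = "zwave/office/nodeID_4/thermostat_setpoint/heating"
print(set_topic(read_topic))  # zwave/office/nodeID_4/thermostat_setpoint/heating/set

# Either a plain value or a JSON object with a `value` property is accepted
payload = json.dumps({"value": 25.5})
```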
Broadcast
You can send broadcast values to all values with a specific suffix in the network.
Broadcast API is accessible from:
<mqtt_prefix>/_CLIENTS/ZWAVE_GATEWAY-<mqtt_name>/broadcast/<value_topic_suffix>/set
value_topic_suffix: the suffix of the topic of the value I want to control using broadcast.
It works like the set value API without the node name and location properties.
If the API is correctly called the same payload of the request will be published
to the topic without
/set suffix.
Example of a broadcast command (gateway configured with `named` topics):
zwave/_CLIENTS/ZWAVE_GATEWAY-test/broadcast/thermostat_setpoint/heating/set
Payload:
25.5
All nodes with command class `thermostat_setpoint` and value `heating` will be set to `25.5` and I will get the same value on the topic:
zwave/_CLIENTS/ZWAVE_GATEWAY-test/broadcast/thermostat_setpoint/heating
📷 Screenshots
Settings
Control Panel
Groups associations
Scenes
Mesh
Debug
Health check endpoints
/health: Returns `200` if both the mqtt and zwave clients are connected, `500` otherwise
/health/mqtt: Returns `200` if the mqtt client is connected, `500` otherwise
/health/zwave: Returns `200` if the zwave client is connected, `500` otherwise
Remember to add the header `Accept: text/plain` to your request.
Example:
curl localhost:8091/health/zwave -H "Accept: text/plain"
Environment variables
Note: Each of the following environment variables corresponds to its respective option in the UI settings, and options saved in the UI take precedence over these environment variables.
OZW_NETWORK_KEY
OZW_SAVE_CONFIG
OZW_POLL_INTERVAL
OZW_AUTO_UPDATE_CONFIG
OZW_CONFIG_PATH
OZW_ASSUME_AWAKE
Q: Why, when I add a value to the Gateway values table, don't I see all my devices?
A: When adding values to the gateway values table it shows JUST ONE DEVICE FOR EACH TYPE. This makes it easier and faster to set up your network: if you have a network with many devices (lights and light dimmers, for example) you just need to add the values you want to bridge to mqtt (for a light it will always be just the switch to turn it on/off, for example, without all the configuration values) and it will bridge those values for all devices of that type (without configuring the values one by one).
Q: My device is X and has been discovered as Y, why?
A: Hass discovery is not easy; zwave has many different devices with different values. To understand how to discover a specific value I have used this file that shows what kind of value is expected based on the value class and index. Unfortunately not all devices respect these specifications, so for those cases I have created the Hass Devices table where you can manually fix the discovery payload and then save it to make it persistent. I have also created a file `/hass/devices.js` where I place all device-specific values configurations; your contribution is needed there, so submit a PR with your files specification to help it grow.
🙏 Thanks
Thanks to these people for their help with issue tracking and contributions:
📝 TODOs
- Better logging
- Dockerize application
- Package application with PKG
- HASS integration, check zigbee2mqtt
- Add unit test
- JSON validator for settings and scenes
- Better nodes status management using 'testNode'
- Network graph to show neighbours using vue-d3-network
Machine learning, or ML, is a subfield of AI focused on algorithms that learn models from data.
Let’s look at a practical application of machine learning in the field of Computer Vision called neural style transfer. In 2015, researchers used deep learning techniques to create an algorithm that mixed the content of one image with the artistic style of another. This new algorithm generated unique images, but also offered a unique perspective into how our visual system may infer new artistic concepts.
As its name suggests, neural style transfer relies on neural networks to perform this task. The exact details of this implementation are beyond the scope of this tutorial, but you can learn more in this blog post on artistic style transfer or from the original research manuscript.
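To give a flavor of what "style" means to these algorithms: style losses typically compare Gram matrices of convolutional feature maps, which capture correlations between channels independent of spatial layout. A toy NumPy sketch of the idea (an illustration of the general technique, not code from the implementation used below):

```python
import numpy as np

def gram_matrix(features):
    """Normalized Gram matrix of a (channels, height, width) feature map."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (c * h * w)

feats = np.ones((2, 3, 3))  # toy feature map: two constant channels
print(gram_matrix(feats))   # every entry is 9 / 18 = 0.5
```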
In this tutorial, you will apply neural style transfer using Jupyter Notebook and the Linux command line to take an image like this:
and transform it by applying the artistic style of Vincent van Gogh’s “Starry Night” to create this image:
To complete this tutorial, you will need:
Working with machine learning models can be memory intensive, so your machine should have at least 8GB of memory to perform some of the calculations in this tutorial.
In this tutorial, we'll use an open-source implementation of neural style transfer provided by Hang Zhang called PyTorch-Style-Transfer. This particular implementation uses the `PyTorch` library.
Activate your programming environment, and install the `torchvision` and `torchfile` packages with the following command:
- pip install torchvision torchfile
Pip will automatically retrieve their dependencies:
Output
Successfully installed numpy-1.22.0 pillow-9.0.0 torch-1.10.1 torchfile-0.1.0 torchvision-0.11.2 typing-extensions-4.0.1
To avoid cluttering your home directory with files, create a new directory called `style_transfer` and use it as your working directory:
- mkdir style_transfer
- cd style_transfer
Next, clone the `PyTorch-Style-Transfer` repository to your working directory using the `git clone` command. You can learn more about Git in this Git tutorial series.
- git clone
The author of this repository has placed the code we will be using in the `experiments` folder of the `PyTorch-Style-Transfer` repository, so switch to this directory once all files have been cloned:
- cd PyTorch-Style-Transfer/experiments
Take a look at the contents of the `experiments` directory:
- ls
You’ll see the following directories:
Output
camera_demo.py dataset images main.py models net.py option.py utils.py
In this tutorial you'll work with the `images/` directory, which contains stock images, and the `main.py` script, which is used to apply neural style transfer to your images.
Before moving to the next section, you also need to download the pre-trained deep learning model required to run neural style transfer. These models can be large and therefore not suitable for storing on GitHub, so the author provides a small script to download the file. You'll find the script at `models/download_model.sh`.
First, make the script executable:
- chmod +x ./models/download_model.sh
Then execute the script to download the model:
- ./models/download_model.sh
Now that everything’s downloaded, let’s use these tools to transform some images.
To illustrate how neural style transfer works, let's start by using the example provided by the author of the `PyTorch-Style-Transfer` repository. Since we will need to display and view images, it will be more convenient to use a Jupyter notebook.
Launch Jupyter from your terminal:
- jupyter notebook
Then access Jupyter by following the instructions presented.
Once Jupyter is displayed, create a new notebook by selecting New > Python 3 from the top right pull-down menu:
This opens a new notebook where you can enter your code.
At the top of the notebook, add the following code to load the required libraries.
import torch
import os
import subprocess
from IPython.display import Image
from IPython.display import display
Along with `torch`, we're also importing the standard libraries `os` and `subprocess`, which we'll use to run Python scripts directly from the Jupyter notebook. We also include the `IPython.display` library, which lets us display images within the Jupyter notebook.
Note: Type `ALT+ENTER` (or `SHIFT+ENTER` on macOS) to run the code and move into a new code block within your notebook. Do this after each code block in this tutorial to see your results.
The example provided in the `README` file of the `PyTorch-Style-Transfer` repository uses stock images located in the `images/` directory and the `main.py` script. You will need to provide at least five arguments in order to run the `main.py` script:
- The path to the content image (located in `/images/content`).
- The path to the style image (located in `/images/21styles`).
- The path to the pre-trained model (located in `/models`).
- The path and name for the output image.
- Whether to run on a GPU: if so, use the `--cuda=1` parameter, otherwise use `--cuda=0`.
To run the neural style transfer code, we'll specify the required arguments and use the `subprocess` library to run the command in the shell.
First, let's define the path to our working directory. We'll store it in a variable called `workingdir`:
# define the path to the working directory
experiment_dir = 'style_transfer/PyTorch-Style-Transfer/experiments'
workingdir = '{}/{}'.format(os.environ['HOME'], experiment_dir)
We’ll use this variable throughout our code when we point to images and other files.
Now let's define the path to the `main.py` script, as well as the list of arguments that we will use as input for this test run. We'll specify that the content image is `venice-boat.jpg`, the style image is `starry_night.jpg`, and we'll save the output of our neural style transfer to a file called `test.jpg`:
# specify the path to the main.py script
path2script = '{}/main.py'.format(workingdir)

# specify the list of arguments to be used as input to main.py
args = ['eval',
        '--content-image', '{}/images/content/venice-boat.jpg'.format(workingdir),
        '--style-image', '{}/images/21styles/starry_night.jpg'.format(workingdir),
        '--model', '{}/models/21styles.model'.format(workingdir),
        '--output-image', '{}/test.jpg'.format(workingdir),
        '--cuda=0']
Before running the test example, you can take a quick look at the content and style images that you have chosen for this example by executing this code in your notebook:
content_image = Image('{}/images/content/venice-boat.jpg'.format(workingdir))
style_image = Image('{}/images/21styles/starry_night.jpg'.format(workingdir))
display(content_image)
display(style_image)
You’ll see these images displayed in the output:
Finally, concatenate the call to `main.py` and its list of arguments and run it in the shell using the `subprocess.check_output` function:
# build subprocess command
cmd = ['python3', path2script] + args

# run the command
x = subprocess.check_output(cmd, universal_newlines=True)
Depending on the amount of memory available on your machine, this may take a minute or two to run. Once it has completed, you should see a `test.jpg` file in your working directory. From a Jupyter notebook, you can use IPython magic commands to display the contents of your working directory within the Jupyter notebook:
!ls $workingdir
Alternatively, you can use the `ls` command in your terminal. Either way you'll see the following output:
Output
__pycache__ dataset main.py myutils option.py camera_demo.py images models net test.jpg
You'll see a new file called `test.jpg`, which contains the results of the neural style transfer using your input content and style images.
Use the `Image` function to display the contents of `test.jpg`:
Image('{}/test.jpg'.format(workingdir))
The artistic style of Vincent van Gogh's Starry Night has been mapped onto the content of our Venetian boat image. You've successfully applied neural style transfer with a textbook example, so let's try repeating this exercise with different images.
So far, you’ve used the images provided by the author of the library we’re using. Let’s use our own images instead. To do this, you can either find an image you are interested in and use the URL for the image in the following command, or use the URL provided to use Sammy the Shark.
We'll use some IPython magic again to download the image to our working directory and place it into a file called `sammy.png`.
!wget -O - '' > $workingdir/sammy.png
When you run this command in your notebook, you’ll see the following output:
Output--2017-08-15 20:03:27-- Resolving assets.digitalocean.com (assets.digitalocean.com)... 151.101.20.233 Connecting to assets.digitalocean.com (assets.digitalocean.com)|151.101.20.233|:443... connected. HTTP request sent, awaiting response... 200 OK Length: 10483 (10K) [image/png] Saving to: 'STDOUT' - 100%[===================>] 10.24K --.-KB/s in 0.001s 2017-08-15 20:03:27 (12.9 MB/s) - written to stdout [10483/10483]
Use the `Image` command to display the new image in the notebook:
Image('{}/sammy.png'.format(workingdir))
Following the same workflow as the test run, let’s run our artistic style transfer model using Rocket Sammy as the content image, and the same Starry Night picture as our style image.
We'll use the same code we used previously, but this time we'll specify the content image to be `sammy.png`, the style image to be `starry_night.jpg`, and we'll write the output to a file called `starry_sammy.jpg`. Then we execute the command:
# specify the path to the main.py script
path2script = '{}/main.py'.format(workingdir)

# specify the list of arguments to be used as input to main.py
args = ['eval',
        '--content-image', '{}/sammy.png'.format(workingdir),
        '--style-image', '{}/images/21styles/starry_night.jpg'.format(workingdir),
        '--model', '{}/models/21styles.model'.format(workingdir),
        '--output-image', '{}/starry_sammy.jpg'.format(workingdir),
        '--cuda=0']

# build subprocess command
cmd = ['python3', path2script] + args

# run the bash command
x = subprocess.check_output(cmd, universal_newlines=True)
Then use the `Image` function to view the results of transferring the artistic style of Vincent van Gogh's Starry Night to the content of your Rocket Sammy image.
Image('{}/starry_sammy.jpg'.format(workingdir))
You’ll see the new stylized Rocket Sammy:
Let's try this again by mapping a different style image to our picture of Rocket Sammy. This time we'll use Picasso's The Muse. Again, we use `sammy.png` as our content image, but we'll change the style image to `la_muse.jpg`. We'll save the output to `musing_sammy.jpg`:
# specify the path to the main.py script
path2script = '{}/main.py'.format(workingdir)

# specify the list of arguments to be used as input to main.py
args = ['eval',
        '--content-image', '{}/sammy.png'.format(workingdir),
        '--style-image', '{}/images/21styles/la_muse.jpg'.format(workingdir),
        '--model', '{}/models/21styles.model'.format(workingdir),
        '--output-image', '{}/musing_sammy.jpg'.format(workingdir),
        '--cuda=0']

# build subprocess command
cmd = ['python3', path2script] + args

# run the bash command
x = subprocess.check_output(cmd, universal_newlines=True)
Once the code has finished running, display the output of your work using the output filename you specified and the `Image` function:
Image('{}/musing_sammy.jpg'.format(workingdir))
By now, you should have a good idea how to use these transformations. Try using some of your own images if you haven’t already.
In this tutorial, you used Python and an open-source PyTorch implementation of a neural style transfer model to apply stylistic transfer to images. The field of machine learning and AI is vast, and this is only one of its applications. Here are some additional things you can explore:
There’s a more up to date, more user friendly/easier to use, and better implementation of Neural-Style in Pytorch here:
It works with Python 3 and Python 2.7.
Also, PyTorch can now be installed using pip or pip3 with simply “pip3 install torch” and “pip3 install torchvision”.
I got as far as trying to run the main.py script, but it cannot load pytorch apparently! Here is the error message I got:
root@machine-learning-1:/home/science/style_transfer/PyTorch-Style-Transfer/experiments# python3 main.py --content-image ./images/content/venice-boat.jpg --style-image ./images/21styles/starry_night.jpg --model ./models/21styles.model --output-image ./test.jpg --cuda=0
Traceback (most recent call last):
  File "main.py", line 17, in <module>
    import torch
ModuleNotFoundError: No module named 'torch'
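The ModuleNotFoundError in the comment above simply means that the interpreter which ran main.py cannot import torch. As a generic sanity check (not specific to this droplet), you can ask Python which interpreter is running and whether torch is visible to it:

```python
import importlib.util
import sys

def torch_available():
    """Return True if 'torch' is importable by the current interpreter."""
    return importlib.util.find_spec("torch") is not None

# Which interpreter is this, and can it see torch?
print(sys.executable)
print(torch_available())
```

If this prints False, install torch for that same interpreter (using the pip that belongs to it), then re-run the script.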
I’m having the same issue with a new ML droplet just created. Torch version 0.1.12_2
Hi there, when I tried to execute the block of script in this tutorial in a Python notebook, this chunk of error appeared. Did I miss anything?
CalledProcessError                        Traceback (most recent call last)
<ipython-input-41-8835dc3d4151> in <module>()
      3
      4 # run the command
----> 5 x = subprocess.check_output(cmd, universal_newlines=True)

CalledProcessError: Command '['python3', '/home/science/style_transfer/PyTorch-Style-Transfer/experiments/main.py', 'eval', '--content-image', '/home/science/style_transfer/PyTorch-Style-Transfer/experiments/images/content/venice-boat.jpg', '--style-image', '/home/science/style_transfer/PyTorch-Style-Transfer/experiments/images/21styles/starry_night.jpg', '--model', '/home/science/style_transfer/PyTorch-Style-Transfer/experiments/models/21styles.model', '--output-image', '/home/science/style_transfer/PyTorch-Style-Transfer/experiments/test.jpg', '--cuda=0']' returned non-zero exit status 1
| https://www.digitalocean.com/community/tutorials/how-to-perform-neural-style-transfer-with-python-3-and-pytorch | CC-MAIN-2022-40 | en | refinedweb |
Hi Michael,

> I've just started looking into to porting my existing ImageJ plugins
> to ImageJ2. I have hit a problem in that calls to functions like
> setDisplayRange seem not to take effect until my plugin returns.

Yes, that is expected behavior, and a consequence of how ImageJ2's legacy layer works.

> The following plugin lets me interactively adjust the image display
> range when run with ImageJ1 but with ImageJ2 there is no effect to the
> image until I close the plugIn dialog.

Right. The rule of thumb is that IJ1 plugins which interactively alter an image will not do so in ImageJ2, due to the way the legacy layer works.

> I also note that the ImageJ2 Adjust->Brightness/Contrast (appears new)
> can update the image interactively, whereas the Adjust->WindowLevel
> (looks like existing ImageJ1 UI) does not work.

We rewrote the Brightness/Contrast command specifically due to this issue. The goal is to port all of ImageJ1's interactive plugins to ImageJ2, to avoid this limitation in the legacy layer.

There is an ImageJ2 design page about backwards compatibility and the legacy layer online at:

We may be able to overcome the limitation with interactive plugins to an extent, but it is difficult in general. Another solution which is coming soon is that we are working on a toggle in the Help menu to fully switch back and forth between ImageJ1 and ImageJ2 modes. The legacy layer will translate all data structures upon switch, so you can run your interactive legacy plugins in ImageJ1 mode, then switch back to ImageJ2 when finished.

When you say you want to port your existing ImageJ plugins, do you mean use them in ImageJ2 via the legacy layer? (Which is what you have tried so far.) Or fully update the code to use ImageJ2 data structures? For the latter, the migration will be complete when no more ImageJ1 classes (ij.*) are used, but only ImageJ2 classes (imagej.*). If you decide to go that route, we would be very happy to help with the conversion process.
The plan is to write a plugin porting guide, but we do not have one yet.

Regards,
Curtis

On Wed, Nov 7, 2012 at 4:57 PM, Michael Ellis <michael.ellis at dsuk.biz> wrote:

> I've just started looking into to porting my existing ImageJ plugins to
> ImageJ2. I have hit a problem in that calls to functions like
> setDisplayRange seem not to take effect until my plugin returns.
>
> The following plugin lets me interactively adjust the image display range
> when run with ImageJ1 but with ImageJ2 there is no effect to the image
> until I close the plugIn dialog.
>
> I also note that the ImageJ2 Adjust->Brightness/Contrast (appears new) can
> update the image interactively, whereas the Adjust->WindowLevel (looks like
> existing ImageJ1 UI) does not work.
>
> Any help greatly appreciated!
>
> Example code below
>
> //===========================================================================
>
> package SmartCapture;
>
> import java.awt.AWTEvent;
> import ij.IJ;
> import ij.ImagePlus;
> import ij.gui.DialogListener;
> import ij.gui.GenericDialog;
> import ij.plugin.filter.ExtendedPlugInFilter;
> import ij.plugin.filter.PlugInFilterRunner;
> import ij.process.ImageProcessor;
>
> public class Test_IJ2 implements ExtendedPlugInFilter, DialogListener {
>
> private final static String PLUGIN_NAME = Test_IJ2.class.getSimpleName();
>
> private static int FLAGS = // bitwise or of the following flags:
> DOES_8G | KEEP_PREVIEW; // When using preview, the preview image can be
> kept as a result
>
> ImagePlus imp;
> private double low;
> private double high;
>
> public int setup(String arg, ImagePlus imp) {
>
> if (imp == null) {
> IJ.error(PLUGIN_NAME, "No image.\nOpen or create an image first then run "
> + PLUGIN_NAME);
> return DONE;
> }
>
> this.imp = imp;
>
> return FLAGS;
> }
>
> public int showDialog(ImagePlus imp, String command, PlugInFilterRunner
> pfr) {
>
> assert (imp != null);
> if (imp == null)
> return DONE;
>
> GenericDialog gd = new GenericDialog(PLUGIN_NAME + "...");
> gd.addMessage(imp.getTitle());
> gd.addSlider("low", 0, 255, 0);
> gd.addSlider("high", 0, 255, 255);
> gd.addPreviewCheckbox(pfr, " Preview");
> gd.addDialogListener(this);
> gd.showDialog(); // user input (or reading from macro) happens here
> if (gd.wasCanceled()) // dialog cancelled?
> return DONE;
> return IJ.setupDialog(imp, FLAGS);
> }
>
> public boolean dialogItemChanged(GenericDialog gd, AWTEvent e) {
>
> low = gd.getNextNumber();
> high = gd.getNextNumber();
> IJ.log(String.format("low=%g high=%g\n", low, high));
>
> return true;
> }
>
> public void run(ImageProcessor ip) {
> IJ.log("run called\n");
> imp.setDisplayRange(low, high);
> }
>
> public void setNPasses(int nPasses) {
> IJ.log(String.format("setNPasses(%d)\n", nPasses));
> }
>
> }
>
> _______________________________________________
> ImageJ-devel mailing list
> ImageJ-devel at imagej.net
>
| https://imagej.net/pipermail/imagej-devel/2012-November/001274.html | CC-MAIN-2022-40 | en | refinedweb |
I’m trying to get a pretty print of a dictionary, but I’m having no luck:
>>> import pprint
>>> a = {'first': 123, 'second': 456, 'third': {1:1, 2:2}}
>>> pprint.pprint(a)
{'first': 123, 'second': 456, 'third': {1: 1, 2: 2}}
I wanted the output to be on multiple lines, something like this:
{'first': 123,
 'second': 456,
 'third': {1: 1,
           2: 2}
}
Can pprint do this? If not, then which module does it? I’m using Python 2.7.3.
Use width=1 or width=-1:
In [33]: pprint.pprint(a, width=1)
{'first': 123,
 'second': 456,
 'third': {1: 1,
           2: 2}}
You could convert the dict to json through json.dumps(d, indent=4)
import json
print(json.dumps(item, indent=4))
{
    "second": 456,
    "third": {
        "1": 1,
        "2": 2
    },
    "first": 123
}
If you are trying to pretty print the environment variables, use:
pprint.pprint(dict(os.environ), width=1)
Two things to add on top of Ryan Chou’s already very helpful answer:
- pass the sort_keys argument for an easier visual grok on your dict, esp. if you’re working with pre-3.6 Python (in which dictionaries are unordered)
print(json.dumps(item, indent=4, sort_keys=True))
"""
{
    "first": 123,
    "second": 456,
    "third": {
        "1": 1,
        "2": 2
    }
}
"""
- dumps() will only work if the dictionary keys are primitives (strings, int, etc.)
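To illustrate the caveat in the last point, here is a small sketch: json.dumps() raises a TypeError for keys that are not JSON-compatible (tuples, for example), while pprint handles such dictionaries without complaint:

```python
import json
import pprint

d = {(1, 2): 'point', 'name': 'origin'}

try:
    json.dumps(d, indent=4)
except TypeError as err:
    print('json.dumps failed:', err)  # tuple keys are not valid JSON keys

pprint.pprint(d, width=1)  # pprint has no such restriction
```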
The answers/resolutions are collected from Stack Overflow and are licensed under cc by-sa 2.5, cc by-sa 3.0 and cc by-sa 4.0.
| https://techstalking.com/programming/python/pprint-dictionary-on-multiple-lines/ | CC-MAIN-2022-40 | en | refinedweb |
The Prio qdisc is a simple classful queueing discipline that contains an arbitrary number of classes of differing priority. More...
#include "prio-queue-disc.h"
The Prio qdisc is a simple classful queueing discipline that contains an arbitrary number of classes of differing priority.
Introspection did not find any typical Config paths.
The classes are dequeued in numerical descending order of priority. By default, three Fifo queue discs are created, unless the user provides (at least two) child queue discs.
If no packet filter is installed or able to classify a packet, then the packet is assigned a priority band based on its priority (modulo 16), which is used as an index into an array called priomap. If a packet is classified by a packet filter and the returned value is non-negative and less than the number of priority bands, then the packet is assigned the priority band corresponding to the value returned by the packet filter. Otherwise, the packet is assigned the priority band specified by the first element of the priomap array.
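The classification rule above can be reduced to a small stand-alone function. This is an illustrative sketch, not the actual ns-3 source: the name SelectBand is mine, and a negative filterResult stands in here for "no filter installed / no match" (PF_NO_MATCH):

```cpp
#include <array>

// Sketch of the PrioQueueDisc band-selection rule described above.
// filterResult < 0 means no packet filter classified the packet.
int SelectBand(int filterResult, int packetPriority,
               const std::array<int, 16>& prio2band, int nBands)
{
    if (filterResult < 0)
    {
        // Unclassified: priority modulo 16 indexes the priomap array.
        return prio2band[packetPriority % 16];
    }
    if (filterResult < nBands)
    {
        // Classified with a valid band number: use it directly.
        return filterResult;
    }
    // Classified, but out of range: first element of the priomap.
    return prio2band[0];
}
```

With a hypothetical priomap sending priorities 0-7 to band 1 and 8-15 to band 2, an unclassified packet of priority 12 lands in band 2, while a filter result of 0 overrides the priomap entirely.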
Priomap
No TraceSources are defined for this type.
Size of this type is 1000 bytes (on a 64-bit architecture).
Definition at line 50 of file prio-queue-disc.h.
PrioQueueDisc constructor.
Definition at line 71 of file prio-queue-disc.cc.
References NS_LOG_FUNCTION.
Definition at line 77 of file prio-queue-disc.cc.

Definition at line 185 of file prio-queue-disc.cc.
References ns3::QueueDisc::AddQueueDiscClass(), ns3::ObjectFactory::Create(), ns3::QueueDisc::GetNInternalQueues(), ns3::QueueDisc::GetNQueueDiscClasses(), ns3::Object::Initialize(), NS_LOG_ERROR, NS_LOG_FUNCTION, and ns3::ObjectFactory::SetTypeId().
This function actually extracts a packet from the queue disc.
Implements ns3::QueueDisc.
Definition at line 143 of file prio-queue-disc.cc.
References ns3::QueueDisc::Dequeue(), ns3::QueueDisc::GetNPackets(), ns3::QueueDisc::GetNQueueDiscClasses(), ns3::QueueDisc::GetQueueDiscClass(), NS_LOG_FUNCTION, and NS_LOG_LOGIC.
This function actually enqueues a packet into the queue disc.
Implements ns3::QueueDisc.
Definition at line 103 of file prio-queue-disc.cc.
References ns3::QueueDisc::Classify(), ns3::QueueDisc::GetNPackets(), ns3::QueueDisc::GetNQueueDiscClasses(), ns3::SocketPriorityTag::GetPriority(), ns3::QueueDisc::GetQueueDiscClass(), m_prio2band, NS_ASSERT_MSG, NS_LOG_DEBUG, NS_LOG_FUNCTION, NS_LOG_LOGIC, and ns3::PacketFilter::PF_NO_MATCH.

Definition at line 164 of file prio-queue-disc.cc.
References ns3::QueueDisc::GetNPackets(), ns3::QueueDisc::GetNQueueDiscClasses(), ns3::QueueDisc::GetQueueDiscClass(), NS_LOG_FUNCTION, NS_LOG_LOGIC, and ns3::QueueDisc::Peek().
Get the band (class) assigned to packets with specified priority.
Definition at line 93 of file prio-queue-disc.cc.
References m_prio2band, NS_ASSERT_MSG, and NS_LOG_FUNCTION.
Get the type ID.
Definition at line 57 of file prio-queue-disc.cc.
References m_prio2band, and ns3::TypeId::SetParent().
Initialize parameters (if any) before the first packet is enqueued.
This method is automatically called at simulation initialization time, after the CheckConfig() method has been called.
Implements ns3::QueueDisc.
Definition at line 219 of file prio-queue-disc.cc.
References NS_LOG_FUNCTION.
Set the band (class) assigned to packets with specified priority.
Definition at line 83 of file prio-queue-disc.cc.
References m_prio2band, NS_ASSERT_MSG, and NS_LOG_FUNCTION.
Priority to band mapping.
Definition at line 87 of file prio-queue-disc.h.
Referenced by DoEnqueue(), GetBandForPriority(), GetTypeId(), and SetBandForPriority(). | https://www.nsnam.org/docs/doxygen/classns3_1_1_prio_queue_disc.html | CC-MAIN-2022-40 | en | refinedweb |
How to display an image in JavaFX?
The javafx.scene.image.Image class is used to load an image into a JavaFX application. This supports BMP, GIF, JPEG, and PNG formats.
JavaFX provides a class named javafx.scene.image.ImageView, a node that is used to display the loaded image.
To display an image in JavaFX −
Create a FileInputStream representing the image you want to load.
Instantiate the Image class by passing the input stream object created above as a parameter to its constructor.
Instantiate the ImageView class.
Set the image to it by passing the image object created above as a parameter to the setImage() method.
Set the required properties of the image view using the respective setter methods.
Add the image view node to the group object.
Example
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import javafx.application.Application;
import javafx.scene.Group;
import javafx.scene.Scene;
import javafx.scene.image.Image;
import javafx.scene.image.ImageView;
import javafx.stage.Stage;
public class ImageViewExample extends Application {
   public void start(Stage stage) throws IOException {
      //creating the image object
      InputStream stream = new FileInputStream("D:\\images\\elephant.jpg");
      Image image = new Image(stream);
      //Creating the image view
      ImageView imageView = new ImageView();
      //Setting image to the image view
      imageView.setImage(image);
      //Setting the image view parameters
      imageView.setX(10);
      imageView.setY(10);
      imageView.setFitWidth(575);
      imageView.setPreserveRatio(true);
      //Setting the Scene object
      Group root = new Group(imageView);
      Scene scene = new Scene(root, 595, 370);
      stage.setTitle("Displaying Image");
      stage.setScene(scene);
      stage.show();
   }
   public static void main(String args[]) {
      launch(args);
   }
}
Output
| https://www.tutorialspoint.com/how-to-display-an-image-in-javafx | CC-MAIN-2022-40 | en | refinedweb |
The body of a user-defined function can contain a return statement. This statement returns control to the rest of the Octave program. It looks like this:
return
Unlike the return statement in C, Octave’s return statement cannot be used to return a value from a function. Instead, you must assign values to the list of return variables that are part of the function statement. The return statement simply makes it easier to exit a function from a deeply nested loop or conditional statement.
Here is an example of a function that checks to see if any elements of a vector are nonzero.
function retval = any_nonzero (v)
  retval = 0;
  for i = 1:length (v)
    if (v (i) != 0)
      retval = 1;
      return;
    endif
  endfor
  printf ("no nonzero elements found\n");
endfunction
Note that this function could not have been written using the break statement to exit the loop once a nonzero value is found without adding extra logic to avoid printing the message if the vector does contain a nonzero element.
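For comparison, here is the break-based variant alluded to above (our illustration, not part of the original manual); the final if is exactly the extra logic needed to avoid printing the message when a nonzero element was found:

```octave
function retval = any_nonzero (v)
  retval = 0;
  for i = 1:length (v)
    if (v (i) != 0)
      retval = 1;
      break;
    endif
  endfor
  if (! retval)
    printf ("no nonzero elements found\n");
  endif
endfunction
```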
When Octave encounters the keyword return inside a function or script, it returns control to the caller immediately. At the top level, the return statement is ignored. A return statement is assumed at the end of every function definition.
| https://docs.octave.org/v4.0.3/Returning-from-a-Function.html | CC-MAIN-2022-40 | en | refinedweb |
I have a feeling I'm doing something silly here.
Trying to read a rotary pot with Arduino and send the data to Processing. The Arduino serial monitor displays lovely integers exactly as I expect, however when I try to read from the same serial port in Processing, all I see is 0's.
This happens whether I close Arduino + the serial monitor or not. However I've noticed when I run the Processing sketch without closing the serial monitor it throws up junk characters:
d¤ª¤ªdjdª¤ªdjd¤ª¤ªPªdjddj¤dªj¤dªjPdª
But still zeroes in the Processing output window.
The sketches are simplified versions of the AnalogInput/SimpleRead examples in Arduino/Processing respectively - sorry but I couldn't copy the code in forum format for some reason.
Arduino
int sensorPin = 2;    // select the input pin for the potentiometer
int ledPin = 13;      // select the pin for the LED
int sensorValue = 0;  // variable to store the value coming from the sensor

void setup() {
  // declare the ledPin as an OUTPUT:
  pinMode(ledPin, OUTPUT);
  Serial.begin(38400);
}

void loop() {
  // read the value from the sensor:
  sensorValue = analogRead(sensorPin);
  Serial.print(sensorValue/8);
  Serial.println();
}
Processing
import processing.serial.*;

Serial myPort;  // Create object from Serial class
int val;        // Data received from the serial port

void setup() {
  String portName = "/dev/cu.usbserial-A7006Qyj";
  myPort = new Serial(this, portName, 38400);
}

void draw() {
  println(val);
}
| https://forum.arduino.cc/t/noob-question-serial-input-with-processing/39355 | CC-MAIN-2022-40 | en | refinedweb |
lego light brick part number
newcally lashes false eyelashes cat eyes
failed while removing virtual ethernet switch connections
how to sell nft on solana
printf double
foam wood beams
ps1 texture pack
10 weeks pregnant twins ultrasound
cyberpunk 2077 rogue romance
roblox regions
desk calendar 2021
buddha with many arms meaning
second order low pass filter calculator
sudshare coupon
lewmar 44 winch parts
cambridge o level threshold november 2021
2006 international dt466 for sale
costco gazebo 16 x 20
sherco 300 sef dyno
the sisters caf a novel
press a numeric key to select an alternative bootloader
led deck lights
desktop faraday cage
9x18 makarov ammo near me
trading company catalog
marrying a cuban woman
obsolete stevens gun parts
how to make a voodoo doll with a sock
create servicenow ticket from teams
inverter stromerzeuger 4000 watt
underdark wow
mtd backup openwrt
i love you like no otter
upsimples baby changing bag
sabertrio retention screw
haris obit
retrieving your erc20 token sent to fantom opera wallet
bmw e39 motortuning
how long will it take the object to reach the maximum height
tailwind range slider react
best nanny agency chicago
how to restart note 20 ultra
subaru outback accessories 2022
hyperx pulsefire core gaming hx mc004b
pegasus 5e mount
pepsico termination policy
the beginning of the
from the mixed up files of mrs basil summary
plant city funeral homes
google colab command line
lx27amie prodigieuse tome
street fighter 2
swarovski tennis necklace set
pagalworld marathi movie download 2022
horace 101 dalmatians
menards 4x4x12 treated post
monkey mod manager
mens flexfit baseball caps
telegram this channel cannot be displayed because it was used to spread android
kato n gauge trains
siemens onboarding process
fischl elysian realm signets
ex council land rovers for sale
geeekpi raspberry pi mini tower kit
scanner glass replacement
phase leon tattoo
electrical panel box door replacement
12 baby ferrets for sale
i spy extreme
little rocky run neighbor to neighbor
old rustic furniture for sale
heart disease prediction dataset kaggle
enclosed hotel liminal space
podman auth json example
gs300 tps calibration
guitar template pdf
serendipity corn jung
silicon valley gilfoyle
kayak mods for fishing
olivia and everett weston novel
foggy weather pokemon go coordinates
html table header vertical text
velogk warp car
chevy cobalt anti theft bypass
tappan lake hunting map
rpg maker mv battle on map
wioa approved training programs
when levy was growing up why was she indifferent to jamaica
naturewise vitamin d3 5000iu
best palm vape battery
bts tourism impact
9mm drum magazine
amazon basics hdmi to dvi adapter cable
pre and post processing in ultrasound
teknoparrot rom pack
quickbooks pos crack serial keygen
wrath of the dragon king dragonwatch book
ikea billy bookcase hole plugs
screen slurp nick and em
lspdfr backup
tiktok roblox song id
having an attractive figure synonym
i80 road conditions wyoming
psp metrics practice test
jimmyhere youtooz
home assistant history
shanghai expo 2022
mobile device id tracking
nonce setter iphone 6s
street sesh download
sandisk ultra 64gb class 10
how to turn on rear speakers in chevy traverse
mercedes sprinter fault code p0299
banned horror movies on netflix
casio oceanus review
major soft drink companies
fort pierce car auction
underrated filipino movies
moab 2022 events
softbills and finches for sale
what is dyscalculia ielts reading answers pdf
how long is lobsterfest at red lobster 2022
ms lottery winners 2022
04874 vw code
pine script variables
stata graphs cheat sheet
hey yoo hy760
koyomi calendar
a brand new ending stay book
rensa filtration las vegas
frank sinatra italian love songs
sbn frances and friends
python replace none with empty string in json
vi and caitlyn ao3
2002 chevy tracker solid axle swap
hamsa in islam
komga download
2021 honda f6b for sale
tu mere saath saath mp3 song download pagalworld
song lyrics about child growing up
how to make a json file for fnf psych engine
dungeon props
wotlk best pvp healer
m35a3 bobbed deuce for sale
lesbian strapon sex free online videos
famous female convicts on the first fleet
quickie microfiber mop
the secret doctrine summary pdf
toyota tundra secondary air injection pump recall
digitrax dcs50 manual
brute force android pin raspberry pi
international b414 reviews
hydraulic oil for mahindra tractor
firmware lg tv usb
radiomaster rc
scratch training for teachers free
the king in love viu
ariens 04743700 friction wheel
unifi high tcp latency
general lee rap song
chernobyl hbo
asic miner repair us
bella poarch born
econet en7528
german surgical instruments manufacturers
fontainebleau doral apartments for rent
bios image aether sx2
chevrolet blazer classic
mustang skid steer wheel bearing replacement
2 point perspective drawing
kayla bradford arrest mcminn county
chemical element i crossword clue
mt4 clock
chicco keyfit 35 stroller adapter
a259 roadworks
baseball tournaments in florida 2022
how big is the meat industry
cyberpunk 2077 shotgun list
how to clean sticky mechanical keyboard switches
verizon installation appointment
excel convert column to array
splunk replication factor
reply with video instagram
mtg card database
analog circuit design interview questions
adopt me script arceus x 2022
feit floodlight camera app
dirt bike rally fairing
smart contract bytecode
m8 bolt 3d model
hell baby 2013 ok ru
stfc best burning crew
key largo camping
livelynine brushed nickel vinyl peel
canoptek scarab swarm paint
pelican bass raider 10e for sale
dell inspiron srs premium sound laptop specs
unsent message to victor
skycut d24 manual
irs manual pdf
petpet gif generator
48 inch gray bathroom vanity with top
building project management courses
head gasket material sheet
stbemu code unlimited
anime sad girl
come holy spirit let your fire fall chords
analog multiplexer chip
smok novo 2 hacks
benefit advisor usa
francesca love is blind instagram
cms telehealth billing guidelines 2021
white bed frame with storage
invicta menx27s 8928 pro diver collection
bennington triangle rocks
citric acid powder in eye
zte router lan not working
dm556 vs dm542 vs tb6600
neural dsp next plugin
makeup cc folder sims 4
why do cheaters lie when caught
inatrade laporan realisasi impor
coding interview feedback examples
terraform locals file
software center
nvidia a40 vs v100
mother and daughter look like sisters
pasco county judges
stripper girl party fuck her face
gigabyte motherboard bios
tinder nz online
wii sports club download code
concerts in cairo 2022
webster parish arrests 2022
poem paraphrasing exercises with answers
macallan classic cut series
black and pink house santa monica
american bulldog fort worth
gtl prepaid settlement
farm manager 2022 review
viper mini scroll wheel replacement
1997 mitsubishi eclipse gsx for sale
roblox captcha impossible
list of soft and hard th words
neat handwriting font
freightliner spn 524042
mcgee creek lake level
eventafterallrender fullcalendar example
unit test typed httpclient
wing rib and spar design
evony war horn
msi driver camera
roblox dragon ball z final stand script pastebin
unity addressables missing
ross bike vintage
trailer sales auburn maine
octapharma online screening
aisi queens highlights bob wigs
siemens salary negotiation
sig sauer x ray3 day night suppressor sights
iowa pbs advance magazine
mckesson uphv3036 staydry ultra underpads 30quot
saza hub latest version
lexington medical center directory
city of largo permits
enfj female rarity
florida summer 2022 weather predictions
lidl quiz
rotax max won t idle
spotify album cover download
cortana body actor
browning catalogue 2021
new trex arms sidecar
short nights of the shadow catcher
openwrt x86 reddit
masters swimming columbia mo
kubota bx25 hood for sale
demolition derby wisconsin
deluge socks5
serovital advanced for women
albuquerque gangs list
dark academia presets free
java io filenotfoundexception permission denied android
best couples outfits
have you ever seen the rain lyrics
kraken v2 vs v3
drag brunch philadelphia
is picrew safe
x a novel summary
number line to 50 negative and positive
carrier cased coil dimensions
1992 lexus ls400 common problems
m1 garand airsoft gbb
394 barrett drive stateline nv
stm32 tcp example
portable air conditioner vertical window
tabc certificate inquiry
audio technica ath g1wl wireless review
drive file stream menu drive file stream
stellaris no outliner
shanghai nampa
used ford prices
david cassidy children
huge black nipples
anime character name start with n
emerson vibration analysis chart
discord stream quality
bean hopper breville
educational insights brainbolt
qg 14k turkey
successful libra woman
20 lb propane tank thread size
themes for android download
bazi season month
quilted northern ultra plush vs soft and strong
security breach
esphome serial bridge
wireplumber guide
python int to 32 bit binary
ansible playbook example github
german dagger for sale
free tube porn young couples
does herpes go away
intel rst samsung nvme driver
swizzin apps
aquatic shower stall installation
young justice fanfiction oc experiment
is chump change offensive
dkms install arch
x96 mini tv box firmware
len rome
black radiance true complexion creme contour palette
free wrestling simulator
queen battlewinner
sling weapon speed
house wytch
little dinosaur ten
train car for sale
dell tower server t40
highland council pensions phone number
piano keyboard printable pdf
iphone 13 airplane mode
json bas
1967 mustang project car
matic hack
khubani aerocity
flipkart mobile offers
twitch ban asmr streamers
spc camber bolts frs
peterbilt ignition switch diagram
where can i watch helluva boss season 2
yum disable ssl
honda ep3 performance parts
8 sub enclosure
kaerek model homes
baby video monitor reviews
are there wolves in west virginia tygart valley
sims 4 twisted wonderland uniform
yoruba lesson note for primary 5
cuddle nyc
x570 tomahawk slow ethernet
pathfinder lost omens world guide pdf
ldconfig manually link individual libraries
create profile in oracle 12c
verkerk pigeons for sale
eurythmics discography zip
lunch boxes insulated
flyer design size in illustrator
slope of log log plot matlab
lodges for sale cotswolds
biggest construction companies in usa
2009 les paul standard
blue def def002 2pk
loscoe facebook
apex sharing in salesforce
conviasa vuelos
ultimate god cyoa interactive
silk fabric for embroidery
arma 3 ace commands
chiappa little badger scope
bass cat puma sts
ramshaw real estate login
overcooked all you can eat crossplay
vekkia rechargeable amber reading
universal basic income a universally bad idea
chmod permission denied
japan wet open pussy
desoto homes for sale 75115
driven southern alphas book 1
acnh christmas designs
berkeley cs 162
core sound fishing report
binance deposit suspended
m8 to m10 stud
generation zero weapon replacer
who owns birchwood foods
silverado bose speakers
curvy girls canx27t date billionaires
koke transfermarkt
cuddling with a guy who isn t your boyfriend
bulldogs for sale on hoobly
amd ryzen 3 gaming laptop
business analysis of tesla
sherlock gnomes full movie
tai hao vs oem
chevelle for sale near alabama
5e incense cost
webflow automatic slider
carnival ride model
upcoming auctions in iowa
rpm remove package and all dependencies
serotonin vasodilation | https://cs-advert.pl/1312/06/2006.html | CC-MAIN-2022-40 | en | refinedweb |
With Docker containers and Kubernetes, the IT industry is moving forward even faster now, and Apache Camel is evolving too in order to ease developers' work, as it always has. To get an idea of the kind of tools you would need to develop and run applications on Docker and Kubernetes, check out the Fabric8 project, and specifically tools such as the Docker Maven plugin, Kubernetes CDI extension, Kubernetes Java client, and Arquillian tests for Kubernetes.
<camelContext id="camel" xmlns="">
  <route>
    <from uri="direct:start"/>
    <loadBalance>
      <circuitBreaker threshold="2" halfOpenAfter="1000">
        <exception>MyCustomException</exception>
      </circuitBreaker>
      <to uri="mock:result"/>
    </loadBalance>
  </route>
</camelContext>

With Camel 2.18, a new Hystrix-based Circuit Breaker implementation became available:
public class ClientRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("timer:trigger?period=1s")
            .log(" Client request: ${body}")
            .hystrix()
                .to("")
                // use onFallback() to provide a response message immediately:
                .transform().simple("Fallback ${body}")
                // use onFallbackViaNetwork() when there is a 2nd service call
                .onFallbackViaNetwork()
                    .to("")
            .end()
            .log("Client response: ${body}");
    }
}

Don't forget that to create a truly resilient application, you need more than Hystrix. Hystrix will do bulkheading at the thread pool level, but that is not enough if you don't apply the same principle at the process, host and physical machine levels. To create a resilient distributed system, you will also need to use Retry, Throttling, Timeout… and other good practices, some of which I have described in the Camel Design Patterns book.
To get a hands-on feeling for the new pattern, check out the example, and then start defending your Camel-based microservices with Hystrix.
| https://www.javacodegeeks.com/2016/06/create-resilient-camel-applications-hystrix-dsl.html | CC-MAIN-2022-40 | en | refinedweb |
Welcome to this tutorial on the PriorityQueue class of the Java collections framework. In this lesson, we'll learn how to use this powerful tool through a series of examples. By the end of this tutorial, you'll be an expert at using PriorityQueues in your own Java programming projects. So let's get started!
If you need to process objects based on priority rather than FIFO, then a PriorityQueue is what you’re looking for. It’s implemented in Java using the Queue interface and internally uses a Binary Heap.
Priority Queue in Java
A PriorityQueue is perfect when you need to process objects based on priority. As we all know, a Queue follows the First-In-First-Out algorithm, but in a PriorityQueue, items are retrieved according to their priorities. By default, the priority is determined by objects' natural ordering, but this can be overridden by a Comparator provided at queue construction time.
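For example, passing a Comparator at construction time turns the queue into a max-queue that hands back the largest element first. The class name and values below are my own illustration, not from the tutorial:

```java
import java.util.Comparator;
import java.util.PriorityQueue;

public class MaxQueueDemo {
    // Builds a queue whose head is the LARGEST element, by supplying
    // a Comparator that reverses the natural (ascending) ordering.
    static PriorityQueue<Integer> maxQueue() {
        PriorityQueue<Integer> pq = new PriorityQueue<>(Comparator.reverseOrder());
        pq.add(10);
        pq.add(30);
        pq.add(20);
        return pq;
    }

    public static void main(String[] args) {
        PriorityQueue<Integer> pq = maxQueue();
        while (!pq.isEmpty()) {
            // poll() now retrieves in descending order: 30, 20, 10
            System.out.println(pq.poll());
        }
    }
}
```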
A priority queue is a data structure that allows users to insert elements in any order but retrieve them in a sorted manner. Priority queues use a data structure called a binary heap to store elements. A binary heap is a self-organizing binary tree which adjusts itself each time elements are added or removed from it. In the case of a min-heap, the smallest element is always kept at the very top, regardless of the order in which elements were inserted.
It’s important to note that priority queues (binary heaps) don’t necessarily store elements in absolute sorted order. This is for efficiency in terms of speed of insertion and retrieval. I’ll show this with an example by iterating through a priority queue.
Creating PriorityQueue
To create a priority queue in Java, we must import the java.util.PriorityQueue class. Once we have imported it, we can create a priority queue as follows:
PriorityQueue<Integer> numbers = new PriorityQueue<>();
Methods Of PriorityQueue:
The Priority Queue class provides a way to manage data so that the most important items are always processed first. Let’s take a look at how to perform some of the most common operations on this type of data structure.
Adding Elements:
To add an element to a priority queue, we can use the add() method. The elements in the queue are stored according to their priority, with lower priorities being stored first by default.
package com.SoftwareTestingO.collections;

import java.util.PriorityQueue;

public class PriorityQueueAdd {
    public static void main(String[] args) {
        PriorityQueue<String> pqueue = new PriorityQueue<>();
        pqueue.add("Software");
        pqueue.add("Testingo");
        pqueue.add("Blog");
        System.out.println(pqueue);
    }
}
Removing Elements:
You can remove an element from a priority queue using the remove() method. If the queue contains multiple occurrences of the object, the first occurrence is removed. You can also use the poll() method to remove the head of the queue and return it.
package com.SoftwareTestingO.collections;

import java.util.PriorityQueue;

public class PriorityQueueRemove {
    public static void main(String[] args) {
        PriorityQueue<String> pqueue = new PriorityQueue<>();
        pqueue.add("Software");
        pqueue.add("Testingo");
        pqueue.add("Blog");
        System.out.println("Initial PriorityQueue " + pqueue);

        // Remove element
        pqueue.remove("Blog");
        System.out.println("After Remove - " + pqueue);
        System.out.println("Poll Method - " + pqueue.poll());
        System.out.println("Final PriorityQueue - " + pqueue);
    }
}
Note that the sequence of items stored inside the priority queue is not always in sorted order, but the items are always retrieved in sorted order.
Accessing the elements:
Since a queue follows the First In First Out principle, we can only access the head of the queue. To access elements from a priority queue, we can use the peek() method.
package com.SoftwareTestingO.collections;

import java.util.PriorityQueue;

public class PriorityQueueAccess {
    public static void main(String[] args) {
        PriorityQueue<String> pqueue = new PriorityQueue<>();
        pqueue.add("Software");
        pqueue.add("Testingo");
        pqueue.add("Blog");
        System.out.println(pqueue);

        // Accessing the head element
        String element = pqueue.peek();
        System.out.println("Accessed Element: " + element);
    }
}
Iterating the PriorityQueue:
There are multiple ways to traverse a PriorityQueue. The most common way is converting the queue to an array and using a for loop. However, there is also a built-in iterator that can be used.
package com.SoftwareTestingO.collections;

import java.util.Iterator;
import java.util.PriorityQueue;

public class PriorityQueueIterating {
    public static void main(String[] args) {
        PriorityQueue<String> pqueue = new PriorityQueue<>();
        pqueue.add("Software");
        pqueue.add("Testingo");
        pqueue.add("Blog");
        System.out.println(pqueue);

        // Iterating the elements
        Iterator<String> iterator = pqueue.iterator();
        while (iterator.hasNext()) {
            System.out.print(iterator.next() + " ");
        }
    }
}
Conclusion:
In this Java queue tutorial, we learned how to use the PriorityQueue class, which stores elements in their natural ordering by default, or in a custom ordering that you specify with a comparator.
| https://www.softwaretestingo.com/priorityqueue-in-java/ | CC-MAIN-2022-40 | en | refinedweb |
If Javascript, an old language with so many flaws, could evolve like it did, Python could learn from it and step out of its ivory tower.
This isn’t a targeted attack on Python. It is a constructive opinion from a programmer who had started his career and spent years working with it. So brace yourself — Python is evolving slowly and could use the level of improvement Javascript has had.
Javascript has come a long way since being the most hated but widely-used scripting language of the web that was created in less than a month. With ES6 and the sister language Typescript (which arguably helped propel the JS ecosystem to new heights), Javascript is no longer an ugly language to be looked down on. For instance, it has changed dramatically from a callback-based language into one built on promises and async-await syntax, in what seemed like a blink of an eye compared to the rate at which Python moved to reach unanimous support for v3.
A very impressive deed of Javascript, in my opinion, is its move toward functional programming. The syntax is clean, learning from the great functional languages like ML. This alone sprouted so many great improvements to the ecosystem such as React, ReasonML, and more. Let’s admit it, there’s a lot to love about modern Javascript compared to Python 3.
Until recently, Python had always been under the tight moderation of its creator, Guido van Rossum. While that had its good parts, the downside is the language's slow development, bottlenecked by its creator's resistance to change.
Lambda what?
For example, Guido van Rossum's distaste for lambdas was well-known. Perhaps the goal of Python's half-baked lambda syntax, which I've found clumsy and useless to this day, has always been to keep users from using it in the first place.
x = lambda a : a + 10
There is nothing wrong with this syntax, since many Lisp dialects also use the keyword lambda to initialize an anonymous function. However, being a whitespace-significant language and not expression-based, Python makes applying lambdas a cumbersome and ambiguous experience:
x = lambda a : a + 10 (lambda b : b * 2)
Now who says Python is still easy to read? This syntax defeats the very purpose of lambdas, which is expressiveness and clarity. It appears ambiguous to me. Compare it with the anonymous function syntax of ES6:
let x = (c => c + 2)(a => a + 10)(b => b * 2)(y);
Although not the cleanest (the cleanest lambda syntax goes to Lisp), the applications are clear, considering you know it is right-associated. The only thing Javascript is missing is piping syntax, which would have made it an even more expressive programming tool.
let x = y |> (c => c + 2) |> (a => a + 10) |> (b => b * 2)
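Python has no piping syntax either, but for what it's worth, it can be emulated with a small helper. The pipe function below is my own sketch, not part of the standard library:

```python
from functools import reduce

def pipe(value, *funcs):
    """Thread value through funcs, left to right, like y |> f |> g."""
    return reduce(lambda acc, f: f(acc), funcs, value)

# Mirrors the |> example: y |> (c + 2) |> (a + 10) |> (b * 2), with y = 3
x = pipe(
    3,
    lambda c: c + 2,
    lambda a: a + 10,
    lambda b: b * 2,
)
print(x)  # ((3 + 2) + 10) * 2 == 30
```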
map(function, iterable, …)
Admit it, you either hate or feel meh over mapping iterables in Python, or you might not even use it.
Most languages, including Javascript, treat the mapping function as a method of the iterable or iterator. This removes the iterable from the argument list and makes the call much clearer, with the callback function as the argument.
let nums = [1, 2, 3, 4].map(n => n * 2);
Here is another map function in Rust:
let nums = [1, 2, 3, 4].iter().map(|n| n * 2);
Some functional languages such as Haskell and Ocaml use the module:function approach, which is similar to what Python does. However, their expression-based syntax and unambiguous scoped variable binding make the code readable and easy to get right. Here is a sample of a map function in Ocaml:
let nums = let double n = n * 2 in List.map double [1; 2; 3; 4]
You need to do this in Python:
double = lambda n : n * 2 nums = map(double, [1, 2, 3, 4])
You may think Python looks way cleaner here, which you may be right. But consider most use cases where you chain operators.
let nums = [1, 2, 3, 4]
  .map(n => n * 2)
  .filter(n => n > 2)
  .reduce((acc, current) => acc + current);
In Ocaml, it’s just magical:
let nums = [1; 2; 3; 4]
  |> List.map (fun n -> n * 2)
  |> List.filter (fun n -> n > 2)
  |> List.fold_left ( + ) 0
I’ll leave you to write an equivalent in Python (HINT: It is impossible to read!)
Decorators
Decorators are my pet peeves. To me, they were the inflection point away from the Python mantra of "there's only one way to do things" and the promise of a transparent, non-magical language.
def hello(func):
    def inner():
        print("Hello ")
        func()
    return inner

def name():
    print("Alice")

# `hello` is a decorator function
obj = hello(name)
obj()  # prints "Hello Alice"

The short-handed version for this is:

@hello
def name():
    print("Alice")

if __name__ == '__main__':
    name()  # prints "Hello Alice"
Decorators "hide" essential logic and can completely change the behavior of the "host" function they decorate, which is nothing short of magical to more than half of all Python users today. Albeit useful, it still puzzles me to this day why Python developers decided to adopt decorators.
Somehow, Javascript has found its niche in everything. However, its ES6 improvements and Typescript enhancements have made it even more enticing, even to users who dislike it (myself, for one, became a serious Javascript user after ES6). Python similarly holds a very big niche, including web programming, data science, and machine learning, which warrants its ivory tower. However, today's tools are evolving fast, and not changing seems like standing still, waiting for a new kid on the block to beat it in the home court.

If you like my writing, please follow me on Twitter to get more of my technical opinions. I'm also on Medium and would love to see you there!
Top comments (1)
Couldn't agree more. I often come up with a nice composed algorithm that I wind up partially rewriting into imperative form in python.
| https://dev.to/pancy/python-should-learn-from-javascript-2jp7 | CC-MAIN-2022-40 | en | refinedweb |
April 6, 2021
Welcome to Django 3.2!
These release notes cover the new features, as well as some backwards incompatible changes you’ll want to be aware of when upgrading from Django 3.1 or earlier. We’ve begun the deprecation process for some features.
See the How to upgrade Django to a newer version guide if you’re updating an existing project.
Django 3.2 is designated as a long-term support release. It will receive security updates for at least three years after its release. Support for the previous LTS, Django 2.2, will end in April 2022.
Django 3.2 supports Python 3.6, 3.7, 3.8, 3.9, and 3.10 (as of 3.2.9). We highly recommend and only officially support the latest release of each series.
Automatic AppConfig discovery¶
Most pluggable applications define an AppConfig subclass in an apps.py submodule. Many define a default_app_config variable pointing to this class in their __init__.py.

When the apps.py submodule exists and defines a single AppConfig subclass, Django now uses that configuration automatically, so you can remove default_app_config.
default_app_config made it possible to declare only the application's path in INSTALLED_APPS (e.g. 'django.contrib.admin') rather than the app config's path (e.g. 'django.contrib.admin.apps.AdminConfig'). It was introduced for backwards-compatibility with the former style, with the intent to switch the ecosystem to the latter, but the switch didn't happen.

With automatic AppConfig discovery, default_app_config is no longer needed. As a consequence, it's deprecated.
See Configuring applications for full details.
Customizing type of auto-created primary keys¶
When defining a model, if no field in a model is defined with primary_key=True, an implicit primary key is added. The type of this implicit primary key can now be controlled via the DEFAULT_AUTO_FIELD setting and AppConfig.default_auto_field attribute. No more needing to override primary keys in all models.

Maintaining the historical behavior, the default value for DEFAULT_AUTO_FIELD is AutoField. Starting with 3.2 new projects are generated with DEFAULT_AUTO_FIELD set to BigAutoField. Also, new apps are generated with AppConfig.default_auto_field set to BigAutoField. In a future Django release the default value of DEFAULT_AUTO_FIELD will be changed to BigAutoField.
To avoid unwanted migrations in the future, either explicitly set DEFAULT_AUTO_FIELD to AutoField:

DEFAULT_AUTO_FIELD = 'django.db.models.AutoField'

or configure it on a per-app basis:

from django.apps import AppConfig

class MyAppConfig(AppConfig):
    default_auto_field = 'django.db.models.AutoField'
    name = 'my_app'

or on a per-model basis:

from django.db import models

class MyModel(models.Model):
    id = models.AutoField(primary_key=True)
In anticipation of the changing default, a system check will provide a warning if you do not have an explicit setting for DEFAULT_AUTO_FIELD.

When changing the value of DEFAULT_AUTO_FIELD, migrations for the primary key of existing auto-created through tables cannot be generated currently. See the DEFAULT_AUTO_FIELD docs for details on migrating such tables.
Functional indexes¶
The new *expressions positional argument of Index() enables creating functional indexes on expressions and database functions. For example:

from django.db import models
from django.db.models import F, Index, Value
from django.db.models.functions import Lower, Upper

class MyModel(models.Model):
    first_name = models.CharField(max_length=255)
    last_name = models.CharField(max_length=255)
    height = models.IntegerField()
    weight = models.IntegerField()

    class Meta:
        indexes = [
            Index(
                Lower('first_name'),
                Upper('last_name').desc(),
                name='first_last_name_idx',
            ),
            Index(
                F('height') / (F('weight') + Value(5)),
                name='calc_idx',
            ),
        ]

Functional indexes are added to models using the Meta.indexes option.
pymemcache support¶
The new django.core.cache.backends.memcached.PyMemcacheCache cache backend allows using the pymemcache library for memcached. pymemcache 3.4.0 or higher is required. For more details, see the documentation on caching in Django.
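Opting in looks like any other cache configuration; a minimal settings sketch, where the LOCATION value is a placeholder for your own memcached address:

```python
# settings.py
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.PyMemcacheCache',
        'LOCATION': '127.0.0.1:11211',  # placeholder memcached host:port
    }
}
```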
New decorators for the admin site¶

The new display() decorator allows for easily adding options to custom display functions that can be used with list_display or readonly_fields.

Likewise, the new action() decorator allows for easily adding options to action functions that can be used with actions.

Using the @display decorator has the advantage that it is now possible to use the @property decorator when needing to specify attributes on the custom method. Prior to this it was necessary to use the property() function instead after assigning the required attributes to the method.
Using decorators has the advantage that these options are more discoverable as they can be suggested by completion utilities in code editors. They are merely a convenience and still set the same attributes on the functions under the hood.
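To make "set the same attributes" concrete, here is a simplified plain-Python sketch of an @display-style decorator. This is not Django's actual implementation; it only shows the idea of attaching the long-standing admin attributes (short_description, admin_order_field, boolean) to a function:

```python
def display(*, description=None, ordering=None, boolean=None):
    """Simplified stand-in for django.contrib.admin.display: it only
    attaches metadata attributes that the admin already understands."""
    def decorator(func):
        if description is not None:
            func.short_description = description
        if ordering is not None:
            func.admin_order_field = ordering
        if boolean is not None:
            func.boolean = boolean
        return func
    return decorator

@display(description='Full name', ordering='last_name')
def full_name(obj):
    return f"{obj.first_name} {obj.last_name}"

print(full_name.short_description)  # Full name
print(full_name.admin_order_field)  # last_name
```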
Minor features¶

django.contrib.admin¶
ModelAdmin.search_fields now allows searching against quoted phrases with spaces.
Read-only related fields are now rendered as navigable links if target models are registered in the admin.
The admin now supports theming, and includes a dark theme that is enabled according to browser settings. See Theming support for more details.
ModelAdmin.autocomplete_fields now respects ForeignKey.to_field and ForeignKey.limit_choices_to when searching a related model.
The admin now installs a final catch-all view that redirects unauthenticated users to the login page, regardless of whether the URL is otherwise valid. This protects against a potential model enumeration privacy issue.
Although not recommended, you may set the new AdminSite.final_catch_all_view to False to disable the catch-all view.
django.contrib.auth¶
The default iteration count for the PBKDF2 password hasher is increased from 216,000 to 260,000.
The default variant for the Argon2 password hasher is changed to Argon2id.
memory_cost and parallelism are increased to 102,400 and 8 respectively to match the argon2-cffi defaults.
Increasing the
memory_cost pushes the required memory from 512 KB to 100
MB. This is still rather conservative but can lead to problems in memory
constrained environments. If this is the case, the existing hasher can be
subclassed to override the defaults.
The default salt entropy for the Argon2, MD5, PBKDF2, SHA-1 password hashers is increased from 71 to 128 bits.
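For intuition on those numbers (a back-of-the-envelope calculation, not Django source): each character of a salt drawn from a 62-symbol alphanumeric alphabet contributes log2(62), roughly 5.95 bits, so a 12-character salt carries about 71 bits and a 22-character salt about 131, clearing the 128-bit target:

```python
import math

bits_per_char = math.log2(62)  # alphanumeric alphabet: a-z, A-Z, 0-9

print(round(12 * bits_per_char))  # 71
print(round(22 * bits_per_char))  # 131
```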
django.contrib.contenttypes¶
The new
absolute_max argument for
generic_inlineformset_factory()
allows customizing the maximum number of forms that can be instantiated when
supplying
POST data. See Limiting the maximum number of instantiated forms for more details.
The new
can_delete_extra argument for
generic_inlineformset_factory()
allows removal of the option to delete extra forms. See
can_delete_extra for more information.
django.contrib.gis¶
The
GDALRaster.transform() method now supports
SpatialReference.
The
DataSource class now supports
pathlib.Path.
The
LayerMapping class now supports
pathlib.Path.
django.contrib.postgres¶
The new
ExclusionConstraint.include attribute allows creating
covering exclusion constraints on PostgreSQL 12+.
The new
ExclusionConstraint.opclasses attribute allows setting
PostgreSQL operator classes.
The new
JSONBAgg.ordering attribute determines the ordering of the
aggregated elements.
The new
JSONBAgg.distinct attribute determines if aggregated values
will be distinct.
The
CreateExtension operation
now checks that the extension already exists in the database and skips the
migration if so.
The new
CreateCollation and
RemoveCollation operations
allow creating and dropping collations on PostgreSQL. See
Managing collations using migrations for more details.
Lookups for
ArrayField now allow
(non-nested) arrays containing expressions as right-hand sides.
The new
OpClass()
expression allows creating functional indexes on expressions with a custom
operator class. See Functional indexes for more details.
django.contrib.sitemaps¶
The new Sitemap attributes
alternates,
languages and
x_default allow
generating sitemap alternates to localized versions of your pages.
Third-party database backends can now skip or mark as expected failures
tests in Django’s test suite using the new
DatabaseFeatures.django_test_skips and
django_test_expected_failures attributes.
The new
no_append_slash() decorator
allows individual views to be excluded from
APPEND_SLASH URL
normalization.
Custom
ExceptionReporter subclasses can now
define the
html_template_path
and
text_template_path
properties to override the templates used to render exception reports.
The new
FileUploadHandler.upload_interrupted()
callback allows handling interrupted uploads.
The new
absolute_max argument for
formset_factory(),
inlineformset_factory(), and
modelformset_factory() allows
customizing the maximum number of forms that can be instantiated when
supplying
POST data. See Limiting the maximum number of instantiated forms for more details.
The new
can_delete_extra argument for
formset_factory(),
inlineformset_factory(), and
modelformset_factory() allows
removal of the option to delete extra forms. See
can_delete_extra for more information.
BaseFormSet now reports a user-facing error,
rather than raising an exception, when the management form is missing or has
been tampered with. To customize this error message, pass the
error_messages argument with the key
'missing_management_form' when
instantiating the formset.
The
week_format attributes of
WeekMixin and
WeekArchiveView now support the
'%V' ISO 8601 week format.
loaddata now supports fixtures stored in XZ archives (
.xz) and
LZMA archives (
.lzma).
dumpdata now can compress data in the
bz2,
gz,
lzma,
or
xz formats.
makemigrations can now be called without an active database
connection. In that case, the check for a consistent migration history is
skipped.
BaseCommand.requires_system_checks now supports specifying a list of
tags. System checks registered in the chosen tags will be checked for errors
prior to executing the command. In previous versions, either all or none
of the system checks were performed.
Support for colored terminal output on Windows is updated. Various modern terminal environments are automatically detected, and the options for enabling support in other cases are improved. See Syntax coloring for more details.
The new
Operation.migration_name_fragment property allows providing a
filename fragment that will be used to name a migration containing only that
operation.
Migrations now support serialization of pure and concrete path objects from
pathlib, and
os.PathLike instances.
The new
no_key parameter for
QuerySet.select_for_update(),
supported on PostgreSQL, allows acquiring weaker locks that don’t block the
creation of rows that reference locked rows through a foreign key.
When() expression now allows
using the
condition argument with
lookups.
The new
Index.include and
UniqueConstraint.include
attributes allow creating covering indexes and covering unique constraints on
PostgreSQL 11+.
The new
UniqueConstraint.opclasses attribute allows setting
PostgreSQL operator classes.
The
QuerySet.update() method now respects the
order_by() clause on
MySQL and MariaDB.
FilteredRelation() now supports
nested relations.
The
of argument of
QuerySet.select_for_update() is now allowed
on MySQL 8.0.1+.
The new
QuerySet.alias() method allows creating reusable aliases for
expressions that don’t need to be selected but are used for filtering,
ordering, or as a part of complex expressions.
The new
Collate function allows
filtering and ordering by specified database collations.
The
field_name argument of
QuerySet.in_bulk() now accepts
distinct fields if there’s only one field specified in
QuerySet.distinct().
The new
tzinfo parameter of the
TruncDate and
TruncTime database functions allows
truncating datetimes in a specific timezone.
The new
db_collation argument for
CharField and
TextField allows setting a
database collation for the field.
Added the
Random database function.
Aggregation functions,
F(),
OuterRef(), and other expressions now
allow using transforms. See Expressions can reference transforms for
details.
The new
durable argument for
atomic()
guarantees that changes made in the atomic block will be committed if the
block exits without errors. A nested atomic block marked as durable will
raise a
RuntimeError.
Added the
JSONObject database function.
The new
django.core.paginator.Paginator.get_elided_page_range() method
allows generating a page range with some of the values elided. If there are a
large number of pages, this can be helpful for generating a reasonable number
of page links in a template.
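To illustrate the idea, here is a standalone sketch of eliding (not Django's implementation, though the real method takes similar on_each_side and on_ends arguments):

```python
ELLIPSIS = "…"

def elided_page_range(num_pages, current, on_each_side=3, on_ends=2):
    """Yield page numbers around `current`, eliding long runs."""
    if num_pages <= (on_each_side + on_ends) * 2:
        yield from range(1, num_pages + 1)
        return
    if current > on_each_side + on_ends + 1:
        yield from range(1, on_ends + 1)
        yield ELLIPSIS
        yield from range(current - on_each_side, current + 1)
    else:
        yield from range(1, current + 1)
    if current < num_pages - on_each_side - on_ends:
        yield from range(current + 1, current + on_each_side + 1)
        yield ELLIPSIS
        yield from range(num_pages - on_ends + 1, num_pages + 1)
    else:
        yield from range(current + 1, num_pages + 1)

print(list(elided_page_range(50, 25)))
# [1, 2, '…', 22, 23, 24, 25, 26, 27, 28, '…', 49, 50]
```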
Response headers are now stored in
HttpResponse.headers. This can be
used instead of the original dict-like interface of
HttpResponse objects.
Both interfaces will continue to be supported. See
Setting header fields for details.
The new
headers parameter of
HttpResponse,
SimpleTemplateResponse, and
TemplateResponse allows setting response
headers on instantiation.
The
SECRET_KEY setting is now checked for a valid value upon first
access, rather than when settings are first loaded. This enables running
management commands that do not rely on the
SECRET_KEY without needing to
provide a value. As a consequence of this, calling
configure() without providing a valid
SECRET_KEY, and then going on to access
settings.SECRET_KEY will now
raise an
ImproperlyConfigured exception.
The new
Signer.sign_object() and
Signer.unsign_object() methods allow
signing complex data structures. See Protecting complex data structures for more
details.
Also,
dumps() and
loads() become shortcuts for
TimestampSigner.sign_object() and
unsign_object().
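Conceptually, signing a structure looks like the pure-Python sketch below (not Django's actual implementation, which also handles salts, compression, and timestamping): serialize the object, then append an HMAC over the payload so tampering is detectable:

```python
import base64
import hashlib
import hmac
import json

SECRET = b"not-a-real-secret"  # illustrative only

def sign_object(obj):
    payload = base64.urlsafe_b64encode(json.dumps(obj).encode())
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return payload.decode() + ":" + sig

def unsign_object(token):
    payload, sig = token.rsplit(":", 1)
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("signature mismatch")
    return json.loads(base64.urlsafe_b64decode(payload))

token = sign_object({"user": 42, "scope": ["read"]})
print(unsign_object(token))  # {'user': 42, 'scope': ['read']}
```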
The new JSONL serializer allows using
the JSON Lines format with
dumpdata and
loaddata. This
can be useful for populating large databases because data is loaded line by
line into memory, rather than being loaded all at once.
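The memory-friendliness comes from the format itself (plain Python below, not Django's serializer): each line is an independent JSON document, so a consumer can decode one record at a time instead of parsing one huge array:

```python
import io
import json

jsonl = io.StringIO(
    '{"model": "auth.user", "pk": 1, "fields": {"username": "alice"}}\n'
    '{"model": "auth.user", "pk": 2, "fields": {"username": "bob"}}\n'
)

# Stream records one line at a time; only one decoded object is in memory.
for line in jsonl:
    record = json.loads(line)
    print(record["pk"], record["fields"]["username"])
# 1 alice
# 2 bob
```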
Signal.send_robust() now logs
exceptions.
floatformat template filter now allows using the
g suffix to
force grouping by the
THOUSAND_SEPARATOR for the active locale.
Templates cached with Cached template loaders are now correctly reloaded in development.
Objects assigned to class attributes in
TestCase.setUpTestData() are
now isolated for each test method. Such objects are now required to support
creating deep copies with
copy.deepcopy(). Assigning objects which
don’t support
deepcopy() is deprecated and will be removed in Django 4.1.
DiscoverRunner now enables
faulthandler by default. This can be disabled by using the
test --no-faulthandler option.
DiscoverRunner and the
test management command can now track timings, including database
setup and total run time. This can be enabled by using the
test
--timing option.
Client now preserves the request query string when
following 307 and 308 redirects.
The new
TestCase.captureOnCommitCallbacks() method captures callback
functions passed to
transaction.on_commit() in a list. This allows you to test such
callbacks without using the slower
TransactionTestCase.
TransactionTestCase.assertQuerysetEqual() now supports direct
comparison against another queryset rather than being restricted to
comparison against a list of string representations of objects when using the
default value for the
transform argument.
The new
depth parameter of
django.utils.timesince.timesince() and
django.utils.timesince.timeuntil() functions allows specifying the number
of adjacent time units to return.
Built-in validators now include the provided value in the
params argument
of a raised
ValidationError. This allows
custom error messages to use the
%(value)s placeholder.
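The placeholder relies on ordinary %-style mapping interpolation in Python, so a custom message (illustrative, not one of Django's built-in messages) works like this:

```python
params = {"value": "not-an-email"}
message = '"%(value)s" is not a valid email address.'
print(message % params)  # "not-an-email" is not a valid email address.
```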
The
ValidationError equality operator now ignores
messages and
params ordering.
Backwards incompatible changes in 3.2¶
Database backend API¶
This section describes changes that may be needed in third-party database backends.
The new
DatabaseFeatures.introspected_field_types property replaces these
features:
can_introspect_autofield
can_introspect_big_integer_field
can_introspect_binary_field
can_introspect_decimal_field
can_introspect_duration_field
can_introspect_ip_address_field
can_introspect_positive_integer_field
can_introspect_small_integer_field
can_introspect_time_field
introspected_big_auto_field_type
introspected_small_auto_field_type
introspected_boolean_field_type
To enable support for covering indexes (
Index.include) and covering
unique constraints (
UniqueConstraint.include), set
DatabaseFeatures.supports_covering_indexes to
True.
Third-party database backends must implement support for column database
collations on
CharFields and
TextFields or set
DatabaseFeatures.supports_collation_on_charfield and
DatabaseFeatures.supports_collation_on_textfield to
False. If
non-deterministic collations are not supported, set
supports_non_deterministic_collations to
False.
DatabaseOperations.random_function_sql() is removed in favor of the new
Random database function.
DatabaseOperations.date_trunc_sql() and
DatabaseOperations.time_trunc_sql() now take the optional
tzname
argument in order to truncate in a specific timezone.
DatabaseClient.runshell() now gets arguments and an optional dictionary
with environment variables to the underlying command-line client from
DatabaseClient.settings_to_cmd_args_env() method. Third-party database
backends must implement
DatabaseClient.settings_to_cmd_args_env() or
override
DatabaseClient.runshell().
Third-party database backends must implement support for functional indexes
(
Index.expressions) or set
DatabaseFeatures.supports_expression_indexes to
False. If
COLLATE
is not a part of the
CREATE INDEX statement, set
DatabaseFeatures.collate_as_index_expression to
True.
django.contrib.admin¶
Pagination links in the admin are now 1-indexed instead of 0-indexed, i.e.
the query string for the first page is
?p=1 instead of
?p=0.
The new admin catch-all view will break URL patterns routed after the admin
URLs and matching the admin URL prefix. You can either adjust your URL
ordering or, if necessary, set
AdminSite.final_catch_all_view to
False,
disabling the catch-all view. See What’s new in Django 3.2 for more details.
Minified JavaScript files are no longer included with the admin. If you require these files to be minified, consider using a third party app or external build tool. The minified vendored JavaScript files packaged with the admin (e.g. jquery.min.js) are still included.
ModelAdmin.prepopulated_fields no longer strips English stop words,
such as
'a' or
'an'.
django.contrib.gis¶
Support for PostGIS 2.2 is removed.
The Oracle backend now clones polygons (and geometry collections containing polygons) before reorienting them and saving them to the database. They are no longer mutated in place. You might notice this if you use the polygons after a model is saved.
Upstream support for PostgreSQL 9.5 ends in February 2021. Django 3.2 supports PostgreSQL 9.6 and higher.
The end of upstream support for MySQL 5.6 is April 2021. Django 3.2 supports MySQL 5.7 and higher.
Django now supports non-
pytz time zones, such as Python 3.9+’s
zoneinfo module and its backport.
The undocumented
SpatiaLiteOperations.proj4_version() method is renamed
to
proj_version().
slugify() now removes leading and trailing dashes
and underscores.
The
intcomma and
intword template filters no longer
depend on the
USE_L10N setting.
Support for
argon2-cffi < 19.1.0 is removed.
The cache keys no longer include the language when internationalization is
disabled (
USE_I18N = False) and localization is enabled
(
USE_L10N = True). After upgrading to Django 3.2 in such configurations,
the first request to any previously cached value will be a cache miss.
ForeignKey.validate() now uses
_base_manager rather than
_default_manager to check that related
instances exist.
When an application defines an
AppConfig subclass in
an
apps.py submodule, Django now uses this configuration automatically,
even if it isn’t enabled with
default_app_config. Set
default = False
in the
AppConfig subclass if you need to prevent this
behavior. See What’s new in Django 3.2 for more details.
Instantiating an abstract model now raises
TypeError.
Keyword arguments to
setup_databases() are now
keyword-only.
The undocumented
django.utils.http.limited_parse_qsl() function is
removed. Please use
urllib.parse.parse_qsl() instead.
django.test.utils.TestContextDecorator now uses
addCleanup() so that cleanups registered in the
setUp() method are called before
TestContextDecorator.disable().
SessionMiddleware now raises a
SessionInterrupted exception
instead of
SuspiciousOperation when a session
is destroyed in a concurrent request.
The
django.db.models.Field equality operator now correctly
distinguishes inherited field instances across models. Additionally, the
ordering of such fields is now defined.
The undocumented
django.core.files.locks.lock() function now returns
False if the file cannot be locked, instead of raising
BlockingIOError.
The password reset mechanism now invalidates tokens when the user email is changed.
makemessages command no longer processes invalid locales specified
using
makemessages --locale option, when they contain hyphens
(
'-').
The
django.contrib.auth.forms.ReadOnlyPasswordHashField form field is now
disabled by default. Therefore
UserChangeForm.clean_password() is no longer required to return the
initial value.
The
cache.get_many(),
get_or_set(),
has_key(),
incr(),
decr(),
incr_version(), and
decr_version() cache operations now
correctly handle
None stored in the cache, in the same way as any other
value, instead of behaving as though the key didn’t exist.
Due to a
python-memcached limitation, the previous behavior is kept for
the deprecated
MemcachedCache backend.
The minimum supported version of SQLite is increased from 3.8.3 to 3.9.0.
CookieStorage now stores
messages in the RFC 6265 compliant format. Support for cookies that use
the old format remains until Django 4.1.
The minimum supported version of
asgiref is increased from 3.2.10 to
3.3.2.
Assigning objects which don’t support creating deep copies with
copy.deepcopy() to class attributes in
TestCase.setUpTestData() is deprecated.
Using a boolean value in
BaseCommand.requires_system_checks is
deprecated. Use
'__all__' instead of
True, and
[] (an empty list)
instead of
False.
The
whitelist argument and
domain_whitelist attribute of
EmailValidator are deprecated. Use
allowlist instead of
whitelist, and
domain_allowlist instead of
domain_whitelist. You may need to rename
whitelist in existing
migrations.
The
default_app_config application configuration variable is deprecated,
due to the now automatic
AppConfig discovery. See What’s new in Django 3.2
for more details.
Automatically calling
repr() on a queryset in
TransactionTestCase.assertQuerysetEqual(), when compared to string
values, is deprecated. If you need the previous behavior, explicitly set
transform to
repr.
The
django.core.cache.backends.memcached.MemcachedCache backend is
deprecated as
python-memcached has some problems and seems to be
unmaintained. Use
django.core.cache.backends.memcached.PyMemcacheCache
or
django.core.cache.backends.memcached.PyLibMCCache instead.
The format of messages used by
django.contrib.messages.storage.cookie.CookieStorage is different from
the format generated by older versions of Django. Support for the old format
remains until Django 4.1. | https://django.readthedocs.io/en/latest/releases/3.2.html | CC-MAIN-2022-40 | en | refinedweb |
I am currently studying multiple inheritance. What I have learnt is that we can't inherit multiple classes, but we can inherit a class and an interface, so I just need to switch a class to an interface. Like below:
class a{}
class b{}
interface d{}
//class c:a,b{} That does not work
//class c:a,d{} or class c:b,d{} but these need to work
To return to my question about multiple inheritance, here is a complete program that explains my problem:
using System;
using System.Collections.Generic;
using System.Text;

namespace interface_ile_multiple_inheritance
{
    interface set_get
    {
        double _getx { get; }
        double _gety { get; }
        protected double total; // I want to use it by inheriting
    }

    class set_items
    {
        protected double x;
        protected double y;

        public set_items(double _x, double _y)
        {
            x = _x;
            y = _y;
        }
    }

    class display : set_items, set_get
    {
        public display(double x, double y) : base(x, y) { }

        public double _getx { get { return x; } }
        public double _gety { get { return y; } }

        public double total_res()
        {
            total = x + y; // Error line 1: total does not exist
            return total;  // Error line 2: total does not exist
        }
    }

    class Program
    {
        static void Main(string[] args)
        {
            display disp1 = new display(12, 12);
            Console.WriteLine("Total:" + disp1.total_res().ToString()); // HERE I NEED TO FIND x+y
            Console.WriteLine("Area:" + (disp1._getx * disp1._gety).ToString()); // HERE I NEED TO FIND x*y
        }
    }
}
Even though I can inherit the interface's property signatures, why can't I inherit its variable?!
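A note on why this fails (not part of the original question): C# interfaces can only declare member signatures; they cannot contain instance fields, so `protected double total;` is illegal inside the interface regardless of inheritance. A hedged sketch of one workaround, with illustratively renamed types, is to keep the state in a base class and only the signatures in the interface:

```csharp
using System;

interface ISetGet
{
    double GetX { get; }
    double GetY { get; }
    // double total;  // illegal: interfaces cannot contain instance fields
}

class SetItems
{
    protected double x;
    protected double y;
    protected double total; // state belongs in a class, not an interface

    public SetItems(double _x, double _y) { x = _x; y = _y; }
}

class Display : SetItems, ISetGet
{
    public Display(double x, double y) : base(x, y) { }

    public double GetX { get { return x; } }
    public double GetY { get { return y; } }

    public double TotalRes()
    {
        total = x + y; // compiles: total is inherited from SetItems
        return total;
    }
}

class Program
{
    static void Main()
    {
        var d = new Display(12, 12);
        Console.WriteLine("Total:" + d.TotalRes());     // 24
        Console.WriteLine("Area:" + (d.GetX * d.GetY)); // 144
    }
}
```

With the field moved into the class, `Display` still satisfies both the base class and the interface, which is the closest C# gets to the multiple inheritance sketched above.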
Introduction
In this article we will learn how we can seamlessly generate fake data in Flutter that we can use within our test suites, or just as placeholder data in our app when the backend is not completely set up yet, or any other scenario where we just need fake data in general. You will also learn how to create a Flutter package and use that package within your application; it can be used both in the
lib and
test folders. And lastly you will learn how to set up a basic monorepo for your future projects; however, I am not going to tackle monorepos in this post or what their pros and cons are.
However, if you don't really need the monorepo setup, just skip over to the actual implementation of the Factory Pattern.
Packages
We will only require the following packages for us to seamlessly generate fake data.
- faker: Will handle generating of random data of any data type.
- uuid: Will generate a random uuid for us.
- build_runner: Responsible for auto-generating us some boilerplate code.
- freezed: Is where we create our data models.
Setup
To follow along, we'll start by setting up a monorepo. First, open your terminal and navigate to the directory where you keep all your projects. I prefer to have mine under
/dev/personal-projects so I'll
cd into it.
Next up, create a directory and call it "flutter-fake-data-factory-pattern" via the
mkdir terminal command.
mkdir flutter-fake-data-factory-pattern
Then initialize Git in this directory so we can start tracking changes. You can upload this to a repository of your own or not, whichever you prefer.
cd flutter-fake-data-factory-pattern && git init
Create a new directory called
packages and create a Flutter package inside that directory and call it
app_core
mkdir packages
cd packages
flutter create --template=package app_core
Once that is done you can proceed with installing the dependencies mentioned above.
Have the following dependencies in your
pubspec.yaml file:
# Add this so our local package will not be published to pub.dev
publish_to: "none"

dependencies:
  freezed_annotation: ^0.14.2
  json_annotation: ^4.0.1
  faker: ^2.0.0
  uuid: ^3.0.4

dev_dependencies:
  build_runner: ^2.0.5
  freezed: ^0.14.2
  json_serializable: ^4.1.3
Install the dependencies afterwards.
cd app_core && flutter pub get
Next is to create a Flutter project, which is where our Flutter app lives. Now go up 2 directories via
cd ../..
Then create another directory and call it "apps", it is where we will house our Flutter apps.
mkdir apps
cd apps
flutter create client_app
Then finally, just import the local package in "client_app"
pubspec.yaml
app_core:
  path: ../../packages/app_core
Now that it is all set up and done, we can proceed with the implementation of the Factory Pattern.
Keep in mind that since we have this kind of setup, all of our data models will be coming from the local package that we created
app_core
Creating the ModelFactory abstract
We'll want to create an abstract class for our factory classes so that every implementation stays aligned with the same contract; a generic type argument then lets each implementing class vary the type it produces.
import 'package:faker/faker.dart';
import 'package:uuid/uuid.dart';

abstract class ModelFactory<T> {
  Faker get faker => Faker();

  /// Creates a fake uuid.
  String createFakeUuid() {
    return Uuid().v4();
  }

  /// Generate a single fake model.
  T generateFake();

  /// Generate fake list based on provided length.
  List<T> generateFakeList({required int length});
}
We are only keeping it simple, we just want a single item and a List of items of a particular data model.
- The getter faker is the Faker instance that got instantiated; it is where we grab all random data of any type.
- Next is the createFakeUuid method, which is responsible for generating a fake uuid on the fly. Typically uuids are used when you have a data provider like Firestore or NoSQL databases like MongoDB, or a relational database that has a primary key of type uuid. But you can switch this up however you want.
- The generateFake method is responsible for creating a single data model and requires implementation details; we'll have the implementation details in the factory class that extends this abstract class.
- Lastly, generateFakeList will return a list; implementation details are not a concern here either, as they are handled in the factory class that implements this abstract. It will simply return a list of items produced by the generateFake method.
Defining our data models
We'll keep it simple; we'll want two for now: a
User and a
Tweet data model.
import 'package:freezed_annotation/freezed_annotation.dart';

part 'user.g.dart';
part 'user.freezed.dart';

@freezed
class User with _$User {
  factory User({
    required String id,
    required String name,
    required String email,
    int? age,
    @Default(false) bool suspended,
  }) = _User;

  factory User.fromJson(Map<String, dynamic> json) => _$UserFromJson(json);
}
import 'package:freezed_annotation/freezed_annotation.dart';

part 'tweet.g.dart';
part 'tweet.freezed.dart';

@freezed
class Tweet with _$Tweet {
  factory Tweet({
    required String id,
    required String content,
    @Default([]) List<String> replies,
    @Default(0) int likes,
    @Default(0) int retweets,
  }) = _Tweet;

  factory Tweet.fromJson(Map<String, dynamic> json) => _$TweetFromJson(json);
}
Don't forget to run
build_runner to let it generate the boilerplate code for us. Run it via this terminal command (you have to be in the
app_core directory, otherwise this won't run):
flutter pub run build_runner build --delete-conflicting-outputs
If you want to learn more about data modeling with
freezed package, read more about it from this tutorial that I wrote.
Creating a factory class
Now to implement the abstract class
ModelFactory
import 'package:app_core/app_core.dart';

class UserFactory extends ModelFactory<User> {
  @override
  User generateFake() {
    return User(
      id: createFakeUuid(),
      email: faker.internet.email(),
      name: '${faker.person.firstName()} ${faker.person.lastName()}'.trim(),
      age: faker.randomGenerator.integer(25),
      suspended: faker.randomGenerator.boolean(),
    );
  }

  @override
  List<User> generateFakeList({required int length}) {
    return List.generate(length, (index) => generateFake());
  }
}
import 'package:app_core/app_core.dart';

class TweetFactory extends ModelFactory<Tweet> {
  @override
  Tweet generateFake() {
    return Tweet(
      id: createFakeUuid(),
      content: faker.lorem.words(99).join(' '),
      likes: faker.randomGenerator.integer(5000),
      retweets: faker.randomGenerator.integer(2500),
      replies: List.generate(
        faker.randomGenerator.integer(105),
        (index) => faker.lorem.words(120).join(' '),
      ),
    );
  }

  @override
  List<Tweet> generateFakeList({required int length}) {
    return List.generate(length, (index) => generateFake());
  }
}
Sample usage for generating fake data from the factory class
It's as easy and straightforward as you'd expect to use these factory classes to generate fake data.
/// A new instance of [UserFactory]
final userFactory = UserFactory();

/// Generates a [User] data model
final fakeUser = userFactory.generateFake();

/// Will generate a [List] of 10 [User] data models
final fakeUsers = userFactory.generateFakeList(length: 10);
We can just basically import it anywhere and we don't have to manually set up the values for each property a User data model has.
Conclusion
We learned how to use the
Faker package and applied abstraction to avoid writing the same code again and again when generating fake data for other data models. This comes in quite handy when we write a lot of unit tests with mocking, as we don't have to create a data model instance and define every property for each test case. And lastly, we can also make use of this in the UI for placeholder data when the API is not yet ready to integrate with the frontend app.
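To see the pattern in isolation, here is a self-contained sketch (plain dart:math instead of faker, with an illustrative Point model, so it runs without any packages):

```dart
import 'dart:math';

abstract class ModelFactory<T> {
  final Random random = Random();

  /// Generate a single fake model.
  T generateFake();

  /// Generate a fake list based on the provided length.
  List<T> generateFakeList({required int length}) =>
      List.generate(length, (_) => generateFake());
}

class Point {
  final int x;
  final int y;
  Point(this.x, this.y);
}

class PointFactory extends ModelFactory<Point> {
  @override
  Point generateFake() => Point(random.nextInt(100), random.nextInt(100));
}

void main() {
  final pointFactory = PointFactory();
  final points = pointFactory.generateFakeList(length: 5);
  print(points.length); // 5
}
```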
Full source code can be found from the repository
#include <rte_mbuf.h>
Private data in case of pktmbuf pool.
A structure that contains some pktmbuf_pool-specific data that are appended after the mempool structure (in private data).
Definition at line 262 of file rte_mbuf.h.
mbuf_data_room_size: Size of data space in each mbuf.
Definition at line 263 of file rte_mbuf.h.
mbuf_priv_size: Size of private area in each mbuf.
Definition at line 264 of file rte_mbuf.h.
flags: Reserved for future use.
Definition at line 265 of file rte_mbuf.h. | http://doc.dpdk.org/api/structrte__pktmbuf__pool__private.html | CC-MAIN-2022-40 | en | refinedweb |
Tag:Correct
Solution of incorrect MTRR table in Linux with 4G memory
This will cause NVIDIA's driver to fail to accelerate 2D. The general solution is to rewrite the MTRR table.
echo "disable=2" >| /proc/mtrr
echo "disable=1" >| /proc/mtrr
echo "disable=3" >| /proc/mtrr
echo "disable=4" >| /proc/mtrr
echo "disable=0" >| /proc/mtrr
echo "base=0x00000000 size=0x80000000 type=write-back" >| /proc/mtrr
echo "base=0x80000000 size=0x40000000 type=write-back" >| /proc/mtrr
echo "base=0xC0000000 size=0x10000000 type=write-back" >| /proc/mtrr […]
Technology sharing | how does video correction work in art teaching?
In recent years, the online education industry has entered the fast lane of development. People’s demand for education has become more and more clear and demanding. After a storm, we found that onlyHigh quality products and emphasis on user experienceOnly in the post epidemic era can the institutions be more and more prosperous. In the […]
Net about the processing of the pictures uploaded by the high-speed camera
I Foreground page code: <!DOCTYPE html PUBLIC “-//W3C//DTD XHTML 1.0 Transitional//EN” “”> <html xmlns=””> <head runat=”server”> <title></title> <script type=”text/javascript”> var savePath = “D:\\capture\\”; var isStart = false; function init() { createDir(); } function createDir() { captrue.bCreateDir(savePath); } function startPlay() { isStart = captrue.bStartPlay(); } function stopPlay() { captrue.bStopPlay(); } function getFileName() { var date = […]
Talk about claudib’s list command
order This paper mainly studies the list command of claudib LeftPushCommand claudb-1.7.1/src/main/java/com/github/tonivade/claudb/command/list/LeftPushCommand.java @Command(“lpush”) @ParamLength(2) @ParamType(DataType.LIST) public class LeftPushCommand implements DBCommand { @Override public RedisToken execute(Database db, Request request) { ImmutableList<SafeString> values = request.getParams().asList().tail().reverse(); DatabaseValue result = db.merge(safeKey(request.getParam(0)), list(values), (oldValue, newValue) -> list(newValue.getList().appendAll(oldValue.getList()))); return RedisToken.integer(result.size()); } } Leftpushcommand implements the dbcommand interface. Its execute method extracts […]
Several ways of array de duplication
1. Double layer for loop is used to realize array de duplication let arr = [1,2,3,4,3,2,3,5]; let unique = (arr)=>{ //The first layer is the previous item of the for loop array for(var i=0; i<arr.length; i++){ //The second layer is the last item of the for loop array for(var j=i+1; j<arr.length; j++){ if(arr[i] === arr[j]){ […]
Introduction to ISP image processing process
ISP is at the center of the whole imaging system. picture Article catalogue 1 ISP function 1.1 device control 1.2 format conversion 1.3 image quality optimization 2 ISP algorithm flow ISP function Device control Controls the shutter and gain of the sensor Control lens zoom and focus Control the aperture of the lens Control the […]
Kuguayun classroom (Tencent cloud version) v1 3.8 release open source knowledge payment solution
v1.3.8(2021-07-11) to update Correct GitHub warehouse information in Readme Add the command to clear the on-demand address cache Several cache key names are renamed and the background site name is modified to the user site name Label name comparison ignores case Redesign the front and back login interface Correct the parameter description of the picture […]
How to pass W3C verification?
In addition to formulating various labeling regulations, the W3C also provides verification functions to allow web page producers to check whether they really comply with W3C regulations.prefaceIn addition to formulating various labeling regulations, W3C also provides verification function to let Web page producers check whether they really comply with W3C regulationsHow to achieve W3C xhtml1 […]
Euclidean algorithm
Euclidean algorithm The rolling division method calculates the maximum common divisor of two non negative integers a and B. For example, the greatest common divisor of 24 and 30 is 6 Decomposition minimum prime factorDecomposition 24 = 2 x 2 x 2 x 3Decomposition 30 = 2 x 3 x 5 extractExtract 2 x 3 […]
Kuguayun classroom (Tencent cloud version) v1 3.9 release open source knowledge payment solution
v1.3.9(2021-07-24) Update content When there are no courses under the correction category, all course problems will be queried Fixed some path problems in sitemap Fixed issues related to the course package Optimize the relevant logic of Q & a part Optimize relevant logic of comments Optimize browser title display Optimize the logic related to audit […]
Technology sharing | how to do video correction in art teaching?
In recent years, the demand for online education has become more and more clear. After the development of online education, we have only found that there is a higher and higher demand for online educationHigh quality products and emphasis on user experienceInstitutions can survive and thrive in the post epidemic era. In the track of […]
Ios15 system startup time acceleration
The most interesting features in WWDC21 are deeply hidden in the Xcode 13 release notes: all programs and dylibs deployed on macOS 12 or iOS 15 and later operating systems now use the chained fixups format. This format uses different load commands and LINKEDIT data and cannot be run or loaded on a lower version […]
Bug #13270
IRB hangs when printing "\e]"
Description
Steps to reproduce:
irb
print "\e]"
- Or:
puts "\e["
- try CMD+C, nothing happens
- try CMD+D, prints "30m"
Expected behavior:
- just prints "30m" (that's what pry does)
Ruby versions tried:
- ruby 2.3.3p222 (2016-11-21 revision 56859) [x86_64-darwin16]
- ruby 2.4.0p0 (2016-12-24 revision 57164) [x86_64-darwin16]
History
Updated by shevegen (Robert A. Heiler) over 2 years ago
Is this darwin-specific? It appears to work fine on my linux system here.
ruby 2.4.0p0 (2016-12-24 revision 57164) [i686-linux]
Updated by nobu (Nobuyoshi Nakada) over 2 years ago
- Status changed from Open to Feedback
I can't reproduce it on darwin15.
Does it happen without irb, just ruby -e 'print "\e]"'?
If only with irb, does it happen with irb -f?
Updated by snood1205 (Eli Sadoff) over 2 years ago
- Status changed from Feedback to Open
I can reproduce it on Darwin, so I'm switching it back to open.
My ruby -v is ruby 2.4.0p0 (2016-12-24 revision 57164) [x86_64-darwin14].
Also it occurs with irb -f but not with ruby -e.
Updated by snood1205 (Eli Sadoff) over 2 years ago
Even more information: this is reproducible on ruby -v ruby 2.4.0p0 (2016-12-24 revision 57164) [x86_64-linux], but instead of printing out "30m" after CMD+D it prints out nothing. This seems to be a bug within IRB. Interestingly, this behavior seems to be OS-dependent as well. The following program in C
#include <stdio.h>

int main() {
  puts("\e]");
}
has different outputs based on the OS. On macOS Sierra, it outputs 30m with gcc, cc, and clang, whereas on Fedora 22 it outputs nothing with both gcc and cc. I can't exactly figure out what is wrong, but it is quite odd.
Updated by nobu (Nobuyoshi Nakada) over 2 years ago
- Status changed from Open to Feedback
What terminal emulator are you using, the standard Terminal.app?
Updated by domaio (Dorian M) over 2 years ago
Nobuyoshi Nakada wrote:
What terminal emulator are you using, the standard Terminal.app?
I'm using iTerm 3.0.14.
And on Terminal.app (v2.7.1 (387)) I get
>> puts "\e]"
il
>> puts "\e["
nil
(notice the first "n" missing)
Also, why does pry (0.10.4) on iTerm print this:
> puts "\e["
nil
> puts "\e]"
1;36mnil
>
I'm on MacOS Sierra 10.12.1 (16B2555)
Updated by znz (Kazuhiro NISHIYAMA) over 2 years ago
I can reproduce print "\e]" followed by Ctrl+C doing nothing. But I can't reproduce it using puts "\e[", and I can't reproduce Ctrl+D printing "30m"; Ctrl+D simply causes irb to exit.
I think iTerm eats output from "\e]" (OSC) to "\a" (BEL) (or "\e" or something else).
pry outputs some "\e"s after evaluation, which seems to be why it does not hang.
I typed print "\e]", Enter, Ctrl+C, then puts "\a" (not visible), Enter. Then it outputs => nil and a prompt.
% rbenv exec irb -r irb/completion --simple-prompt
>> print "\e]"
=> nil
>>
Updated by nobu (Nobuyoshi Nakada) over 2 years ago
- Status changed from Feedback to Rejected
It is not ruby specific, and (probably) expected behavior of some terminal emulators.
Also available in: Atom PDF | https://bugs.ruby-lang.org/issues/13270 | CC-MAIN-2019-35 | en | refinedweb |
Groovy Goodness: Find Non-Null Results After Transformation in a Collection
Groovy Goodness: Find Non-Null Results After Transformation in a Collection
Since Groovy 1.8.1 we can use the findResults method and pass a closure to transform elements in a collection and get all non-null elements after transformation. We also have the findResult method to return the first non-null transformed element, but with findResults we get all non-null elements.
def stuff = ['Groovy', 'Griffon', 'Gradle', 'Spock', 'Grails', 'GContracts']
def stuffResult = stuff.findResults { it.size() == 6 ? "$it has 6 characters" : null }
assert stuffResult == ['Groovy has 6 characters', 'Gradle has 6 characters', 'Grails has 6 characters']

def map = [what: 'Finish blog post', priority: 1, when: new Date()]
def mapResult = map.findResults { it.value instanceof String ? "Key $it.key is of type String" : null }
assert mapResult == ['Key what is of type String']
| https://dzone.com/articles/groovy-goodness-find-non-null | CC-MAIN-2019-35 | en | refinedweb |
Contents
So far in this course, we have been using built-in functions that come with Python. In this lesson, we will learn how we can create our own functions. But before we do that, let’s spend some time learning why we even need them in the first place.
Suppose you want to create a program which calculates the sum of all numbers between two given numbers. At this point, you shouldn’t have any problem writing such a program; your code might look like this:
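The code listing was lost when this page was extracted; a sketch consistent with the surrounding text (variable names start, end, and total are partly assumptions) might be:

```python
start = 10
end = 30

total = 0
for i in range(start, end + 1):  # sum every number from start to end
    total += i

print("Sum is", total)
```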
This program calculates the sum of all numbers from 10 to 30. If we want to calculate the sum of numbers from 100 to 200, we would need to update the program as follows:
As you can see, both versions of the program are nearly identical; the only difference is in the values of the start and end variables. So every time we want to calculate the sum between two numbers, we would need to update the source of the program. It would be better if we could somehow reuse the entire code without any modification. We can do that using functions.
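To make the point concrete, a hypothetical function-based rewrite (the name calculate_sum is an assumption, not from the original page) could look like this — the same code now serves both cases:

```python
def calculate_sum(start, end):
    """Return the sum of all numbers from start to end (inclusive)."""
    total = 0
    for i in range(start, end + 1):
        total += i
    return total

print(calculate_sum(10, 30))    # 420
print(calculate_sum(100, 200))  # 15150
```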
What is a Function?
A function is a named group of statements which performs a specific task. The syntax of defining a function is as follows:
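The syntax listing itself was stripped from this page; a minimal concrete example illustrating the pieces described below (header, body, optional return) might be:

```python
def add(num1, num2):       # header: def keyword, function name, parameters, colon
    result = num1 + num2   # body: equally indented statements
    return result          # optional return statement

print(add(2, 3))
```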
A function consists of two parts: a header and a body. The function header starts with the def keyword, followed by the name of the function, followed by arguments in parentheses, and ends with a colon (:). def is a reserved keyword, so you shouldn’t use it as a variable or function name in your programs. function_name can be any valid identifier. After the function name, we have a list of arguments inside parentheses separated by commas (,). We use these arguments to pass the necessary data to the function. A function can take any number of arguments or none at all. If a function doesn’t accept any arguments then the parentheses are left empty.
In the next line, we have a block of statements, the function body. The function body contains statements which define what the function does. As usual, Python uses indentation of statements to determine where a block starts and ends. All the statements in the body of the function must be equally indented, otherwise you will get a syntax error.
Pay special attention to the last statement in the function body, i.e. <return statement>. The return statement is used to return a value from the function. The return statement is not mandatory; some functions return values while others don’t. If a function doesn’t have a return statement in its body then the special value None is returned automatically. None is actually an object of the built-in type NoneType. Don’t worry if you find the return statement confusing; we will discuss it in detail in an upcoming section.
Here is a small function which prints current date and time along with a greeting:
python101/Chapter-13/first_function.py
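The listing for python101/Chapter-13/first_function.py was lost during extraction; a sketch consistent with the description (the use of datetime is an assumption — the text only says "current date and time") might be:

```python
from datetime import datetime

def greet():
    print("Hello there!")
    print("Current date and time:", datetime.now())

greet()
```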
The greet() function doesn’t accept any arguments, which is why the parentheses are left empty. The function body contains two print() statements. These two statements will be executed when we call the greet() function. The greet() function doesn’t return any value.
Function Call
A function definition does nothing by itself. To use a function we must call it. The syntax of calling a function is as follows:
If a function doesn’t accept any arguments then use the following syntax:
The following code calls the greet() function:
python101/Chapter-13/calling_first_function.py
Output:
The function call must appear after the function is defined; otherwise, you will encounter a NameError exception. For example:
python101/Chapter-13/call_before_definition.py
Output:
When a function is called the program control jumps to that function definition and executes the statements inside the function body. After executing the body of the function, the program control jumps back to the part of the program which called the function and resumes execution at that point.
The following example demonstrates what happens when a function is called.
python101/Chapter-13/transfer_of_control.py
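The listing for python101/Chapter-13/transfer_of_control.py was lost during extraction; a sketch matching the walkthrough below might be (note the line numbers the text refers to belong to the original listing, not to this shortened sketch):

```python
def greet():
    print("Inside greet()")

print("Before calling greet()")
greet()                        # control jumps into greet() here
print("After calling greet()")
```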
Output:
In lines 3-5, we have defined a greet() function. The print() statement in line 7 prints the string "Before calling greet()" to the console. In line 8, we call the greet() function. At this point, the execution of statements following the call to greet() halts and program control jumps to the definition of the greet() function. After executing the body of the greet() function, program control jumps back to the point where it left off and resumes execution from there.
Our previous program has only one function. It is not unusual for programs to have hundreds or even thousands of functions. In Python, it is a common convention to define a function called main() which gets called when the program starts. This main() function then goes on to call other functions as needed. The following program demonstrates the flow of program control when we have two functions in a program.
python101/Chapter-13/two_func_program_control.py
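The listing for python101/Chapter-13/two_func_program_control.py was lost during extraction; a sketch consistent with the walkthrough below might be (the exact message strings and the third statement of main() are assumptions; the line numbers in the text refer to the original listing):

```python
def greet(name):
    print("Hello", name)

def main():
    print("main() function called")
    greet("Jon")
    print("main() function finished")

main()
```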
In lines 3-5 and 7-10, we have defined two functions, greet() and main(). The greet() function is now updated to accept an argument called name, which it then uses in the next line to greet the user. The main() function doesn’t accept any arguments and has three statements inside its body.
The statement in line 12 calls the main() function, so program control jumps to the body of the main() function. The first statement inside main() prints the string "main() function called" to the console. The statement in line 9 calls the greet() function with the argument "Jon", which will be assigned to the variable name in the function header. At this point, execution of statements following the call to greet() halts and program control jumps to the body of the greet() function. After executing the body of the greet() function, program control jumps back to where it left off and executes the print() statement in line 10. As there are no more statements left to execute in the main() function, program control jumps back again to where it left off and executes the statements after the function call (line 12).
Local Variables, Global Variables and Scope
Variable Scope: The scope of a variable refers to the part of the program where it can be accessed.
A variable we create inside a function is called a local variable. A local variable can only be accessed inside the body of the function in which it is defined. In other words, the scope of a local variable starts from the point it is defined and continues until the end of the function. Local variables are subject to garbage collection as soon as the function ends. As a result, trying to access a local variable outside of its scope will result in an error.
On the other end of the spectrum, we have global variables. Global variables are variables that are defined outside of any function. The scope of a global variable starts from the point it is defined and continues until the program ends.
Now consider the following examples:
Example 1:
python101/Chapter-13/variable_scope.py
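The listing for python101/Chapter-13/variable_scope.py was lost during extraction; a shortened sketch consistent with the discussion below might be (the line numbers cited in the text refer to the original, longer listing):

```python
global_var = "I am global"

def func():
    local_var = "I am local"
    print(local_var)    # ok: a local variable used inside its own scope
    print(global_var)   # ok: global variables are visible inside functions

func()
print(global_var)       # ok: global scope
# print(local_var)      # NameError: name 'local_var' is not defined
```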
Output:
In line 1, we have created a global variable named global_var. It is then accessed in line 12, inside the func() function, and in line 16, outside the function. We have also declared a local variable named local_var inside the function func(). It is then accessed inside the function in line 9.
Let’s see what happens, if we try to access a local variable outside the function. To do so, uncomment the code in line 18 and run the program again.
Output:
The error NameError: name 'local_var' is not defined tells us that no variable named local_var exists in this scope.
What if we have local and global variables of the same name? Consider the following program.
Example 2:
python101/Chapter-13/same_global_and_local.py
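The listing for python101/Chapter-13/same_global_and_local.py was lost during extraction; a sketch consistent with the explanation below might be (the value 100/200 pair is an assumption):

```python
num = 100        # global variable

def func():
    num = 200    # local variable of the same name shadows the global one
    print(num)   # prints the local value, 200

func()
print(num)       # prints 100: the global variable is untouched
```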
Output:
Here we have a global variable num in line 1 and a local variable of the same name inside the function in line 4. Whenever there is a conflict between a local and a global variable inside the function, the local variable takes precedence. This is the reason why the print() function (line 5) prints the value of the local num variable. However, outside the function, num refers to the global num variable.
We can also use the same variable names in different functions without them conflicting with each other.
python101/Chapter-13/same_variable_names_in_different_functions.py
Output:
Passing Arguments
An argument is nothing but a piece of data passed to a function when it is called. As said before, a function can take any number of arguments or none at all. For example, the print() function accepts one or more arguments but random.random() accepts none.
If we want a function to receive arguments when it is called, we must first define one or more parameters. A parameter, or parameter variable, is simply a variable in the function header which receives an argument when the function is called. Just like local variables, the scope of parameter variables is limited to the body of the function. Here is an example of a function which accepts a single argument:
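The example listing was lost during extraction; based on the description that follows, it was presumably something like:

```python
def add_100(num):
    print(num + 100)  # prints the argument's value plus 100
```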
When the function add_100() is called with an argument, the value of the argument is assigned to the variable num, and the print() statement prints the value of num after adding 100 to it.
The following program demonstrates how to call a function with an argument.
python101/Chapter-13/function_argument.py
Output:
In line 5, the function add_100() is called with the argument 100. The value of the argument is then assigned to the parameter variable num.
Example 2: Function to calculate the factorial of a number.
python101/Chapter-13/factorial.py
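The listing for python101/Chapter-13/factorial.py was lost during extraction; a sketch consistent with the for-loop walkthrough below (n = 4) might be — the original probably read n from input() rather than hard-coding it:

```python
n = 4  # assumption: the original likely read this via input()
fact = 1
for i in range(1, n + 1):
    fact = fact * i

print("Factorial of", n, "is", fact)
```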
Output:
The factorial of a number n is defined as the product of all integers from 1 to n, where n! denotes the factorial of n. Here are some examples:
Now, let’s see how the for loop works when the value of n is 4:
After the 4th iteration the loop terminates and the print() function prints the factorial of the number.
Example 3: Passing multiple arguments to the function
python101/Chapter-13/multiple_arguments.py
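The listing for python101/Chapter-13/multiple_arguments.py was lost during extraction; a sketch consistent with the text (the specific operations calc() performs are assumptions; only the parameters num1 and num2 and the call calc(10, 20) are stated) might be:

```python
def calc(num1, num2):
    print("sum =", num1 + num2)
    print("difference =", num1 - num2)
    print("product =", num1 * num2)

calc(10, 20)
```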
Output:
When the calc() function is called in line 8, the argument 10 is passed to the parameter variable num1 and 20 is passed to the parameter variable num2.
The order of arguments passed while calling the function must match the order of parameters in the function header, otherwise, you may get unexpected results.
Pass by Value
Recall that everything in Python is an object. So a variable for an object is actually a reference to the object. In other words, a variable stores the address where an object is stored in memory. It doesn’t contain the actual object itself.
When a function is called with arguments, it is the address of the object stored in the argument that is passed to the parameter variable. However, just for the sake of simplicity, we say that the value of an argument is passed to the parameter while invoking the function. This mechanism is known as Pass By Value. Consider the following example:
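The example listing was lost during extraction; a sketch consistent with the id() discussion below might be (returning the id here, instead of printing it, is a small liberty taken so the result is easy to check):

```python
def func(para1):
    return id(para1)   # identity of the object the parameter refers to

arg1 = 100
print(id(arg1))
print(func(arg1))      # same value: para1 references the same object
```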
Output:
Notice that the id values are the same. This means that the variables arg1 and para1 reference the same object. In other words, both arg1 and para1 point to the same memory location where the int object (100) is stored.
This behavior has two important consequences:
- If the argument passed to the function is immutable, then the changes made to the parameter variable will not affect the argument.
- However, if the argument passed to the function is mutable, then the changes made to the parameter variable will affect the argument.
Let’s examine this behavior by taking some examples:
Example 1: Passing immutable objects to function.
python101/Chapter-13/passing_immutable_objects.py
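The listing for python101/Chapter-13/passing_immutable_objects.py was lost during extraction; a sketch consistent with the explanation below might be (the line numbers cited in the text refer to the original listing):

```python
def func(para1):
    para1 += 100   # rebinds para1 to a brand new int object
    print("Inside function, para1 =", para1)

arg1 = 100
func(arg1)
print("After function call, arg1 =", arg1)  # still 100
```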
Output:
In line 7, func() is called with the argument arg1 (which points to an immutable int object). The value of arg1 is passed to the parameter para1. Inside the function, the value of para1 is incremented by 100 (line 2). When the function ends, the print statement in line 8 is executed and the string "After function call, arg1 = 100" is printed to the console. This proves the point that no matter what the function does to para1, the value of arg1 remains the same.
If you think about it, this behavior makes perfect sense. Recall that the contents of immutable objects can’t be changed. So whenever we assign a new integer value to a variable we are essentially creating a completely new int object and at the same time assigning the reference of the new object to the variable. This is exactly what’s happening inside the func() function.
Example 2: Passing mutable objects to function
python101/Chapter-13/passing_mutable_objects.py
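The listing for python101/Chapter-13/passing_mutable_objects.py was lost during extraction; a sketch consistent with the explanation below might be (the exact list contents and the append are assumptions — the text only says the function mutates a list):

```python
def func(para1):
    para1.append(100)  # mutates the shared list object in place
    print("Inside function, para1 =", para1)

arg1 = [1, 2]
func(arg1)
print("After function call, arg1 =", arg1)  # [1, 2, 100]
```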
Output:
The code is almost the same, but here we are passing a list to the function instead of an integer. As a list is a mutable object, the changes made by the func() function in line 2 affect the object pointed to by the variable arg1.
Positional and Keyword Arguments
Arguments to a function can be passed in two ways:
- Positional argument.
- Keyword argument.
In the first method, we pass arguments to a function in the same order as their respective parameters in the function header. We have been using this method to pass arguments to our functions. For example:
python101/Chapter-13/pythagorean_triplets.py
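The listing for python101/Chapter-13/pythagorean_triplets.py was lost during extraction; a sketch consistent with the parameter names and messages quoted below might be:

```python
def is_pythagorean_triplet(base, height, perpendicular):
    if base ** 2 + height ** 2 == perpendicular ** 2:
        print("Numbers passed are Pythagorean Triplets")
    else:
        print("Numbers passed are not Pythagorean Triplets")

is_pythagorean_triplet(3, 4, 5)
```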
The statement is_pythagorean_triplet(3, 4, 5) passes 3 to base, 4 to height and 5 to perpendicular, and prints "Numbers passed are Pythagorean Triplets". However, the statement is_pythagorean_triplet(3, 5, 4) passes 3 to base, 5 to height and 4 to perpendicular, and prints "Numbers passed are not Pythagorean Triplets", which is wrong. So when using positional arguments, always make sure that the order of arguments in the function call matches the order of parameters in the function header. Otherwise, you may get unexpected results.
The other way to pass arguments to a function is to use Keyword arguments. In this method we pass each argument in the following form:
where parameter_name is the name of the parameter variable in the function header and val refers to the value you want to pass to that parameter. Because we are associating parameter names with values, the order of arguments in the function call doesn’t matter.
Here are some different ways in which we can call the is_pythagorean_triplet() function using keyword arguments:
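The call examples were lost during extraction; a self-contained sketch (the function is stubbed here to return a bool instead of printing, so the sketch stays short) might be:

```python
def is_pythagorean_triplet(base, height, perpendicular):
    return base ** 2 + height ** 2 == perpendicular ** 2

# with keyword arguments the order does not matter:
print(is_pythagorean_triplet(base=3, height=4, perpendicular=5))
print(is_pythagorean_triplet(perpendicular=5, base=3, height=4))
print(is_pythagorean_triplet(height=4, base=3, perpendicular=5))
```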
Keyword arguments are a little more flexible because we don’t have to remember the order of parameters in the function header.
Mixing Positional and Keyword arguments
We can also mix positional arguments and keyword arguments in a function call. In doing so, the only requirement is that positional arguments must appear before any keyword arguments. It means that the following two calls are perfectly valid because in both calls positional arguments are appearing before keyword arguments.
However, we can’t do this:
The problem here is that the positional argument (5) appears after the keyword argument (height=4). Trying to call is_pythagorean_triplet() in this way results in the following error:
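The valid and invalid call examples were lost during extraction; a self-contained sketch of the rule (the function body is a stub) might be:

```python
def is_pythagorean_triplet(base, height, perpendicular):
    return base ** 2 + height ** 2 == perpendicular ** 2

print(is_pythagorean_triplet(3, 4, perpendicular=5))         # valid
print(is_pythagorean_triplet(3, height=4, perpendicular=5))  # valid
# is_pythagorean_triplet(3, height=4, 5)
# SyntaxError: positional argument follows keyword argument
```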
Returning Values
Up to this point, we have been creating functions which don’t return any values; such functions are also known as void functions.
To return a value from a function we use the return statement. Its syntax is: return [expression]. The square brackets ([]) around the expression indicate that it is optional. If omitted, the special value None is returned.
When a return statement is encountered inside a function, the function terminates and the value of the expression following the return keyword is sent back to the part of the program that called the function. The return statement can appear anywhere in the body of the function. Functions which return values are known as value-returning functions.
Here is an example:
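The example listing was lost during extraction; based on the add()/result discussion that follows, it was presumably something like:

```python
def add(num1, num2):
    return num1 + num2

result = add(10, 20)  # the returned value can be used in any expression
print(result)
```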
A function can be called in two ways, depending upon whether it returns a value or not.
If a function returns a value then a call to such a function can be used as an operand in any expression in the program. For example:
In the above expression, we are first calling the add() function and then assigning its return value to the result variable. Had we not used the return statement in the add() function, we wouldn’t be able to write this code. Here are some other ways in which we can call the add() function.
We are not bound to use the return value from the function. If we don’t want to use the return value, just call the function as a statement. For example:
In this case, the return value of add() is simply discarded.
Let’s rewrite our factorial program to return the factorial instead of printing it.
python101/Chapter-13/return_factorial.py
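The listing for python101/Chapter-13/return_factorial.py was lost during extraction; a sketch consistent with "return the factorial instead of printing it" might be (the function name is an assumption):

```python
def factorial(n):
    fact = 1
    for i in range(1, n + 1):
        fact = fact * i
    return fact

print("Factorial of 4 is", factorial(4))
```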
Output:
In the above example, we are returning an integer value from the function, but we can use any type of data: int, float, str, bool; you name it. The following program demonstrates how to return a bool type from the function:
python101/Chapter-13/is_even_or_odd.py
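The listing for python101/Chapter-13/is_even_or_odd.py was lost during extraction; a sketch consistent with the two-run output stubs below might be (the function and variable names are assumptions; the original probably read num from input()):

```python
def is_even(number):
    return number % 2 == 0   # returns a bool

num = 10  # assumption: originally read via input()
if is_even(num):
    print(num, "is even")
else:
    print(num, "is odd")
```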
1st run Output:
2nd run Output:
If the expression following the return keyword is omitted then the special value None is returned.
python101/Chapter-13/returning_none.py
Output:
We can also use the return statement multiple times inside a function, but as soon as the first return statement is encountered the function terminates, and all the statements following it are not executed. For example:
python101/Chapter-13/grade_calculator.py
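The listing for python101/Chapter-13/grade_calculator.py was lost during extraction; a sketch showing multiple return statements, consistent with the "first return wins" rule just described, might be (the grade boundaries and function name are assumptions):

```python
def calculate_grade(marks):
    if marks >= 90:
        return "A"
    if marks >= 75:
        return "B"
    if marks >= 60:
        return "C"
    return "F"   # reached only if no earlier return ran

print(calculate_grade(95))
print(calculate_grade(40))
```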
First run output:
Second run output:
Void Function returns None
In Python, void functions are slightly different than functions found in C, C++ or Java. If the function body doesn’t have any return statement then the special value None is returned when the function terminates. In Python, None is a literal of type NoneType which is used to denote the absence of a value. It is commonly assigned to a variable to indicate that the variable does not point to any object.
The following program demonstrates that void functions return None.
python101/Chapter-13/void_function.py
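The listing for python101/Chapter-13/void_function.py was lost during extraction; a sketch consistent with the discussion might be:

```python
def add(num1, num2):
    print(num1 + num2)   # prints the sum, but has no return statement

return_value = add(10, 20)
print(return_value)      # None
```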
Output:
Sure enough! The add() function indeed returns None. So we can say that in Python, all functions return a value whether you use a return statement or not. However, this doesn’t mean that you can use a void function just like a value-returning function. Consider the following example:
python101/Chapter-13/using_void_function_as_non_void_function.py
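The listing for python101/Chapter-13/using_void_function_as_non_void_function.py was lost during extraction; a sketch of the failure it demonstrated might be (the original presumably just crashed — the try/except is added here only so the sketch runs to completion):

```python
def add(num1, num2):
    print(num1 + num2)   # void: returns None

try:
    result = add(10, 20) + 100   # None + int is not allowed
except TypeError as err:
    print("TypeError:", err)
```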
Output:
In line 4, we are trying to add the value returned from add(), i.e. None, to the integer 100, but the operation fails because the + operator can’t add NoneType to int.
That’s why a void function is generally invoked as a statement like this:
Returning Multiple Values
To return multiple values from a function, just specify the values, separated by commas (,), after the return keyword.
When calling a function that returns multiple values, the number of variables on the left side of the = operator must be equal to the number of values returned by the return statement. So if a function returns two values then you must use two variables on the left side of the = operator. Here is an example:
python101/Chapter-13/returning_multiple_values.py
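The listing for python101/Chapter-13/returning_multiple_values.py was lost during extraction; a sketch consistent with the number1/number2 explanation below might be (the function name min_max is an assumption):

```python
def min_max(a, b):
    if a < b:
        return a, b   # returns two values as a tuple
    return b, a

number1, number2 = min_max(20, 10)
print(number1, number2)  # smaller first, greater second
```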
Output:
Notice how the values are assigned while calling the function. The statement assigns the smaller number to the variable number1 and the greater number to the variable number2.
Default Arguments
In Python, we can define a function with default parameter values; a default value will be used when the function is invoked without an argument for that parameter. To specify a default value for a parameter, just specify the value after the parameter name using the assignment operator. Consider the following example:
python101/Chapter-13/setting_default_values.py
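The listing for python101/Chapter-13/setting_default_values.py was lost during extraction; a sketch consistent with the four calls described below might be (returning the area too is a small liberty so the result is easy to check; the line numbers in the text refer to the original listing):

```python
def calc_area(length=2, width=3):
    area = length * width
    print("length =", length, "width =", width, "area =", area)
    return area

calc_area()                    # both defaults: 2 and 3
calc_area(4, 6)                # positional arguments
calc_area(length=4, width=6)   # keyword arguments
calc_area(length=4)            # default width is used
```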
Output:
In line 7, we are calling the function calc_area() without any arguments, so the default values 2 and 3 will be assigned to the length and width parameters respectively.
In line 8, we are calling calc_area() passing 4 to length and 6 to width. As values for both parameters are provided while calling the function, the default values will not be used in this case. The same is true for the calc_area() call in line 9, except that here we are using keyword arguments.
In line 10, we are only providing a value for the length parameter using a keyword argument; as a result, the default value for the width parameter will be used.
2 thoughts on “Functions in Python”
“The statement is_pythagorean_triplet(3, 4, 5) passes 3 to base, 4 to height and 5 to base, and prints ”
I believe there is an error here. 5 is being passed to perpendicular and not base
Code Updated. | https://overiq.com/python-101/functions-in-python/ | CC-MAIN-2019-35 | en | refinedweb |
Scout/NewAndNoteworthy/5.0
This page shows what you need to know about the new Eclipse Scout 5.0 release shipped with Eclipse Mars (Release: Wednesday, June 24, 2015).
Contents
- 1 M1
- 2 M2
- 3 M3
- 4 M4
- 5 M5
- 6 M6
- 7 M7
- 8 RC1
- 9 RC2
- 10 RC3
- 11 RC4
M1
- Planned Scout Version: 4.1
- Release Date: August 22, 2014 (Build Date: August 20)
[RAP] Multisession Cookiestore Enabled by default
bug 441555 The multisession cookie store for Scout RAP introduced with Scout 4.0 (Scout/NewAndNoteworthy/4.0#Multisession_Cookiestore) is now enabled by default.
M2
- Planned Scout Version: 4.1
- Release Date: October 03, 2014 (Build Date: October 01)
SDK: new entity from the Package Explorer
bug 439334. You can now create Scout entities (Codes, Form Fields, Columns, Menus…) directly from the Package Explorer View. These items are now available directly in the context menu.
M3
- Planned Scout Version: 4.2
- Release Date: November 14, 2014 (Build Date: November 12)
[JAX-WS] JAX-WS support for Java 1.8
bug 446478 adds support for Java 1.8 by moving Java runtime-dependent code into fragments. Products using org.eclipse.scout.jaxws216 must be extended by one of the following fragments:
Java 1.6, 1.7 -> org.eclipse.scout.jaxws216.jre16.fragment
Java 1.8 -> org.eclipse.scout.jaxws216.jre18.fragment
Add the following two properties to your config.ini if you are using an OSGi runtime that does not already define the execution environment for Java 1.8 (prior to the Luna release):
org.osgi.framework.executionenvironment=JavaSE-1.8
org.osgi.framework.system.capabilities.extra=osgi.ee; osgi.ee="JavaSE"; version:List<Version>="1.8"
Servlet 3.1 support
bug 433736 adds basic support for running Scout with a Servlet 3.1 container. New Servlet 3.1 features are not used yet.
For an existing project add one of the following plugins to your server product file:
servlet 2.5-3.0 -> org.eclipse.scout.rt.server.servlet25
servlet 3.1 -> org.eclipse.scout.rt.server.servlet31
Distinguish between left-click and right-click on table row
bug 443490 It is now possible to distinguish between left and right-clicks on table rows with:
AbstractTable.execRowClick(ITableRow, MouseButton)
SDK: new entity from the Java Editor
You can now create Scout entities (Codes, Form Fields, Columns, Menus…) directly from the Java Editor. You just need to open the Proposal Window (CTRL+Space) and look at the end of the list. If you do not want to scroll down, simply press the “Up-Arrow” key and you will jump there immediately.
M4
- Planned Scout Version: 4.2
- Release Date: December 19, 2014 (Build Date: December 17)
Empty Checkboxes
bug 416043 Default behaviour for checkboxes changed (AbstractBooleanField and AbstractCheckBox): a boolean field is considered empty if unchecked. Previously it was considered empty if it did not contain any value.
ServerJobService
bug 448572 It is now possible to customize server jobs. ServerJob creation in Scout is now always done with IServerJobService.createJobFactory.
The server session class could previously be configured in some services (jaxws, clustersync, offlinedispatcher...) or was retrieved by naming convention. These configurations have been consolidated and now use the configuration in org.eclipse.scout.rt.server.ServerJobService#serverSessionClassName. Old properties are still read for legacy support. Please update your server config.ini to:
org.eclipse.scout.rt.server.ServerJobService#serverSessionClassName=<fully qualified ServerSession name>
e.g.
org.eclipse.scout.rt.server.ServerJobService#serverSessionClassName=org.eclipsescout.helloworld.server.ServerSession
Exceptions in Pages
bug 452266 Exceptions in AbstractPage#execPageActivated and AbstractPage#execPageDeactivated are now also handled by IExceptionHandlerService.
MailUtility Cleanup
bug 454416 MailUtility was moved to the subpackage mail. Methods were cleaned and signatures were changed from array ([]) to java.util.List.
Deprecate execLoadTableData() in table pages
bug 444210
execLoadTableData() in TablePage is now deprecated. The recommended way is to update to TablePageData for your table and to use importPageData(..) in the execLoadData(SearchFilter) method. If you do not want to switch to the TablePageData pattern immediately, a migration path might be to use the new importTableData(Object[][]) method in the execLoadData(SearchFilter) method.
Code like this:
@Override
protected Object[][] execLoadTableData(SearchFilter filter) throws ProcessingException {
  PersonSearchFormData searchFormData;
  //initialize searchFormData depending on the filter
  return SERVICES.getService(IStandardOutlineService.class).getPersonTableData(searchFormData);
}
Needs to be updated to something like this:
@Override
protected void execLoadData(SearchFilter filter) throws ProcessingException {
  PersonSearchFormData searchFormData;
  //initialize searchFormData depending on the filter
  importTableData(SERVICES.getService(IStandardOutlineService.class).getPersonTableData(searchFormData));
}
New system property for external configuration file
bug 455222 Scout supports the specification of an external configuration file in the system properties.
When loading the configuration, Scout retrieves the value of the system property external.configuration.file. If the value is set, Scout will add the specified path to the list of external configurations to be loaded.
This allows specifying the path of the config.ini in the pom.xml so that the configuration can be used with Maven Tycho. (This is a workaround for Tycho bug 356193.)
MultiClientSessionCookieStore with better synchronization and fixed potential memory leak
bug 455184 The MultiClientSessionCookieStore introduced in the 4.0 (Luna) release used strong references to client sessions. This resulted in a memory leak unless the sessionStopped(IClientSession) method was called. The new implementation now uses weak references to the client sessions to avoid this.
Additionally, the internal synchronization of the MultiClientSessionCookieStore was improved to be more granular.
M5
- Planned Scout Version: 5.0
- Release Date: February 06, 2015 (Build Date: February 04)
Change the default for TableData
With bug 455282 we have changed the default TableData from Array based TableData to Bean based TableData.
Without changing anything, if you use AbstractTableField directly in your form, your FormData will be updated from array-based to bean-based. For the migration, you have two options:
- 1. Update your logic that uses the FormData (probably on the server) to work with the new FormData.
- or 2. Configure your table fields in your form to use array-based table data (in this case the generated FormData will be the same as in Luna). The easiest way to do so is to extend from the new class AbstractArrayTableField instead of AbstractTableField.
For the long term, solution 1 is the preferred one. Array based table-data will disapear on the long term.
The type of the generated TableData is only a question of configuration of the
@FormData annotation. You are free to configure your table field templates as you need.
ICodeService implementation for client-only applications
With bug 444213 we have introduced a new ICodeService implementation that can be used in a standalone client scenario.
By default the 2 old implementations of ICodeService stay the same:
org.eclipse.scout.rt.client.services.common.code.CodeServiceClientProxy: relies on a remote service (server side)
org.eclipse.scout.rt.server.services.common.code.CodeService: has dependencies on server stuff (IClusterNotification, IClientNotification...)
The new implementation is:
org.eclipse.scout.rt.shared.services.common.code.SharedCodeService: it corresponds to the old CodeService implementation without the server stuff, and is now located in the shared plugin.
The (Server)CodeService is now extending the SharedCodeService.
In the client scout application, the implementation of ICodeService is CodeServiceClientProxy. This implementation requires a scout server application and is not suitable in a client-only application. In this case, you might want to register SharedCodeService as additional service with a higher priority in your client plug-in:
<service factory="org.eclipse.scout.rt.client.services.ClientServiceFactory" class="org.eclipse.scout.rt.shared.services.common.code.SharedCodeService" session="{your app}.client.ClientSession"> </service>
If you need some information contained in your client (like the current partition id), you can create your own implementation (ClientCodeService) extending SharedCodeService:
public class ClientCodeService extends SharedCodeService {
  @Override
  protected Long provideCurrentPartitionId() {
    Map<String, Object> sharedVariableMap = ClientSession.get().getSharedVariableMap();
    if (sharedVariableMap.containsKey(ICodeType.PROP_PARTITION_ID)) {
      return (Long) sharedVariableMap.get(ICodeType.PROP_PARTITION_ID);
    }
    return super.provideCurrentPartitionId();
  }
}
Of course you also need to register your own CodeService implementation in the plugin.xml file.
IAccessControlService implementation for client-only applications
With bug 456476 we have introduced a new IAccessControlService implementation that can be used in a standalone client scenario.
In a default scout application the 2 old implementations of IAccessControlService stay the same:
org.eclipse.scout.rt.client.services.common.security.AccessControlServiceClientProxy: relies on a remote service (server side)
org.eclipse.scout.rt.server.services.common.security.AbstractAccessControlService: has a dependency on server stuff (IClusterNotification, IClientNotification...). Typical server application extends this abstract class and overrides execLoadPermissions().
The new implementation is:
org.eclipse.scout.rt.shared.services.common.security.AbstractSharedAccessControlService: it corresponds to the old AbstractAccessControlService implementation without the server stuff, and is now located in the shared plugin.
The AbstractAccessControlService is now extending the AbstractSharedAccessControlService.
In the client scout application, the implementation of IAccessControlService is AccessControlServiceClientProxy. This implementation requires a scout server application and is not suitable in a client-only application. For standalone client you can now provide your own ClientAccessControlService extending AbstractSharedAccessControlService:
public class ClientAccessControlService extends AbstractSharedAccessControlService {
  @Override
  protected Permissions execLoadPermissions() {
    Permissions permissions = new Permissions();
    permissions.add(new SomeReadPermission());
    permissions.add(new SomeUpdatePermission());
    //...
    return permissions;
  }
}
Do not forget to register your ClientAccessControlService as additional service with a higher priority than AccessControlServiceClientProxy in your client plug-in:
<service factory="org.eclipse.scout.rt.client.services.ClientServiceFactory" class="{your app}.client.services.common.security.ClientAccessControlService" session="{your app}.client.ClientSession"> </service>
Drop Legacy support for RemoteServiceAccessPermission in AccessControlService
RemoteServiceAccessPermission was introduced in June 2011 to grant remote access to a service interface from gui to server. Until Mars, when this permission was missing, a default one was added to the Permissions object in the default AccessControlService implementation, and the following warning was logged:
Legacy security hint: missing any RemoteServiceAccessPermissions in AccessController. Please verify the class {your AccessControlService} to include such permissions for accessing services using client proxies. Adding default rule to allow services of pattern '*.shared.*'
With the Mars release (also implemented with bug 456476), this legacy support was dropped, meaning that you need to ensure yourself that this permission is present for each logged-in user who requires it.
The default AccessControlService created with new scout project already contains this permission:
import java.security.AllPermission;
import java.security.Permissions;

import org.eclipse.scout.rt.server.services.common.security.AbstractAccessControlService;
import org.eclipse.scout.rt.shared.security.RemoteServiceAccessPermission;

public class AccessControlService extends AbstractAccessControlService {
  @Override
  protected Permissions execLoadPermissions() {
    Permissions permissions = new Permissions();
    permissions.add(new RemoteServiceAccessPermission("*.shared.*", "*"));
    //TODO fill access control service
    permissions.add(new AllPermission());
    return permissions;
  }
}
If you had this warning with Luna, you need to modify your AccessControlService for Mars, otherwise your users won't be able to connect to the server.
New Extension API
See the new Extensibility concept page for details.
M6
- Planned Scout Version: 5.0
- Release Date: March 27, 2015 (Build Date: March 25)
Replacement for ValidateOnAnyKey mechanism
Deprecated:
AbstractBasicField.getConfiguredValidateOnAnyKey()
IBasicField.isValidateOnAnyKey()
IBasicField.setValidateOnAnyKey(boolean)
Use instead:
AbstractBasicField.getConfiguredUpdateDisplayTextOnModify()
IBasicField.isUpdateDisplayTextOnModify()
IBasicField.setUpdateDisplayTextOnModify(boolean)
M7
- Planned Scout Version: 5.0
- Release Date: May 08 , 2015 (Build Date: May 06)
Changes in execOwnerValueChange
To avoid null checks in execOwnerValueChanged, we changed the behavior as follows:
execOwnerValueChanged is only called if the selection matches the owner type, e.g.:
- MenuType SingleSelection: only called if the selection is a single selection
- MenuType MultiSelection: only called if the selection is a multi selection
This behavior is useful in most cases. If a callback is required on all selections, a custom implementation is needed.
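The dispatch rule above can be illustrated with a self-contained sketch (the names are illustrative, not the actual Scout API):

```java
import java.util.List;

public class Main {
    enum MenuType { SINGLE_SELECTION, MULTI_SELECTION }

    // Sketch of the rule: the owner-value callback fires only when the
    // selection matches the menu's type, so the callback never has to
    // null-check or size-check the selection itself.
    static boolean shouldDispatch(MenuType type, List<String> selection) {
        switch (type) {
            case SINGLE_SELECTION: return selection.size() == 1;
            case MULTI_SELECTION:  return selection.size() > 1;
            default:               return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(shouldDispatch(MenuType.SINGLE_SELECTION, List.of("a")));      // true
        System.out.println(shouldDispatch(MenuType.SINGLE_SELECTION, List.of("a", "b"))); // false
        System.out.println(shouldDispatch(MenuType.MULTI_SELECTION, List.of("a", "b")));  // true
    }
}
```

If a callback is needed for every selection, a custom implementation would replace exactly this kind of check.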
RAP Filechooser Components Moved to regular Scout Feature
RAP Filechooser was moved to the rap feature. Therefore, the scout rap incubation components could also be moved to the scout RT feature.
org.eclipse.scout.rt.ui.rap.incubator.filechooser has been removed.
Migration: add new dependencies to rap product:
- org.apache.commons.fileupload
- org.apache.commons.io
- org.eclipse.rap.filedialog
- org.eclipse.rap.fileupload
Remove incubator bundles from target and product files.
RC1
- Planned Scout Version: 5.0
- Release Date: May 22, 2015 (Build Date: May 20)
RC2
- Planned Scout Version: 5.0
- Release Date: May 29, 2015 (Build Date: May 27)
RC3
- Planned Scout Version: 5.0
- Release Date: June 05, 2015 (Build Date: June 03)
RC4
- Planned Scout Version: 5.0
- Release Date: June 12, 2015 (Build Date: June 10) | http://wiki.eclipse.org/Scout/NewAndNoteworthy/5.0 | CC-MAIN-2019-35 | en | refinedweb |
IPv6
Internet Protocol version 6
Synopsis:
#include <sys/socket.h>
#include <netinet/in.h>

int socket( AF_INET6, SOCK_RAW, proto );
Description:
The IP6 protocol is the network-layer protocol used by the Internet Protocol version 6 family (AF_INET6). Options may be set at the IP6 level when using higher-level protocols based on IP6 (such as TCP and UDP ). It may also be accessed through a raw socket when developing new protocols, or special-purpose applications.
There are several IP6-level setsockopt() /getsockopt() options. They are separated into the basic IP6 sockets API (defined in RFC 2553), and the advanced API (defined in RFC 2292). The basic API looks very similar to the API presented in IP . The advanced API uses ancillary data and can handle more complex cases.
Basic IP6 sockets API
You can use the IPV6_UNICAST_HOPS option to set the hoplimit field in the IP6 header on unicast packets. If you specify -1, the socket manager uses the default value. If you specify a value of 0 to 255, the packet uses the specified value as its hoplimit. Other values are considered invalid and result in an error code of EINVAL. For example:
int hlim = 60; /* max = 255 */
setsockopt( s, IPPROTO_IPV6, IPV6_UNICAST_HOPS, &hlim, sizeof(hlim) );
The IP6 multicasting is supported only on AF_INET6 sockets of type SOCK_DGRAM and SOCK_RAW, and only on networks where the interface driver supports multicasting.
The IPV6_MULTICAST_HOPS option changes the hoplimit for outgoing multicast datagrams. By default, multicast datagrams are sent with a hoplimit of 1, so they aren't forwarded beyond the local network. Multicast datagrams with a hoplimit of 0 won't be transmitted on any network, but may be delivered locally if the sending host belongs to the destination group and if multicast loopback hasn't been disabled. Each multicast transmission is sent from a default interface; the IPV6_MULTICAST_IF option overrides the default for subsequent transmissions from a given socket:
unsigned int outif;
outif = if_nametoindex("ne0");
setsockopt( s, IPPROTO_IPV6, IPV6_MULTICAST_IF, &outif, sizeof(outif) );
(The outif argument is the index of the desired outgoing interface.) Disabling multicast loopback with the IPV6_MULTICAST_LOOP option improves performance for applications that have no more than one instance on a single host, by eliminating the overhead of receiving their own transmissions. Don't disable loopback if there might be more than one instance of your application on a single host (e.g. a conferencing program), or if the sender doesn't belong to the destination group (e.g. a time-querying program).
A multicast datagram sent with an initial hoplimit greater than 1 may be delivered to the sending host on a different interface from that on which it was sent, if the host belongs to the destination group on that other interface. The loopback control option has no effect on such a delivery.
A host must become a member of a multicast group before it can receive datagrams sent to the group. To join a multicast group, use the IPV6_JOIN_GROUP option:
struct ipv6_mreq mreq6;
setsockopt( s, IPPROTO_IPV6, IPV6_JOIN_GROUP, &mreq6, sizeof(mreq6) );
Note that the mreq6 argument has the following structure:
struct ipv6_mreq {
    struct in6_addr ipv6mr_multiaddr;
    unsigned int    ipv6mr_interface;
};
Set the ipv6mr_interface member to 0 to choose the default multicast interface, or set it to the index of a particular multicast-capable interface. To leave a group, use the IPV6_LEAVE_GROUP option:

setsockopt( s, IPPROTO_IPV6, IPV6_LEAVE_GROUP, &mreq6, sizeof(mreq6) );
The mreq6 argument contains the same values as used to add the membership. Memberships are dropped when the socket is closed or the process exits.
The IPV6_PORTRANGE option controls the allocation policy of ephemeral ports for when the socket manager automatically binds a local address to the socket. The IPV6_BINDV6ONLY option controls the behavior of the AF_INET6 wildcard listening socket. The following example sets the option to 1:
int on = 1;
setsockopt( s, IPPROTO_IPV6, IPV6_BINDV6ONLY, &on, sizeof(on) );
If you set the IPV6_BINDV6ONLY option to 1, the AF_INET6 wildcard listening socket accepts IP6 traffic only. If set to 0, the socket accepts IPv4 traffic as well, as if it were from an IPv4 mapped address, such as ::ffff:10.1.1.1. Note that if you set the option to 0, IPv4 access control gets much more complicated. For example, even if you have no listening AF_INET socket on port X, you'll end up accepting IPv4 traffic by an AF_INET6 listening socket on the same port. The default value for this flag is copied at socket-instantiation time, from the net.inet6.ip6.bindv6only variable from the sysctl utility. The option affects TCP and UDP sockets only.
Advanced IP6 sockets API
The advanced IP6 sockets API lets applications specify or obtain details about the IP6 header and extension headers on packets. The advanced API uses ancillary data for passing data to or from the socket manager.
There are also setsockopt() / getsockopt() options to get optional information on incoming packets:
- IPV6_PKTINFO
- IPV6_HOPLIMIT
- IPV6_HOPOPTS
- IPV6_DSTOPTS
- IPV6_RTHDR

If any of these options are enabled, the corresponding data is returned as control information by recvmsg(), as one or more ancillary data objects.
If IPV6_PKTINFO is enabled, the destination IP6 address and the arriving interface index are available via struct in6_pktinfo on an ancillary data stream. You can pick the structure by checking for an ancillary data item by setting the cmsg_level argument to IPPROTO_IPV6 and the cmsg_type argument to IPV6_PKTINFO.
If IPV6_HOPLIMIT is enabled, the hoplimit value on the packet is made available to the application. The ancillary data stream contains an integer data item with a cmsg_level of IPPROTO_IPV6 and a cmsg_type of IPV6_HOPLIMIT.
The inet6_option_space() family of functions help you parse ancillary data items for IPV6_HOPOPTS and IPV6_DSTOPTS. Similarly, the inet6_rthdr_space() family of functions help you parse ancillary data items for IPV6_RTHDR.
You can pass ancillary data items with normal payload data, using the sendmsg() function. Ancillary data items are parsed by the socket manager, and are used to construct the IP6 header and extension headers. For the cmsg_level values listed above, the ancillary data format is the same as the inbound case.
Additionally, you can specify a IPV6_NEXTHOP data object. The IPV6_NEXTHOP ancillary data object specifies the next hop for the datagram as a socket address structure. In the cmsghdr structure containing this ancillary data, the cmsg_level argument is IPPROTO_IPV6, the cmsg_type argument is IPV6_NEXTHOP, and the first byte of cmsg_data is the first byte of the socket address structure.
If the socket address structure contains an IP6 address (e.g. the sin6_family argument is AF_INET6 ), then the node identified by that address must be a neighbor of the sending host. If that address equals the destination IP6 address of the datagram, then this is equivalent to the existing SO_DONTROUTE socket option.
For applications that don't, or can't use the sendmsg() or the recvmsg() function, the IPV6_PKTOPTIONS socket option is defined. Setting the socket option specifies any of the optional output fields:
setsockopt( fd, IPPROTO_IPV6, IPV6_PKTOPTIONS, &buf, len );
The buf argument points to a buffer containing one or more ancillary data objects; the len argument is the total length of all these objects. The application fills in this buffer exactly as if the buffer were being passed to the sendmsg() function as control information.
The options set by calling setsockopt() for IPV6_PKTOPTIONS are called sticky options because once set, they apply to all packets sent on that socket. The application can call setsockopt() again to change all the sticky options, or it can call setsockopt() with a length of zero to remove them all. A corresponding getsockopt() call returns the optional information that the application specified that it wants to receive. The buf argument points to the buffer that the call fills in. The len argument is a pointer to a value-result integer; when the function is called, the integer specifies the size of the buffer pointed to by buf, and on return this integer contains the actual number of bytes that were stored in the buffer. The application processes this buffer exactly as if it were returned by recvmsg() as control information.
Advanced API and TCP sockets
When using getsockopt() with the IPV6_PKTOPTIONS option and a TCP socket, only the options from the most recently received segment are retained and returned to the caller, and only after the socket option has been set. The application isn't allowed to specify ancillary data in a call to sendmsg() on a TCP socket, and none of the ancillary data described above is ever returned as control information by recvmsg() on a TCP socket.
Conflict resolution
In some cases, there are multiple APIs defined for manipulating an IP6 header field. A good example is the outgoing interface for multicast datagrams: it can be manipulated by IPV6_MULTICAST_IF in the basic API, by IPV6_PKTINFO in the advanced API, and by the sin6_scope_id field of the socket address structure passed to the sendto() function.
In QNX Neutrino, when conflicting options are given to the socket manager, the socket manager gets the value in the following order:
- options specified by using ancillary data
- options specified by a sticky option of the advanced API
- options specified by using the basic API
- options specified by a socket address.
Raw IP6 Sockets
Raw IP6 sockets are connectionless, and are normally used with sendto() and recvfrom() , although you can also use connect() to fix the destination for future packets (in which case you can use read() or recv() , and write() or send() ).
If proto is 0, the default protocol IPPROTO_RAW is used for outgoing packets, and only incoming packets destined for that protocol are received. If proto is nonzero, that protocol number is used on outgoing packets and to filter incoming packets.
Outgoing packets automatically have an IP6 header prepended to them (based on the destination address and the protocol number the socket is created with). Incoming packets are received without the IP6 header or extension headers.
All data sent via raw sockets must be in network byte order; all data received via raw sockets is in network-byte order. This differs from the IPv4 raw sockets, which didn't specify a byte ordering and typically used the host's byte order.
Another difference from IPv4 raw sockets is that complete packets (i.e. IP6 packets with extension headers) can't be read or written using the IP6 raw sockets API. Instead, ancillary data objects are used to transfer the extension headers, as described above.
All fields in the IP6 header that an application might want to change (i.e. everything other than the version number) can be modified using ancillary data and/or socket options by the application for output. All fields in a received IP6 header (other than the version number and Next Header fields) and all extension headers are also made available to the application as ancillary data on input. Hence, there's no need for a socket option similar to the IPv4 IP_HDRINCL socket option.
When writing to a raw socket, the socket manager automatically fragments the packet if the size exceeds the path MTU, inserting the required fragmentation headers. On input, the socket manager reassembles received fragments, so the reader of a raw socket never sees any fragment headers.
Most IPv4 implementations give special treatment to a raw socket created with a third argument to socket() of IPPROTO_RAW, whose value is normally 255. We note that this value has no special meaning to an IP6 raw socket (and the IANA currently reserves the value of 255 when used as a next-header field).
For ICMP6 raw sockets, the socket manager calculates and inserts the mandatory ICMP6 checksum.
For other raw IP6 sockets (i.e. for raw IP6 sockets created with a third argument other than IPPROTO_ICMPV6), the application must:
- Set the new IPV6_CHECKSUM socket option to have the socket manager compute and store a pseudo header checksum for output.
- Verify the received pseudo header checksum on input, discarding the packet if the checksum is in error.
This option prevents applications from having to perform source-address selection on the packets they send. The checksum incorporates the IP6 pseudo-header, which includes the source address. You can disable the option by setting its value to -1. Disabled means:
- The socket manager won't calculate and store a checksum for outgoing packets.
- The socket manager kernel won't verify a checksum for received packets.
Based on:
RFC 2553, RFC 2292, RFC 2460 | http://developer.blackberry.com/playbook/native/reference/com.qnx.doc.neutrino.lib_ref/topic/i/ip6_proto.html | CC-MAIN-2019-35 | en | refinedweb |
C# | Object Class
The Object class is the base class for all the classes in the .NET Framework. It is present in the System namespace. In C#, the .NET Base Class Library (BCL) has a language-specific alias for it, the object keyword, with the fully qualified name System.Object. Every class in C# is directly or indirectly derived from the Object class. If a class does not extend any other class then it is a direct child class of Object, and if it extends another class then it is indirectly derived. Therefore the Object class methods are available to all C# classes, and the Object class acts as the root of the inheritance hierarchy in any C# program. The main purpose of the Object class is to provide low-level services to derived classes.
There are two kinds of types in C#: reference types and value types. Value types implicitly inherit the Object class through the System.ValueType class, which in turn derives from System.Object. System.ValueType overrides the virtual methods from the Object class with more appropriate implementations for value types. In other programming languages, the built-in types like int, double, float etc. do not have any object-oriented properties; to simulate object-oriented behavior for built-in types, they must be explicitly wrapped into objects. In C# no such wrapping is needed, because value types already derive from System.ValueType and ultimately from System.Object. So in C#, value types also work similarly to reference types. Reference types directly or indirectly inherit the Object class by using other reference types.
Explanation of the above Figure: Here, you can see the Object class at the top of the type hierarchy. Class 1 and Class 2 are the reference types. Class 1 is directly inheriting the Object class while Class 2 is Indirectly inheriting by using Class 1. Struct1 is value type that implicitly inheriting the Object class through the System.ValueType type.
Example:
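The example code itself was lost in extraction; a sketch that produces output of this shape might look like this (the variable names are assumptions):

```csharp
using System;

class Program {
    static void Main() {
        Object obj1 = new Object();
        Console.WriteLine("For Object obj1 = new Object():");
        Console.WriteLine(obj1.GetType().Name);      // Object
        Console.WriteLine(obj1.GetType().FullName);  // System.Object
        Console.WriteLine(obj1.GetType().Namespace); // System

        int i = 10;
        Console.WriteLine("For int i:");
        Console.WriteLine(i.GetType().BaseType.Name); // ValueType
        Console.WriteLine(i.GetType().Name);          // Int32
        Console.WriteLine(i.GetType().FullName);      // System.Int32
        Console.WriteLine(i.GetType().Namespace);     // System
    }
}
```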
Output:

For Object obj1 = new Object():
Object
System.Object
System

For int i:
ValueType
Int32
System.Int32
System
Constructor

Object(): Initializes a new instance of the Object class.

Methods

There are a total of 8 methods present in the C# Object class:

- Equals(Object): Determines whether the specified object is equal to the current object.
- Equals(Object, Object): Determines whether the specified object instances are considered equal.
- Finalize(): Allows an object to try to free resources before it is reclaimed by garbage collection.
- GetHashCode(): Serves as the default hash function.
- GetType(): Gets the Type of the current instance.
- MemberwiseClone(): Creates a shallow copy of the current object.
- ReferenceEquals(Object, Object): Determines whether the specified object instances are the same instance.
- ToString(): Returns a string that represents the current object.
Important Points:
- C# classes don’t require to declare the inheritance from Object class as the inheritance is implicit.
- Every method defined in the Object class is available in all objects in the system as all classes in the .NET Framework are derived from Object class.
- Derived classes can and do override Equals, Finalize, GetHashCode and ToString methods of Object class.
- The process of boxing and unboxing a type internally causes a performance cost. Using the type-specific class to handle the frequently used types can improve the performance cost.
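The boxing/unboxing cost mentioned above can be illustrated with a minimal sketch:

```csharp
using System;

class BoxingDemo {
    static void Main() {
        int n = 42;
        object boxed = n;          // boxing: the value is copied into a heap object
        int unboxed = (int)boxed;  // unboxing: an explicit cast copies it back out
        Console.WriteLine(boxed.GetType().Name); // Int32 (the boxed value keeps its runtime type)
        Console.WriteLine(unboxed == n);         // True
    }
}
```

Each box is a heap allocation, which is why hot paths favor type-specific (generic) APIs over object-based ones.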
How to: Create Data Using .NET Business Connector
Applies To: Microsoft Dynamics AX 2012 R3, Microsoft Dynamics AX 2012 R2, Microsoft Dynamics AX 2012 Feature Pack, Microsoft Dynamics AX 2012
Using .NET Business Connector, you can access Microsoft Dynamics AX data or business logic from a .NET-connected application. The following example provides the code to create data in a Microsoft Dynamics AX table. For a complete working example, see Walkthrough: Integrate an Application with Microsoft Dynamics AX Using .NET Business Connector.
Procedures
Adding a Reference to .NET Business Connector
This example assumes that you have a project in which you want to access Microsoft Dynamics AX data. You must create a connection using .NET Business Connector. In this section, you will add a reference to the .NET Business Connector assembly.
To add a reference to .NET Business Connector
In Solution Explorer, right-click References, and then click Add Reference.
In the Add Reference window, click the Browse tab.
Specify the location of Microsoft.Dynamics.BusinessConnectorNet.dll, and then click Add.
Note
For a typical install, the assembly is located at C:\Program Files (x86)\Microsoft Dynamics AX\60\Client\Bin\.
Click OK.
To set the project target framework
In Solution Explorer, right-click your application project, and then click Properties.
In the Application tab, set the Target framework to .NET Framework 4.
To update the application configuration settings
In Solution Explorer, double-click the app.config file and update the following configuration settings.
<startup useLegacyV2RuntimeActivationPolicy="true">
  <supportedRuntime version="v4.0" />
</startup>
On the File menu, click Save app.config.
Note
When you build your project, the development environment automatically creates a copy of your app.config file, changes its file name so that it has the same file name as your executable, and then moves the new .config file in the bin directory.
If you do not update the app.config file, you will get the following error:
“Mixed mode assembly is built against version 'v2.0.50727' of the runtime and cannot be loaded in the 4.0 runtime without additional configuration information.”
Creating a Record
In this section, you will add code to create a record in a Microsoft Dynamics AX table. You will log on to Microsoft Dynamics AX using the Axapta class. This example adds a record to the CustStatisticsGroup table called MyState.
To create a record
Add a using statement to the referenced .NET Business Connector assembly.
using Microsoft.Dynamics.BusinessConnectorNet;
Add the following code to add a record.
// Create the .NET Business Connector objects.
Axapta ax;
AxaptaRecord axRecord;
string tableName = "CustStatisticsGroup";
try
{
    // Login to Microsoft Dynamics AX.
    ax = new Axapta();
    ax.Logon(null, null, null, null);

    // Create a new CustStatisticsGroup table record.
    using (axRecord = ax.CreateAxaptaRecord(tableName))
    {
        // Provide values for each of the CustStatisticsGroup record fields.
        axRecord.set_Field("CustStatisticsGroup", "04");
        axRecord.set_Field("StatGroupName", "No Priority Customer");

        // Commit the record to the database.
        axRecord.Insert();
    }
}
catch (Exception e)
{
    Console.WriteLine("Error encountered: {0}", e.Message);
    // Take other error action as needed.
}
To see the record added in the table, go to AOT > Data Dictionary > Tables, right-click CustStatisticsGroup, and then click Open. Click the green execute button to execute the SQL statement. The new No Priority Customer group is added to the table.
Next Steps
You can also read, update, and delete Microsoft Dynamics AX data. For more information, see How to: Read Data Using .NET Business Connector, How to: Update Data Using .NET Business Connector, and How to: Delete Data Using .NET Business Connector. You may want to call X++ business logic in addition to accessing data. For more information, see How to: Call Business Logic Using .NET Business Connector.
See also
Walkthrough: Integrate an Application with Microsoft Dynamics AX Using .NET Business Connector
How to: Call Business Logic Using .NET Business Connector
Login button that becomes a circular progressbar on click.
For help getting started with Flutter, view our online documentation.
For help on editing plugin code, view the documentation.
example/lib/main.dart
import 'package:flutter/material.dart';
import 'package:login_button/login_button.dart';

void main() => runApp(new MyApp());

class MyApp extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return new MaterialApp(
      home: new Scaffold(
        appBar: new AppBar(
          title: new Text('Login Button example app'),
        ),
        body: new Center(child: new LoginButton()),
      ),
    );
  }
}
Add this to your package's pubspec.yaml file:
dependencies:
  login_button:

Now in your Dart code, you can use:

import 'package:login_button/login_button.dart';
Can our garden water itself?
In this project we continue our search to keep our garden well watered, but this time we start fresh with a new project...A self watering garden!
Re-using some of the kit from Project 1, in this project we introduce relays, 12V circuits and peristaltic pumps that will water our garden based on the soil moisture sensor from Project 1. All we need to do is keep a water butt full of water, either through rain or grey water collection!
For this project you will need
- Pi Zero W
- Rasp.IO Analog Zero board
- Vellemen Soil Moisture Sensor
- A waterproof box
- USB battery
- Jumper jerky (Dupont connections)
- Relay Board
- 12V Peristaltic Pump
- Plastic Hose to match diameter of pump
- 12V power supply (for outdoor use)
- Barrel Jack to terminal (for the 12V power supply)
- Waterproof boxes to contain the 12V power supply and pump
- Water Butt / Storage
- Wago 221 Connector (For 12V GND)Wago 221 Connector (For 12V GND) main sensor we are using is a simple soil moisture sensor from Velleman. The moisture sensor is a simple analog sensor which connects to the 3V and GND pins on the Analog Zero and the output of the sensor is connected to A0. The output from the sensor is in the form of a voltage from 0V to 3.3V (as we are using the 3.3V power from the Pi Zero GPIO) if there is no conductivity, i.e the soil is dry then no voltage is conducted, if the soil is wet then the soil will most likely conduct all of the voltage.
The other part of the project is a relay, used to control the 12V circuit for our peristaltic pump which will pump water from a water butt to our plants using a rotating motion to “squeeze” the water through the connected plastic hose. The relay is controlled from the GPIO of our Pi. In this case we connect the relay to 3V, GND and the Input of the relay to GPIO17.
The Analog Zero will take a little time to solder, and we shall also need to solder the pins for I2C and solder the 3V and GND pins for later. Once soldered, attach the Analog Zero to all 40 pins of the GPIO and then connect the sensor and relay board as per the diagram. You will also need to provide the 12V power supply to the peristaltic pump. The + connection from the 12V supply goes to the relay, via the normally open connection, the icon looks like an open switch.
Build the project so that the wiring is as follows: connect the soil moisture sensor to the 3V, GND and A0 pins of the Analog Zero, and connect the relay board's VCC, GND and input pins to 3V, GND and GPIO17 respectively. The Analog Zero's ADC communicates over SPI, so enable the SPI interface using the Raspberry Pi Configuration tool. While not strictly necessary, now would be a great time to reboot to ensure that the changes have been made correctly. Then return to the Raspbian desktop. With the hardware installed and configured, we can now move on to writing the code for this project.
Writing the code
To write the code for this project we have used the latest Python editor, Thonny. Of course you are free to use whatever editor you see fit. You will find Thonny in the Main Menu, under the Programming sub-menu.
We start the code for this project by importing two libraries. The first is the GPIO Zero library, used for simple connections to electronic components. In this case we import the MCP3008 class for our Analog Zero board and then we import DigitalOutputDevice, a generic class to create our own output device.
from gpiozero import MCP3008, DigitalOutputDevice
import time
Now lets create two objects, the first, soil is used to connect our code to the Velleman soil moisture sensor, connected to A0 on the Analog Zero board, which is channel 0 on the MCP3008 ADC. Our second object is a connection to the relay, which is triggered by an output device, on GPIO17.
soil = MCP3008(channel=0)
relay = DigitalOutputDevice(17)
Moving on to the main part of the code we create a loop that will constantly run the code within it. Inside the loop the first line of code creates a variable, soil_check. This variable will store the value passed to it by the MCP3008, which is handled via the soil object. As this value is extremely precise we use the round function to round the returned value to two decimal places.
while True:
    soil_check = round(soil.value, 2)
Next we print the value stored in the variable to advise the user on the soil moisture level, handy for debugging the code! Then the code waits for one second.
    print('The wetness of the soil is', soil_check)
    time.sleep(1)
To check the soil moisture level we use an if conditional test. This will test the value stored in the soil_check variable against a hard coded value. In this case 0.1 was found to be very dry soil, but of course you are free to tinker and find the value right for your soil. If the soil is too dry then the condition is passed and the code is executed.
if soil_check <= 0.1:
So what is the code that will be run if the condition is met? Well remember the relay object that we created earlier? We are going to use that object to turn on the relay, effectively closing the open switch and enabling the 12V circuit to be completed. This will trigger the peristaltic pump to life and pump water into the plants. Now for testing we set the time to two seconds, but in reality this will be much longer, depending on the length of hose that the water needs to pass through. So when enough water has been passed we need to turn off the relay, cutting the 12V circuit. The code then waits for 10 seconds before the loop repeats. Again these times are in seconds for test purposes, but in reality they would be in minutes.
        relay.on()
        time.sleep(2)
        relay.off()
        time.sleep(10)

Run the code and it will start to water the plants, obviously be careful with this!
Once checked, place something conductive between the two prongs and you will see that the output is just printed to the Python shell and no watering takes place. Save the code as self_watering.py.
Now in a terminal, launch the project by typing
./self_watering.py
Now the project will run in the terminal, checking our soil moisture levels and watering as necessary! When the soil is dry, the pump should start pumping water into the plants.
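If you want to try the watering logic without the pump and sensor attached, the loop body can be exercised with stand-in objects. This is a sketch for testing only: FakeSensor, FakeRelay and the canned readings are invented here and are not part of the gpiozero library.

```python
# Simulated version of the watering decision: a fake sensor stands in for the
# MCP3008 channel, and the "relay" just records its on/off events.
class FakeSensor:
    def __init__(self, readings):
        self.readings = list(readings)

    @property
    def value(self):
        # Return the next canned reading, like reading soil.value
        return self.readings.pop(0)

class FakeRelay:
    def __init__(self):
        self.events = []

    def on(self):
        self.events.append('on')

    def off(self):
        self.events.append('off')

def check_and_water(soil, relay, dry_threshold=0.1):
    # Same decision as the real loop body: water only when the soil is dry
    soil_check = round(soil.value, 2)
    if soil_check <= dry_threshold:
        relay.on()
        relay.off()
    return soil_check

soil = FakeSensor([0.05, 0.42])
relay = FakeRelay()
check_and_water(soil, relay)   # dry reading -> pump runs
check_and_water(soil, relay)   # wet reading -> nothing happens
print(relay.events)            # ['on', 'off']
```

Swapping FakeSensor for the real MCP3008 object and FakeRelay for the DigitalOutputDevice (and restoring the sleeps) gives back the hardware version above.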
Power down the Pi Zero W, place it in a waterproof container along with a USB battery power source, and ensure the soil sensor is out of the box. Place the project in your garden, and make sure the soil moisture sensor is firmly in the ground. Power up the Pi Zero W, and your garden can now water itself!
#include <AliFMDCorrector.h>
Internal data structure to keep track of the histograms. Never streamed.
Definition at line 179 of file AliFMDCorrector.h.
Default CTOR
Definition at line 405 of file AliFMDCorrector.cxx.
Constructor
Definition at line 415 of file AliFMDCorrector.cxx.
Copy constructor
Definition at line 437 of file AliFMDCorrector.cxx.
Destructor
Reimplemented from AliForwardUtil::RingHistos.
Definition at line 470 of file AliFMDCorrector.cxx.
Make output
Definition at line 480 of file AliFMDCorrector.cxx.
Referenced by AliFMDCorrector::CreateOutputObjects().
Assignment operator
Definition at line 450 of file AliFMDCorrector.cxx.
Scale the histograms to the total number of events
Definition at line 493 of file AliFMDCorrector.cxx.
Referenced by AliFMDCorrector::Terminate().
Definition at line 223 of file AliFMDCorrector.h.
Referenced by AliFMDCorrector::Correct(), operator=(), RingHistos(), and AliFMDCorrector::Terminate(). | http://alidoc.cern.ch/AliPhysics/v5-09-06-01-rc1/struct_ali_f_m_d_corrector_1_1_ring_histos.html | CC-MAIN-2019-35 | en | refinedweb |
Count min sketch is a probabilistic histogram that was invented in 2003 by Graham Cormode and S. Muthukrishnan.
It’s a histogram in that it can store objects (keys) and associated counts.
It’s probabilistic in that it lets you trade space and computation time for accuracy.
The count min sketch is pretty similar to a bloom filter, except that instead of storing a single bit to say whether an object is in the set, the count min sketch allows you to keep a count per object. You can read more about bloom filters here: Estimating Set Membership With a Bloom Filter.
It’s called a “sketch” because it’s a smaller summarization of a larger data set.
Inserting an Item
The count min sketch is just a 2 dimensional array, with size of W x D. The actual data type in the array depends on how much storage space you want. It could be an unsigned char, it could be 4 bits, or it could be a uint64 (or larger!).
Each row (value of D) uses a different hash function to map objects to an index of W.
To insert an object, for each row you hash the object using that row’s hash function to get the W index for that object, then increment the count at that position.
In this blog post, I’m only going to be talking about being able to add items to the count min sketch. There are different rules / probabilities / etc for count min sketches that can have objects removed, but you can check out the links at the bottom of this post for more information about that!
Getting an Item Count
When you want to get the count of how many times an item has been added to the count min sketch, you do a similar operation as when you insert.
For each row, you hash the object being asked about with that row’s hash function to get the W index and then get the value for that row at the W index.
This will give you D values and you just return the smallest one.
The reason for this is that, due to hash collisions between the various hash functions, your count in a specific slot may have been incremented extra times by other objects hashing to the same index. If you take the minimum value you've seen across all rows, you are guaranteed to be taking the value that has the fewest hash collisions, so it is guaranteed to be most correct, and in fact guaranteed to be greater than or equal to the actual answer, but never lower than the actual answer.
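These insert and point-query rules fit in a few lines; here is a minimal sketch in Python (rather than the C++ used in the full listing later), where deriving each row's hash by salting the item with the row number is just one possible choice:

```python
import hashlib

class CountMinSketch:
    def __init__(self, width, depth):
        self.width = width
        self.depth = depth
        # D rows of W counters, all starting at zero
        self.table = [[0] * width for _ in range(depth)]

    def _index(self, item, row):
        # Derive a different hash per row by salting the item with the row number
        h = hashlib.md5(f"{row}:{item}".encode()).hexdigest()
        return int(h, 16) % self.width

    def add(self, item, count=1):
        # Increment the bucket this item maps to in every row
        for row in range(self.depth):
            self.table[row][self._index(item, row)] += count

    def count(self, item):
        # The smallest bucket across rows has the fewest collisions,
        # so it is the best estimate (and is never below the true count)
        return min(self.table[row][self._index(item, row)]
                   for row in range(self.depth))

cms = CountMinSketch(width=100, depth=4)
for word in ["apple", "apple", "pie"]:
    cms.add(word)
print(cms.count("apple"))   # >= 2, and usually exactly 2
```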
Dot Product (Inner Product)
If you read the last post on using dot product with histograms to gauge similarity, you might be wondering if you can do a dot product between two count min sketch objects.
Luckily yes, you can! They need to have the same W x D dimensions and they need to use the same hash functions per row, but if that’s true, you can calculate a dot product value very easily.
If you have two count min sketch objects A and B that you want to calculate the dot product for, you dot product each row (D index) of the two count min sketch objects. This will leave you with D dot products and you just return the smallest one. This guarantees that the dot product value you calculate will have the fewest hash collisions (so will be most accurate), and will also guarantee that the estimate is greater that or equal to the actual answer, but will never be lower.
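As a sketch of that rule, here are two tiny hand-filled tables (assumed to share the same dimensions and hash functions, as required above); the estimate is the per-row dot product, minimized across rows:

```python
def sketch_dot(table_a, table_b):
    # Dot product of each row pair, then take the smallest result:
    # the row with the fewest collisions overestimates the least
    return min(sum(x * y for x, y in zip(row_a, row_b))
               for row_a, row_b in zip(table_a, table_b))

# Two hand-filled 2x3 count min sketch tables
a = [[2, 0, 1], [1, 2, 0]]
b = [[1, 0, 3], [0, 3, 1]]
print(sketch_dot(a, b))  # min(2*1 + 0*0 + 1*3, 1*0 + 2*3 + 0*1) = min(5, 6) = 5
```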
To Normalize Or Not To Normalize
There is a caveat here though with doing a dot product between two count min sketch objects. If you do a normalized dot product (normalize the vectors before doing a dot product, or dividing the answer by the length of the two vectors multiplied together), the guarantee that the dot product is greater than or equal to the true answer no longer holds!
The reason for this is that the formula for doing a normalized dot product is like this:
normalized dot product = dot(A,B) / (length(A)*length(B))
In a count min sketch, the dot(A,B) estimate is guaranteed to greater than or equal to the true value.
The length of a vector is also guaranteed to be greater than or equal to the length of the true vector (the vector made from the actual histogram values).
This means that the numerator and the denominator BOTH have varying levels of overestimation in them. Overestimation in the numerator makes the normalized dot product estimate larger, while overestimation in the denominator makes the normalized dot product estimate smaller.
The result is that a normalized dot product estimate can make no guarantee about being greater than or equal to the true value!
This may or may not be a problem for your situation. Doing a dot product with unnormalized vectors still gives you a value that you can use to compare “similarity values” between histograms, but it has slightly different meaning than a dot product with normalized vectors.
Specifically, if the counts are much larger in one histogram versus another (such as when doing a dot product between multiple large text documents and a small search term string), the “weight” of the larger counts will count for more.
That means if you search for “apple pie”, a 100 page novel that mentions apples 10 times will be a better match than a 1/2 page recipe for apple pie!
When you normalize histograms, it makes it so the counts are “by percentage of the total document length”, which would help our search correctly find that the apple pie recipe is more relevant.
In other situations, you might want to let the higher count weigh stronger even though the occurrences are “less dense” in the document.
It really just depends on what your usage case is.
Calculating W & D
There are two parameters (values) used when calculating the correct W and D dimensions of a count min sketch, for the desired accuracy levels. The parameters are ε (epsilon) and δ (delta).
ε (Epsilon) is “how much error is added to our counts with each item we add to the cm sketch”.
δ (Delta) is “with what probability do we want to allow the count estimate to be outside of our epsilon error rate”
To calculate W and D, you use these formulas:
W = ⌈e/ε⌉
D = ⌈ln (1/δ)⌉
Where ln is “natural log” and e is “euler’s constant”.
Accuracy Guarantees
When querying to get a count for a specific object (also called a “point query”) the accuracy guarantees are:
- True Count <= Estimated Count
- Estimated Count <= True Count + ε * Number Of Items Added
- There is a δ chance that #2 is not true
When doing an unnormalized dot product, the accuracy guarantees are:
- True Dot Product <= Estimated Dot Product
- Estimated Dot Product <= True Dot Product + ε * Number Of Items Added To A * Number Of Items Added To B
- There is a δ chance that #2 is not true
Conservative Update
There is an alternate way to implement adding an item to the cm sketch, which results in provably less error. That technique is called a “Conservative Update”.
When doing a conservative update, you first look at the values in each row that you would normally increment and keep track of the smallest value that you’ve seen. You then only increment the counters that have that smallest value.
The reason this works is because we only look at the smallest value across all rows when doing a look up. So long as the smallest value across all rows increases when you insert an object, you’ve satisfied the requirements to make a look up return a value that is greater than or equal to the true value. The reason this conservative update results in less error is because you are writing to fewer values, which means that there are fewer hash collisions happening.
While this increases accuracy, it comes at the cost of extra logic and processing time needed when doing an update, which may or may not be appropriate for your needs.
Example Runs
The example program is a lot like the program from the last post which implemented some search engine type functionality.
This program also shows you some count estimations to show you that functionality as well.
The first run of the program is with normalized vectors, the second run of the program is with unnormalized vectors, and the third run of the program, which is most accurate, is with unnormalized vectors and conservative updates.
First Run: Normalized Vectors, Regular Updates
Second Run: Unnormalized Vectors, Regular Updates
Third Run: Unnormalized Vectors, Conservative Updates
Example Code
#include <algorithm>
#include <array>
#include <cmath>
#include <cstdio>
#include <memory>
#include <set>
#include <string>
#include <unordered_map>
#include <vector>
const float c_eulerConstant = (float)std::exp(1.0);
// The CCountMinSketch class
template <typename TKEY, typename HASHER, unsigned int NUMBUCKETS, unsigned int NUMHASHES, typename TCOUNTTYPE>
class CCountMinSketch
{
public:
typedef CCountMinSketch<TKEY, HASHER, NUMBUCKETS, NUMHASHES, TCOUNTTYPE> TType;
CCountMinSketch ()
: m_countGrid { } // init counts to zero
, m_vectorLengthsDirty(true)
{ }
static const unsigned int c_numBuckets = NUMBUCKETS;
static const unsigned int c_numHashes = NUMHASHES;
typedef TCOUNTTYPE TCountType;
void AddItem (bool conservativeUpdate, const TKEY& item, const TCOUNTTYPE& count)
{
// this count min sketch is only supporting positive counts
if (count < 0)
{
    printf("Could not add item, count needs to be >= 0!\n");
    return;
}
// remember that our vector lengths are inaccurate
m_vectorLengthsDirty = true;
// if doing a conservative update, only update the buckets that are necesary
if (conservativeUpdate)
{
// find what the lowest valued bucket is and calculate what our new lowest
// value should be
TCOUNTTYPE lowestValue = GetCount(item) + count;
// make sure every bucket has at least the lowest value it should have
size_t rawHash = HASHER()(item);
for (unsigned int i = 0; i < NUMHASHES; ++i) { size_t hash = std::hash
TCOUNTTYPE value = m_countGrid[i][hash%NUMBUCKETS];
if (value < lowestValue) m_countGrid[i][hash%NUMBUCKETS] = lowestValue; } } // else do a normal update else { // for each hash, find what bucket this item belongs in, and add the count to that bucket size_t rawHash = HASHER()(item); for (unsigned int i = 0; i < NUMHASHES; ++i) { size_t hash = std::hash
m_countGrid[i][hash%NUMBUCKETS] += count;
}
}
}
TCOUNTTYPE GetCount (const TKEY& item)
{
// for each hash, get the value for this item, and return the smalles value seen
TCOUNTTYPE ret = 0;
size_t rawHash = HASHER()(item);
for (unsigned int i = 0; i < NUMHASHES; ++i) { size_t hash = std::hash
if (i == 0 || ret > m_countGrid[i][hash%NUMBUCKETS])
ret = m_countGrid[i][hash%NUMBUCKETS];
}
return ret;
}
void CalculateVectorLengths ()
{
// if our vector lengths were previously calculated, no need to do anything
if (!m_vectorLengthsDirty)
return;
// calculate vector lengths of each hash
for (unsigned int hash = 0; hash < NUMHASHES; ++hash) { m_vectorLengths[hash] = 0.0f; for (unsigned int bucket = 0; bucket < NUMBUCKETS; ++bucket) m_vectorLengths[hash] += (float)m_countGrid[hash][bucket] * (float)m_countGrid[hash][bucket]; m_vectorLengths[hash] = sqrt(m_vectorLengths[hash]); } // remember that our vector lengths have been calculated m_vectorLengthsDirty = false; } friend float HistogramDotProduct (TType& A, TType& B, bool normalize) { // make sure the vector lengths are accurate. No cost if they were previously calculated A.CalculateVectorLengths(); B.CalculateVectorLengths(); // whatever hash has the smallest dot product is the most correct float ret = 0.0f; bool foundValidDP = false; for (unsigned int hash = 0; hash < NUMHASHES; ++hash) { // if either vector length is zero, don't consider this dot product a valid result // we cant normalize it, and it will be zero anyways if (A.m_vectorLengths[hash] == 0.0f || B.m_vectorLengths[hash] == 0.0f) continue; // calculate dot product of unnormalized vectors float dp = 0.0f; for (unsigned int bucket = 0; bucket < NUMBUCKETS; ++bucket) dp += (float)A.m_countGrid[hash][bucket] * (float)B.m_countGrid[hash][bucket]; // normalize dot product by dividing by the product of the vector lengths, if we should normalize if (normalize) dp /= (A.m_vectorLengths[hash] * B.m_vectorLengths[hash]); // keep the smallest dot product seen if (!foundValidDP || ret > dp)
{
ret = dp;
foundValidDP = true;
}
}
return ret;
}
private:
typedef std::array<TCOUNTTYPE, NUMBUCKETS> TBucketList;
typedef std::array<TBucketList, NUMHASHES> TTable;
TTable m_countGrid;
bool m_vectorLengthsDirty;
std::array<float, NUMHASHES> m_vectorLengths;
};
// Calculate ideal count min sketch parameters for your needs.
unsigned int CMSIdealNumBuckets (float error)
{
return (unsigned int)std::ceil((float)(c_eulerConstant / error));
}
unsigned int CMSIdealNumHashes (float probability)
{
return (unsigned int)std::ceil(log(1.0f / probability));
}
typedef std::string TKeyType;
typedef unsigned char TCountType;
typedef CCountMinSketch
typedef std::unordered_map<TKeyType, TCountType> THistogramActual;
//.”;
void WaitForEnter ()
{
printf("\nPress Enter to quit");
fflush(stdin);
getchar();
}
template);
}
}
void PopulateHistogram (THistogramEstimate &histogram, const char *text, bool conservativeUpdate)
{
ForEachWord(text, [&](const std::string &word) {
histogram.AddItem(conservativeUpdate, word, 1);
});
}
void PopulateHistogram (THistogramActual &histogram, const char *text)
{
ForEachWord(text, [&histogram](const std::string &word) {
histogram[word]++;
});
}
float HistogramDotProduct (THistogramActual &A, THistogramActual &B, bool normalize)
{
// Get all the unique keys from both histograms
std::set
std::for_each(A.cbegin(), A.cend(), [&keysUnion](const std::pair
{
keysUnion.insert(v.first);
});
std::for_each(B.cbegin(), B.cend(), [&keysUnion](const std::pair
{
keysUnion.insert(v.first);
});
// calculate and return the normalized dot product!
float dotProduct = 0.0f;
float lengthA = 0.0f;
float lengthB = 0.0f;
std::for_each(keysUnion.cbegin(), keysUnion.cend(),
[&A, &B, &dotProduct, &lengthA, &lengthB]
(const TKeyType& key)
{
// if the key isn't found in either histogram ignore it, since it will be 0 * x which is
// always 0 anyhow. Make sure and keep track of vector length though!
auto a = A.find(key);
auto b = B.find(key);
if (a != A.end())
lengthA += (float)(*a).second * (float)(*a).second;
if (b != B.end())
lengthB += (float)(*b).second * (float)(*b).second;
if (a == A.end())
return;
if (b == B.end())
return;
// calculate dot product
dotProduct += ((float)(*a).second * (float)(*b).second);
}
);
// if we don’t need to normalize, return the unnormalized value we have right now
if (!normalize)
return dotProduct;
// normalize if we can
if (lengthA * lengthB <= 0.0f) return 0.0f; lengthA = sqrt(lengthA); lengthB = sqrt(lengthB); return dotProduct / (lengthA * lengthB); } template(ret, “%0.2f%%”, error);
return ret;
}
int main (int argc, char **argv)
{
// settings
const bool c_normalizeDotProducts = false;
const bool c_conservativeUpdate = true;
// show settings and implication
printf("Dot Products Normalized? %s\n",
    c_normalizeDotProducts
    ? "Yes! estimate could be <= or > actual"
    : "No! estimate <= actual");
printf("Conservative Updates? %s\n\n",
    c_conservativeUpdate
    ? "Yes! Reduced error"
    : "No! normal error");
// populate our probabilistic histograms.
// Allocate memory for the objects so that we don't bust the stack for large histogram sizes!
std::unique_ptr
std::unique_ptr
std::unique_ptr
PopulateHistogram(*TheDinoDocEstimate, g_storyA, c_conservativeUpdate);
PopulateHistogram(*TheRobotEstimate, g_storyB, c_conservativeUpdate);
PopulateHistogram(*TheInternetEstimate, g_storyC, c_conservativeUpdate);
// populate our actual count histograms for comparison
THistogramActual TheDinoDocActual;
THistogramActual TheRobotActual;
THistogramActual TheInternetActual;
PopulateHistogram(TheDinoDocActual, g_storyA);
PopulateHistogram(TheRobotActual, g_storyB);
PopulateHistogram(TheInternetActual, g_storyC);
// report whether B or C is a closer match for A
float dpABEstimate = HistogramDotProduct(*TheDinoDocEstimate, *TheRobotEstimate, c_normalizeDotProducts);
float dpACEstimate = HistogramDotProduct(*TheDinoDocEstimate, *TheInternetEstimate, c_normalizeDotProducts);
float dpABActual = HistogramDotProduct(TheDinoDocActual, TheRobotActual, c_normalizeDotProducts);
float dpACActual = HistogramDotProduct(TheDinoDocActual, TheInternetActual, c_normalizeDotProducts);
printf("\"The Dino Doc\" vs ...\n");
printf("  \"The Robot\" %0.4f (actual %0.4f) Error: %s\n", dpABEstimate, dpABActual, CalculateError(dpABEstimate, dpABActual));
printf("  \"The Internet\" %0.4f (actual %0.4f) Error: %s\n\n", dpACEstimate, dpACActual, CalculateError(dpACEstimate, dpACActual));
if (dpABEstimate > dpACEstimate)
    printf("Estimate: \"The Dino Doc\" and \"The Robot\" are more similar\n");
else
    printf("Estimate: \"The Dino Doc\" and \"The Internet\" are more similar\n");
if (dpABActual > dpACActual)
    printf("Actual: \"The Dino Doc\" and \"The Robot\" are more similar\n");
else
    printf("Actual: \"The Dino Doc\" and \"The Internet\" are more similar\n");
// let the user do a search engine style query for our stories!
char searchString[1024];
printf("\nplease enter a search string:\n");
searchString[0] = 0;
scanf("%[^\n]", searchString);
struct SSearchResults
{
SSearchResults(const std::string& pageName, float rankingEstimated, float rankingActual)
: m_pageName(pageName)
, m_rankingEstimated(rankingEstimated)
, m_rankingActual(rankingActual)
{ }
bool operator < (const SSearchResults& other)
{
return m_rankingEstimated > other.m_rankingEstimated;
}
std::string m_pageName;
float m_rankingEstimated;
float m_rankingActual;
};
std::vector
// preform our search and gather our results!
std::unique_ptr
THistogramActual searchActual;
PopulateHistogram(*searchEstimate, searchString, c_conservativeUpdate);
PopulateHistogram(searchActual, searchString);
results.push_back(
SSearchResults(
"The Dino Doc",
HistogramDotProduct(*TheDinoDocEstimate, *searchEstimate, c_normalizeDotProducts),
HistogramDotProduct(TheDinoDocActual, searchActual, c_normalizeDotProducts)
)
);
results.push_back(
SSearchResults(
"The Robot",
HistogramDotProduct(*TheRobotEstimate, *searchEstimate, c_normalizeDotProducts),
HistogramDotProduct(TheRobotActual, searchActual, c_normalizeDotProducts)
)
);
results.push_back(
SSearchResults(
"The Internet",
HistogramDotProduct(*TheInternetEstimate, *searchEstimate, c_normalizeDotProducts),
HistogramDotProduct(TheInternetActual, searchActual, c_normalizeDotProducts)
)
);
std::sort(results.begin(), results.end());
// show the search results
printf("\nSearch results sorted by estimated relevance:\n");
std::for_each(results.begin(), results.end(), [](const SSearchResults& result) {
    printf("  \"%s\" : %0.4f (actual %0.4f) Error: %s\n",
result.m_pageName.c_str(),
result.m_rankingEstimated,
result.m_rankingActual,
CalculateError(result.m_rankingEstimated, result.m_rankingActual)
);
});
// show counts of search terms in each story (estimated and actual)
printf("\nEstimated counts of search terms in each story:\n");
std::for_each(searchActual.cbegin(), searchActual.cend(), [&] (const std::pair
{
// show key
printf("\"%s\"\n", v.first.c_str());
// the dino doc
TCountType estimate = TheDinoDocEstimate->GetCount(v.first.c_str());
TCountType actual = 0;
auto it = TheDinoDocActual.find(v.first.c_str());
if (it != TheDinoDocActual.end())
actual = it->second;
printf("  \"The Dino Doc\" %u (actual %u) Error: %s\n", estimate, actual, CalculateError(estimate, actual));
// the robot
estimate = TheRobotEstimate->GetCount(v.first.c_str());
actual = 0;
it = TheRobotActual.find(v.first.c_str());
if (it != TheRobotActual.end())
actual = it->second;
printf("  \"The Robot\" %u (actual %u) Error: %s\n", estimate, actual, CalculateError(estimate, actual));
// the internet
estimate = TheInternetEstimate->GetCount(v.first.c_str());
actual = 0;
it = TheInternetActual.find(v.first.c_str());
if (it != TheInternetActual.end())
actual = it->second;
printf("  \"The Internet\" %u (actual %u) Error: %s\n", estimate, actual, CalculateError(estimate, actual));
});
// show memory use
printf("\nThe above used %u buckets and %u hashes with %u bytes per count\n",
    THistogramEstimate::c_numBuckets, THistogramEstimate::c_numHashes, sizeof(THistogramEstimate::TCountType));
printf("Totaling %u bytes of storage for each histogram\n\n",
    THistogramEstimate::c_numBuckets * THistogramEstimate::c_numHashes * sizeof(THistogramEstimate::TCountType));
// show a probabilistic suggestion
float error = 0.1f;
float probability = 0.01f;
printf("You should use %u buckets and %u hashes for...\n", CMSIdealNumBuckets(error), CMSIdealNumHashes(probability));
printf("true count <= estimated count <= true count + %0.2f * Items Processed\nWith probability %0.2f%%\n", error, (1.0f - probability)*100.0f);
WaitForEnter();
return 0;
}
Links
If you use this in production code, you should probably use a better quality hash.
The rabbit hole on this stuff goes deeper, so if you want to know more, check out these links!
Wikipedia: Count Min Sketch
Count Min Sketch Full Paper
Count Min Sketch AT&T Research Paper
Another CMS paper
And another, with some more info like range query details
Next up I’ll be writing about hyperloglog, which does the same thing as KMV (K-Minimum Values) but is better at it! | http://blog.demofox.org/2015/02/22/count-min-sketch-a-probabilistic-histogram/ | CC-MAIN-2017-22 | en | refinedweb |
XmlSerializer Class
Serializes and deserializes objects into and from XML documents. The XmlSerializer enables you to control how objects are encoded into XML.
Assembly: System.Xml (in System.Xml.dll)
System.Xml.Serialization.XmlSerializer. For example, ASP.NET uses the XmlSerializer class to encode XML Web service messages. This is shown in the following example.
using System; using System.Xml; using System.Xml.Serialization; using System.IO; /* The XmlRootAttribute allows you to set an alternate name (PurchaseOrder) of the XML element, the element namespace; by default, the XmlSerializer uses the class name. The attribute also allows you to set the XML namespace for the element. Lastly, the attribute sets the IsNullable property, which specifies whether the xsi:null attribute appears if the class instance is set to a null reference. */ [XmlRootAttribute("PurchaseOrder", Namespace="", IsNullable = false)] public class PurchaseOrder { public Address ShipTo; public string OrderDate; /* The XmlArrayAttribute changes the XML element name from the default of "OrderedItems" to "Items". */ [XmlArrayAttribute("Items")] public OrderedItem[] OrderedItems; public decimal SubTotal; public decimal ShipCost; public decimal TotalCost; } public class Address { /* The XmlAttribute instructs the XmlSerializer to serialize the Name field as an XML attribute instead of an XML element (the default behavior). */ [XmlAttribute] public string Name; public string Line1; /* Setting the IsNullable property to false instructs the XmlSerializer that the XML attribute will not appear if the City field is set to a null reference. */ [XmlElementAttribute(IsNullable = false)] public string City; public string State; public string Zip; } public class OrderedItem { public string ItemName; public string Description; public decimal UnitPrice; public int Quantity; public decimal LineTotal; /* Calculate is a custom method that calculates the price per item, and stores the value in a field. */ public void Calculate() { LineTotal = UnitPrice * Quantity; } } public class Test { public static void Main() { // Read and write purchase orders. Test t = new Test(); t.CreatePO("po.xml"); t.ReadPO("po.xml"); } private void CreatePO(string filename) { // Create an instance of the XmlSerializer class; // specify the type of object to serialize. 
XmlSerializer serializer = new XmlSerializer(typeof(PurchaseOrder)); TextWriter writer = new StreamWriter(filename); PurchaseOrder po=new PurchaseOrder(); // Create an address to ship and bill to. Address billAddress = new Address(); billAddress.Name = "Teresa Atkinson"; billAddress.Line1 = "1 Main St."; billAddress.City = "AnyTown"; billAddress.State = "WA"; billAddress.Zip = "00000"; // Set ShipTo and BillTo to the same addressee. po.ShipTo = billAddress; po.OrderDate = System.DateTime.Now.ToLongDateString(); // Create an OrderedItem object. OrderedItem i1 = new OrderedItem(); i1.ItemName = "Widget S"; i1.Description = "Small widget"; i1.UnitPrice = (decimal) 5.23; i1.Quantity = 3; i1.Calculate(); // Insert the item into the array. OrderedItem [] items = {i1}; po.OrderedItems = items; // Calculate the total cost. decimal subTotal = new decimal(); foreach(OrderedItem oi in items) { subTotal += oi.LineTotal; } po.SubTotal = subTotal; po.ShipCost = (decimal) 12.51; po.TotalCost = po.SubTotal + po.ShipCost; // Serialize the purchase order, and close the TextWriter. serializer.Serialize(writer, po); writer.Close(); }.Value + "'"); } }
Available since 8
.NET Framework: Available since 1.1
Portable Class Library: Supported in portable .NET platforms
Silverlight: Available since 2.0
Windows Phone Silverlight: Available since 7.0
Windows Phone: Available since 8.1
This type is thread safe.
XmlAttributeOverrides
XmlAttributes
XmlSerializer
XmlText
XmlAttributes
System.Xml.Serialization Namespace
Introducing | https://technet.microsoft.com/en-us/library/system.xml.serialization.xmlserializer.aspx | CC-MAIN-2017-22 | en | refinedweb |
Yesterday I got a review copy of Automate the Boring Stuff with Python. It explains, among other things, how to manipulate PDFs from Python. This morning I needed to rotate some pages in a PDF, so I decided to try out the method in the book.
The sample code uses PyPDF2. I’m using Conda for my Python environment, and PyPDF2 isn’t directly available for Conda. I searched Binstar with
binstar search -t conda pypdf2
The first hit was from JimInCO, so I installed PyPDF2 with
conda install -c pypdf2
I scanned a few pages from a book to PDF, turning the book around every other page, so half the pages in the PDF were upside down. I needed a script to rotate the even numbered pages. The script counts pages from 0, so it rotates the odd numbered pages from its perspective.
import PyPDF2

pdf_in = open('original.pdf', 'rb')
pdf_reader = PyPDF2.PdfFileReader(pdf_in)
pdf_writer = PyPDF2.PdfFileWriter()

for pagenum in range(pdf_reader.numPages):
    page = pdf_reader.getPage(pagenum)
    if pagenum % 2:
        page.rotateClockwise(180)
    pdf_writer.addPage(page)

pdf_out = open('rotated.pdf', 'wb')
pdf_writer.write(pdf_out)
pdf_out.close()
pdf_in.close()
It worked as advertised on the first try.
Step 1: JAVA - Getting the Tools
STEP #1) Go to the website here and click the green download button
STEP #2) Save it to your desktop and click "OK"
STEP #3) When it finishes downloading, right click the file and select "Extract all"
STEP #4) You should see a new folder appear on your desktop, and make sure it has the file "eclipse.exe" in it.
Now double click on the eclipse.exe file with the icon of a solar eclipse. It will ask you to create a workspace when it opens. Enter "myWork" in the name bar, and click OK. You should then see a welcome screen; in the top right corner, click the "workbench" button. Now you should see something like the 1st image at the bottom.
After that click "File" > "New" > "Java Project".
In the name box, type "myProj", click Next, and then Finish. Now, in the project explorer (left of screen) you should see a folder called "myProj". The project explorer is where you can see all of your files. The area in the middle is the mainstage (coding section), and the right part is the Library, which gives us a list of functions and classes (we will talk about classes and functions later). The bottom part is the error list: if we have any run-time or code problems, they will be there. It is also the console window where output is displayed.
Finaly, right-click the "myProj" folder we created and go to "New" > "Class". In the name bar type "myFirst". Click finish. You should see the 2nd picture at the bottom for a closer look.
Now you are ready to start writing code in JAVA. In the next step we will write your first program, and discuss some JAVA elements.
Step 2: JAVA - Getting to Work With JAVA
In JAVA, everything is based on classes, sections of code with commands to execute. There are also these things called methods, smaller sections of code that contain functionality too. Usually there are multiple methods in a class that interact with each other based on the values of certain variables, and return a value. Those methods are packed into a class, and then classes with methods can interact with other classes and print the return value on the screen. There is also something called a main method, the method the compiler searches for first. Based on the instructions the main method gives, the compiler can move to different classes to execute different methods, or just stay in the main method.
For now lets just create a main method. In your "myFirst" class type the code in bold:
public class myFirst {
public static void main(String[] args)
{
}
}
Now lets discuss this code. Each method is based on the following syntax:
[accessSpecifier] [returnType] [methodName] ( [parameters] )
{
[methodBody]
}
The access specifiers in this case are "public" and "static". Any method can be "public" or "private". "Public" means the method can be accessed by any class. "Private" means that the method can be accessed by only the class it belongs to. I will explain the "Static" key word later.Here we made a public static main method with the name main, and parameters of "String[] args"(I won't explain the parameters now). In the method body we type all of the commands we wan't to execute. The method body's and class body's are always located between the curly braces.
NOTE: JAVA is a case sensitive language, so when you type commands, you must type them exactly as specified, or you will get an error!!!!!!
Now type the code in bold into your main method:
public class myFirst {
public static void main(String[] args)
{
System.out.println("Hello world!");
}
}
By now you should have the code in the 1st picture. Now go to "Run" > "Run", and click "OK" when the dialog box appears, and at the bottom(console window) you should see the text "Hello world!" printed. Check the second image for reference.
Here we used the command System.out.println to print a line on the screen. The "System", is a class containing many functions. The "out" was that we wanted to print OUT to the screen(or output) and the method "println" means; print line. Then in brackets, and in quotation marks(because this is a string value(value containing words)) we included the text we wanted to print,and ended the line with a semi-colon(;). NOTE: All lines in JAVA must end in semi-colons, except lines when we declare classes or methods.
We can also use "print", but the difference between "print" and "println" is that "print" prints text on a line, but "println" means to print the text, and end the line, meaning that if the next command is "print", the text will be printed on a new line.
At this point, I would like to apologize for the bad quality of my images.I have included some SELF-CHECK questions at the bottom. In the next step I will include the answers to them.In the next step I will also introduce you to the basic value types.
SELF-CHECK:
#1) Write a program to print the word "cheese" letter by letter.
HINT: Use the "print" command
#2) Use the "print" and "println" commands to experiment.
#3) What is wrong with this line of code:
System.out.println(Hello world!);
#4) What will you get if you run these lines of code:
System.out.print("h");
System.out.print("i");
System.out.println("per-");
System.out.print("son");
Step 3: JAVA - Basic Variable Types
#1) System.out.print("c");
System.out.print("h");
System.out.print("e");
System.out.print("e");
System.out.print("s");
System.out.print("e");
#2) No definite answer.
#3) The text in brackets was not in quotation marks.
#4) hi per-
son
There will also be self check questions at the end of this step.
There are many data types. In this instructable we will go over only the basic ones, and it will still take a couple of steps.
All variables work on the syntax below.
[dataType] [variableName] = [value];
ex.
int myNum = 8;
int type:
The "int" type, means integer. Works on the same syntax as above. There are no quotes needed to hold the value for any numerical type. Any int variables range from a minimum of -2,147,483,648 to a maximum value of 2,147,483,647. Most common integers will fit in this range, but if they don't use "long" instead.
ex.
int nine = 9;
long type:
The "long" type is a long version of the "int" command. Ranges from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807.
float type:
The "float" type is a floating-point number, which means it contains a decimal value.
double type:
The "double" type is a floating-point number, which can hold a bigger value.
string type:
The "string" type holds a text value. The text(value) must be inclosed in double quotes.
ex.
String greeting = "Hi blank";
Those were the basic data types. To print any of them just write the variable name in the parameters of the "println" method without quotes.
ex.
int myNum = 52930;
System.out.println(myNum + "Is the value of myNum");
The code above would print "52930 Is the value of myNum" on the screen. And by the way we used there a plus sign to combine a String to the line we were printing, so it would print a String value after the value of myNum. You can use the plus sign to add variables in the "println" command and add string values. Check out the two pictures at the bottom to see what I did.
This is section 1/2 of the number types, in the next section I will teach you some simple mathematical operators you can use on the variables.
Step 4: JAVA - Mathematical Operators
ex.
int sum = 5 + 579;
It is also used to combine strings in the "println" method.
ex.
System.out.println("This is " + "three strings " + "combined.");
Notice that before adding another string on the first and second strings I used a space at the end to make it look normal.
There is also the "-" sign as you have guessed, and it is used only to subtract numbers.
ex.
int subtraction = 9 - 6;
Also there is the multiplication operator, which is represented by a "*" in java(asterisk). It is used to multiply numbers.
ex.
int multiplication = 756 * 15;
And there is the division operator, which is represented by the "/"(slash). It is used to divide numbers.
ex.
int division = 50 / 5
Also there is a modulo operator, which is represented by the "%". Modulo is used to focus on the remainder of two numbers, if there is any.
ex.
int modulo = 10 % 9;
You do not need to add quotes for the numbers if you use the numbers in the "println" method, or they will be interpreted as string values.
ex.
System.out.println(6 + 7);
COMMON ERROR 1:
System.out.println("6" + "7" );
The code above returns 67, not 13. To avoid this delete the quotes.
The variable names can be used to identify values. Such as:
int myNum = 9;
System.out.println("The value of myNum is " + myNum);
As long as "myNum" doesn't have any variables around it, the program will print "The value of myNum is 9". You can also use the operators to perform operations in the "println" method to return quick results.
ex.
System.out.println(8 * 10);
My pictures will be basicly on everything we covered in this section, but don't forget to check them out. In the next step there will be little new material, but there will be a test that covers everything we learned so far. Here are the self check questions:
SELF-CHECK #1:
Write a program to calculate the modulo of 789 to 2, and print the result on the screen.
SELF-CHECK #2:
Describe the "int" data type, with at least the basic characteristic.
SELF-CHECK #3:
Create a string variable called "greeting" with a friendly message in it leaving out the name(ex. Hello _______). Then create a string called "name" with the value of your name. Then combine these variables and you should get your final message.
SELF-CHECK #4:
How do you represent multiplication in JAVA?(What sign do you use)
Step 5: JAVA - 1st Test / Commenting
#1) System.out.println(789 % 2);
#2) The "int" data type holds an integer.
#3) String greeting = "Hello ";
String name = "JAVA Teacher"
System.out.println(greeting + name);
#4) You use an "*"(asterisk)
OK, now for this instructable I will only include a little new material, and the link to my test.
In JAVA there is something called "commenting". That means to comment your work.
There are 2 types of comments you can make a single-line comment(see ex. 1) and a multi-line comment(see ex. 2) . The examples for these comments are included. For a single-line comment you have to put 2 slashes before the text, everything to the right of the slashes is considered a comment, and ignored by the JAVA compiler. A simple multi-line comment is in between the slash and 2 asterisks, and ends with the asterisk and a slash. An advanced multi-line comment discribes a method, we will go over this later.
JAVA ADVICE:
I suggest you to comment everything, even the simplest things. Because if someone is going through your work and may have trouble understanding your code. It might not be obvious that the variable d stands for dollars . And I also suggest you to save your work frequently.(I lost a lot of code because of this once)
ex. 1
int num2 = 78; //Create an integer, "num2" with the value of 78
ex. 2
/**
Create an integer, "num2" with the
value of 78
*/
int num2 = 78;
OK, good luck on the test. :-) (LINK AT BOTTOM, READ NOTE)
NOTE:
I really rushed through making the quiz, so on #2 I marked the wrong answer as right. The correct answer for that one was the last option. I am very sorry for this inconvenience.
The link to the test is here. There's a picture at the bottom of the welcome screen of the test too.Good luck and don't forget to read my next tutorial! :-) | http://www.instructables.com/id/JAVA-Introduction/ | CC-MAIN-2017-22 | en | refinedweb |
One of the major gaps for me was the
<use> element, as most SVG icon systems are built with
<use>.
I asked Michael if he thought better support might be coming for some of these features, but he showed me a much better way of working with it, circumventing this method entirely. We'll go over this technique so that you can get started writing scalable SVG Icon Systems in React, as well as some tricks I'd propose could work nicely, too.
Note: It's worth saying that use support was recently improved, but I've noticed it's spotty at best and there are other routing and XML issues. We'll show you another, cleaner way here.
What is <use>?
For those not familiar with how SVG icon systems are typically built, it works a little like this: the <use> element clones a copy of any other SVG shape element whose ID you reference in the xlink:href attribute, and you can still manipulate it without reiterating all of the path data. You may wonder why one wouldn't just use an SVG as an <img> tag. You could, but then every icon would be an individual request, and you wouldn't have access to change parts of the SVG, such as the fill color.
Using
<use> allows us to keep the path data and basic appearance of our icons defined in one place so that they could be updated once and change everywhere, while still giving us the benefit of updating them on the fly.
Joni Trythall has a great article about use and SVG icons, and Chris Coyier wrote another awesome article here on CSS-Tricks as well.
Here's a small example if you'd like to see what the markup looks like:
See the Pen bc5441283414ae5085f3c19e2fd3f7f2 by Sarah Drasner (@sdras) on CodePen.
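In case the embed doesn't load, the general pattern looks something like this (a rough sketch: the icon-umbrella id and the path data are placeholders, not the demo's actual markup):

```html
<!-- Define each shape once as a hidden symbol... -->
<svg style="display: none;">
  <symbol id="icon-umbrella" viewBox="0 0 32 32">
    <path d="…"/>
  </symbol>
</svg>

<!-- ...then stamp out as many copies as you need with <use>. -->
<svg class="icon"><use xlink:href="#icon-umbrella"></use></svg>
<svg class="icon" fill="orangered"><use xlink:href="#icon-umbrella"></use></svg>
```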
Why bother with SVG Icons?
Some of you at this point might be wondering why we would use an SVG icon system rather than an icon font to begin with. We have our own comparison on that subject, plus there are a ton of people writing and speaking about this right now.
Here are some of the more compelling reasons, in my mind:
If you’re like me and updating an enormous codebase, where in order to move over from an icon font to SVG you’d have to update literally hundreds of instances of markup, I get it. I do. It might not be worth the time in that instance. But if you’re rewriting your views and updating them with React, it’s worth revisiting an opportunity here.
Tl;dr: You don't need <use> in React
After Michael patiently listened to me explain how we use
<use> and had me show him an example icon system, his solution was simple: it’s not really necessary.
Consider this: the only reason we were defining icons to then reuse them (usually as
<symbol>s in
<defs>) was so that we didn’t have to repeat ourselves and could just update the SVG paths in one spot. But React already allows for that. We simply create the component:
// Icon
const IconUmbrella = React.createClass({
  render() {
    return (
      <svg className="umbrella" xmlns="http://www.w3.org/2000/svg" width="32" height="32" viewBox="0 0 32 32" aria-labelledby="title">
        <title id="title">Umbrella Icon</title>
        <path d="…"/>
      </svg>
    )
  }
});

// which makes this reusable component for other views
<IconUmbrella />
See the Pen SVG Icon in React by Sarah Drasner (@sdras) on CodePen.
And we can use it again and again, but unlike the older
<use> way, we don’t have an additional HTTP request.
Two SVG-ish things you might notice from the above example. One, I don’t have this kind of output:
<?xml version="1.0" encoding="utf-8"?>
<!-- Generated by IcoMoon.io -->
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
Or even this on the SVG tag itself:
<svg version="1.1" xmlns="http://www.w3.org/2000/svg" …
That’s because I’ve made certain to optimize my SVGs with SVGOMG or SVGO before adding the markup everywhere. I strongly suggest you do as well, as you can reduce the size of your SVG by a respectable amount. I usually see percentages around 30% but can go as high as 60% or more.
Another thing you may notice is I’m adding a title and ARIA tag. This is going to help screen readers speak the icon for people who are using assistive technologies.
<svg className="umbrella" xmlns="http://www.w3.org/2000/svg" width="32" height="32" viewBox="0 0 32 32" aria-labelledby="title">
  <title id="title">Umbrella Icon</title>
Since this id has to be unique, we can pass props to our instances of the icon, and the value will propagate to both the title and aria tag, like so:
// App
const App = React.createClass({
  render() {
    return (
      <div>
        <div className="switcher">
          <IconOffice iconTitle="animatedOffice" />
        </div>
        <IconOffice iconTitle="orangeBook" bookfill="orange" bookside="#39B39B" bookfront="#76CEBD"/>
        <IconOffice iconTitle="biggerOffice" width="200" height="200"/>
      </div>
    )
  }
});

// Icon
const IconOffice = React.createClass({
  ...
  render() {
    return (
      <svg className="office" xmlns="http://www.w3.org/2000/svg" width={this.props.width} height={this.props.height} viewBox="0 0 188.5 188.5" aria-labelledby={this.props.iconTitle}>
        <title id={this.props.iconTitle}>Office With a Lamp</title>
        ...
      </svg>
    )
  }
});

ReactDOM.render(<App/>, document.querySelector("#main"));
The best part, perhaps
Here's a really cool part of this whole thing: aside from not needing additional HTTP requests, I can also completely update the shape of the SVG in the future without any need for markup changes, since the component is self-contained. Even better than that, I don't need to load the entire icon font (or SVG sprite) on every page. With all of the icons componentized, I can use something like webpack to "opt-in" to whatever icons I need for a given view. With the weight of fonts, and particularly heavy icon font glyphs, that's a huge possibility for a performance boon.
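That opt-in might look something like this in practice (a hypothetical sketch: the file paths and view name are made up, just to show a view importing only the icon components it actually renders):

```jsx
// views/Forecast.js (hypothetical) -- only the icons this view renders
// are imported, so a bundler like webpack ships just these components
// with the page instead of a full icon font or sprite sheet.
import React from 'react';
import IconUmbrella from '../icons/IconUmbrella';
import IconOffice from '../icons/IconOffice';

const Forecast = React.createClass({
  render() {
    return (
      <div>
        <IconUmbrella umbrellafill="#333" />
        <IconOffice width="100" height="100" />
      </div>
    );
  }
});

export default Forecast;
```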
All of that, plus: we can mutate parts of the icon on the fly with color or animation in a very simple way with SVG and props.
Mutating it on the fly
One thing here you might have noticed is we’re not yet adjusting it on the fly, which is part of the reason we’re using SVG in the first place, right? We can declare some default props on the icon and then change them, like so:
// App
const App = React.createClass({
  render() {
    return (
      <div>
        <IconOffice />
        <IconOffice width="200" height="200"/>
      </div>
    )
  }
});

// Icon
const IconOffice = React.createClass({
  getDefaultProps() {
    return {
      width: '100',
      height: '200'
    };
  },
  render() {
    return (
      <svg className="office" width={this.props.width} height={this.props.height}>
        <title id="title">Office Icon</title>
        ...
      </svg>
    )
  }
});

ReactDOM.render(<App />, document.querySelector("#main"));
See the Pen SVG Icon in React with default props by Sarah Drasner (@sdras) on CodePen.
Let's take it a step further, and change out some of the appearance based on the instance. We can use
props for this, and declare some default props.
I love SVG because we now have a navigable DOM, so below let's change the color of multiple shapes on the fly with
fill. Keep in mind that if you're used to dealing with icon fonts, you're no longer changing the color with
color, but rather with
fill instead. You can check the second example below to see this in action, the books have changed their color. I also love the ability to animate these pieces on the fly, below we've wrapped it in a div to animate it very easily with CSS (you may need to hit rerun to see the animation play):
See the Pen SVG Icon in React with default props and animation by Sarah Drasner (@sdras) on CodePen.
// App
const App = React.createClass({
  render() {
    return (
      <div>
        <div className="switcher">
          <IconOffice />
        </div>
        <IconOffice bookfill="orange" bookside="#39B39B" bookfront="#76CEBD" />
        <IconOffice width="200" height="200" />
      </div>
    )
  }
});

// Icon
const IconOffice = React.createClass({
  getDefaultProps() {
    return {
      width: '100',
      height: '200',
      bookfill: '#f77b55',
      bookside: '#353f49',
      bookfront: '#474f59'
    };
  },
  render() {
    return (
      <svg className="office" xmlns="http://www.w3.org/2000/svg" width={this.props.width} height={this.props.height} viewBox="0 0 188.5 188.5">
        <path fill={this.props.bookfill} d="…"/>
        <path fill={this.props.bookside} d="…"/>
        <path fill={this.props.bookfront} d="…"/>
        <path className="cls-7" d="M60.7 69.8h38.9v7.66H60.7z"/>
        <path className="cls-5" d="M60.7 134.7h38.9v7.66H60.7z"/>
        ...
      </svg>
    )
  }
});

ReactDOM.render(<App />, document.querySelector("#main"));
.switcher .office {
  #bulb {
    animation: switch 3s 4 ease both;
  }
  #background {
    animation: fillChange 3s 4 ease both;
  }
}

@keyframes switch {
  50% { opacity: 1; }
}

@keyframes fillChange {
  50% { fill: #FFDB79; }
}
One of my awesome coworkers at Trulia, Mattia Toso, also recommended a really nice, much cleaner way of declaring all of these props. We can reduce the repetition of this.props here by declaring a const for each one, and then simply apply the variables instead:
render() {
  const { height, width, bookfill, bookside, bookfront } = this.props;
  return (
    <svg className="office" xmlns="http://www.w3.org/2000/svg" width={width} height={height} viewBox="0 0 188.5 188.5">
      <path fill={bookfill} d="…"/>
      <path fill={bookside} d="…"/>
      <path fill={bookfront} d="…"/>
      ...
We can also make this even more awesome by declaring
propTypes on the props we are using. PropTypes are super helpful because they are like living docs for the props we are reusing.
propTypes: {
  width: string,
  height: string,
  bookfill: string,
  bookside: string,
  bookfront: string
},
That way if we use them improperly, like in the example below, we will get a console error that won't stop our code from running, but alerts other people we might be collaborating with (or ourselves) that we're using props incorrectly. Here, I'm using a number instead of a string for my props.
<IconOffice bookfill={200} />
And I get the following error:
See the Pen SVG Icon in React with spread with error by Sarah Drasner (@sdras) on CodePen.
Even more slender with React 0.14+
In newer versions of React, we can reduce some of this cruft and simplify our code even more, but only if it's a very "dumb" component, e.g. it doesn't take lifecycle methods. Icons are a pretty good use case for this, since we're mostly just rendering, so let's try it out. We can be rid of
React.createClass and write our components as simple functions. This is pretty sweet if you've been using JavaScript for a long time but are less familiar with React itself: it reads like the functions we're all used to. Let's clean up our props even further and reuse the umbrella icon just as we would on a website.
// App
function App() {
  return (
    <div>
      <Header />
      <IconUmbrella />
      <IconUmbrella umbrellafill="#333" />
      <IconUmbrella umbrellafill="#ccc" />
    </div>
  )
}

// Header
function Header() {
  return (
    <h3>Hello, world!</h3>
  )
}

// Icon
function IconUmbrella(props) {
  const umbrellafill = props.umbrellafill || 'orangered'
  return (
    <svg className="umbrella" xmlns="http://www.w3.org/2000/svg" width="32" height="32" viewBox="0 0 32 32" aria-labelledby="title">
      <title id="title">Umbrella</title>
      <path fill={umbrellafill} d="…"/>
    </svg>
  )
}

ReactDOM.render(<App />, document.querySelector("#main"));
See the Pen SVG Icon in React by Sarah Drasner (@sdras) on CodePen.
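One detail in IconUmbrella worth calling out: the props.umbrellafill || 'orangered' fallback behaves differently from getDefaultProps, because any falsy value (undefined, an empty string, 0) falls through to the default. A quick plain-JavaScript sketch of that pattern in isolation:

```javascript
// The fallback pattern from IconUmbrella, isolated:
// any falsy prop value falls through to the default.
function umbrellaFill(props) {
  return props.umbrellafill || 'orangered';
}

console.log(umbrellaFill({}));                       // 'orangered'
console.log(umbrellaFill({ umbrellafill: '#333' })); // '#333'
console.log(umbrellaFill({ umbrellafill: '' }));     // 'orangered' (falsy!)
```

If you need an empty string to count as a deliberate value, stick with getDefaultProps (or an explicit undefined check) instead.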
SVG icon systems are beautifully simple and easily extendable in React, have fewer HTTP requests, and are easy to maintain in the future, due to the fact that we can completely update the output without any repetitive markup changes.
Uh, so you’re recommending using JavaScript to insert an entire SVG literally every single time that an icon is used? So, if you have a table with 100 records, and each record has a pencil icon, a trash icon, and a status icon, you’re going to have JavaScript literally insert 300 SVGs, each complete with all of the path information for that icon?
And how, exactly, is this better than
<use> (let the browser do the duplicating internally instead of emulating it with JS) or even an old-school background-image SVG spritesheet (forget about duplicating anything, just render an image that's already in memory, simply at a different offset)?
I mean, I get it, it makes sprites easy to customize, but I simply don’t see this ever scaling to any kind of real use. I hope I never have to maintain a codebase that decided to wrap all of its individual sprites in JavaScript components.
No, this is not what I’m recommending. I’m showing you how to use SVG Icon Systems if you’re using React already. Not to convert a view that’s not in React to React in order to use SVG Icons.
I also explain why we’re not using use in this instance in the article. And also why, if you’re using React the way that I’m describing, you don’t have to load every instance for every SVG Icon into the page.
After re-reading the criticism brought up by Agop about 15 times now, I’m not sure Sarah actually addressed it.
As I am reading, Agop seems to be asking how valid this sort of mechanism would be in a real-world website. The example provided, which I'm sure we've all designed before, is a tabular set of records with icons for indicating actions (edit, delete, etc.).
This is a usage example where sprites or even font icons shine. How well would this article's solution function here? It seems like this method would require the SVG to be rendered uniquely for every iteration… so it seems like there would be a scaling issue if your website / app / whatever was heavily reliant on SVG-generated icons as a UI/UX element.
Or is the argument that one would simply not use React for a situation such as the one presented by Agop?
Just as food-for-thought here, GitHub recently switched to SVG icons:
And as an aside, I'm a little concerned about the tone here. There are ways to bring up concerns without a rude tone.
This is really about: <svg><use> vs <svg><path>
It would be interesting to create some TEST PAGES! 300 icons sounds like a reasonable number to test. Maybe 10 unique icons, used 30 times each.
Page 1: SVG sprite with
<symbol>s for each unique icon, and
<use> used all 300 times to draw them
Page 2: All 300 icons individually drawn with
<svg><path> ...
But then what are the questions? Maybe…
Uh, why would anyone recommend doing that?
Ok, let's say you have different icons for edit, post, delete. If you're using use in an SVG icon system, you're still rendering the path data in the shadow DOM, even with use. If you're using an icon font, you're rendering all of the glyphs for the whole set onto that view whether you need them or not. This is typically many more icons than you need to render that table.
Each technique comes with its correct uses and overhead and it is the developer’s job to pick the tool accordingly. It depends on how many icons your site has and how big that icon font would be. It also depends on how many times you’re using the icons on a given view. I would say the table is actually a smaller use-case across the web than using icons in a menu or a view, and each of these would warrant different techniques. In the case of the table, it may be that an SVG sprite is better, if you don’t need to change the color. If you need to change the color or animate it, that wouldn’t be a great use case anymore. We, as web developers, should be using the right tool for the job, but we should understand the parameters around each before deciding.
Agop mentions: “And how, exactly, is this better than use (let the browser do the duplicating internally instead of emulating it with JS)” – which is addressed first and foremost in the article, which is why I didn’t go into more detail here- it’s already covered. I also explain that with something like webpack you wouldn’t need to load all of the svg and js data into every view in the article.
One thing to understand here is that the article is not prescribing that we throw out icon fonts or SVG use in order to create new SVGs in React- it’s proposing a solution for the lack of support of use in React and how to overcome that in order to create an icon system.
This isn’t valid anymore. I think as of 0.14 or something like that. See
<use> usage within React in my jsperf tests below.
Agop, I understand your concerns about using JavaScript to create views that could and should have already been created in HTML. One thing that is great about JS (and React) is that all this can be rendered with JS on the server. This is a win for reusable code and fast server rendering. Using the technique explained in this article doesn't prevent you from server rendering.
I have recently done something very similar, but instead of wrapping the SVG in JS, I got React to import the SVG file. This requires more webpack setup (SVG-inline-loader) but means the SVG stays within an .svg file.
This combination means that we get the benefits of inline SVG (style and transition changes), server-side rendering, and client-side caching, which I don't think has been so eloquently possible before.
If you are using
<svg><use> with svg4everybody to ajax your svg sprites from a CDN and get around CORS issues (as recommended here), you end up writing a bunch of SVGs to the DOM anyway. I like the suggestion of building it into the JS payload and removing svg4everybody from the equation.
If you happen to be using browserify…
I somewhat agree with Agop that this seems like a better solution for more complex, individual illustrations rather than icons used all over a page.
In a React project I’m working on, we’re using a more generic icon component that takes an icon name and size as props (like so:
<Icon name="umbrella" size="big" />), and internally uses the
use tag, which is no problem in React 0.14 (and before that with
dangerouslySetInnerHTML, ahem). For cases that require more control than that, your solution looks like a great approach.
Also, a little heads-up: The default values for width & height in your
getDefaultProps() function include the "px" CSS unit, which isn't valid if you're going to use it in an HTML attribute.
Hi diondiondion,
Great point about the px! Thanks, updated.
So, aside from any use issues, the point of interest here in use vs path is whether use is really necessary. It seems to me that if you're still rendering the shadow DOM with use, it's worth discerning whether or not it's just more work for the same thing. Consider this: if you load an SVG spritesheet and then also describe the svg icon, you could be loading more than what is necessary. If you're loading just the svg icon component, you can theoretically opt in to the icons you need for a given view.
In case you're curious if other people are using SVGs inline in this manner, GitHub came out with an article recently detailing how they did just that.
I certainly think there are instances where this approach isn’t ideal, but I’m not sure that the reasons stated here are taking into considerations the common advantages and disadvantages, and I do still think this could offer a performance and workflow boon, in the case of React and your typical website. But, above all test your use case! Test all the things!
Hi Sarah,
it’s true that in some cases you might end up loading resources more efficiently by including the path directly. In practice I doubt that either of the two approaches is much better or worse in terms of performance than the other. A potentially big advantage of
use is that modern browsers can cache external svg sprites for future page loads. In browsers that don't support that (most notably IE & early Edge), they can be inserted into the page using a polyfill like svgxuse.
But that actually reminds me of what might just be the biggest advantage of using
path: its simplicity. It’ll “just work” in all reasonably capable browsers, with no need for polyfills or other workarounds, which is pretty sweet.
There’s another advantage of using
usewith external sprites which is quite specific to the product I’m working on, but I’ll mention it anyway: Being able to simply point to a different svg file to change all icons without having to recompile the app. Our app has a different theme, logo & sometimes icon set for each customer, so keeping these things separate & “abstracted away” makes sense for us, but obviously it’s not a requirement shared by many products.
It really seems to be mostly a matter of personal preference about workflow and how tightly you want to couple your icons & react components.
Hey diondiondion,
For sure, caching is probably the strongest argument for use here. That’s definitely a good thing to bring up. Is it better than only having to load a couple of icons inline per view at typically >1kb apiece? I’m not sure. That, it seems, would totally depend on the system at hand.
Your use case sounds pretty interesting! With all things being equal, a workflow and abstraction boon while loading a whole different set per customer seems totally appropriate. I can imagine a scenario where you’d be able to bundle components for each customer as well, but that might take more configuration than would make sense. In your case, the sprite sheet might be less maintenance overall, so agreed, it would be a better choice.
I adore this post, Sarah!
One of the reasons I’ve been so bullish about SVG icons is their versatility. I think it’s awesome (and super healthy) for developers to have a wealth of options… client-side via
use, server-side (a la GitHub’s latest iteration of octicons), or integrated into a project’s JavaScript framework (as you’ve detailed so helpfully). Heck, projects with modest icon needs might be just fine using
img!
It’s easy to poke holes in any of these solutions for one reason or another. Posts like this one help teams like mine determine what’s best for any given project. SVG is just too complex, powerful and fun to settle for any “one size fits all” solution!
Thanks Tyler! I couldn’t agree more.
I think this is what you were going for. The
id attribute was missing; pretty sure it's required.
Oh, interesting. Thank you, I didn’t realize that. Updated.
Great article, but the accessibility part is a bit fucked up right now.
You’re using
aria-labelledby but it's not necessary and you're using it in a broken way: you're using one id value ("title") and React will not change it for each use of the component, so you end up with duplicate
ids all over your page (if you use that icon component more than once, and/or if you use the same id in other icon components).
My advice: remove the
id attribute and the aria-labelledby attribute altogether, because current screen readers do not need it to read the <title> element for inline SVG.
Also, adding a
<title> element to your SVG icon does not cater for accessibility. Your icon component should be able to do two things:
Hide the icon completely, because the meaning is already spelled out in the neighboring text. That would be:
<svg aria-<path /></svg>
Provide accessible text for the icon, with the ability to change that text for each use, and allowing for internationalization:
<svg><title>[This text must not be always the same]</title></svg>.
This can be done by outputting the
aria-hidden and the <title> elements conditionally, depending on whether the code using the component provides accessible text or not (no accessible text provided:
aria-hidden="true").
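A minimal, framework-agnostic sketch of that conditional logic (the helper name and return shape are my own, not from the component in the article):

```javascript
// Compute the accessibility-related props for an inline SVG icon.
// With accessible text: expose role="img" plus the text (which the
// component would render into a <title> child). Without it: hide the
// icon from assistive technology entirely.
function svgA11yProps(accessibleText) {
  if (accessibleText) {
    return { role: "img", title: accessibleText };
  }
  return { "aria-hidden": "true" };
}
```

In a React component you would spread these props onto the <svg> element and render a <title> child only when the text is present.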
As a general rule, accessible text is not a description of the graphics, so “Umbrella icon” does not work well as relevant accessible text. It should instead sum up the meaning you’re trying to get across. For instance, if you have <button><IconUmbrella/></button>, what is the role of that icon-only button? Does it show a weather forecast? Then you should have: <button aria-label="Show weather forecast"><IconUmbrella/></button> or <button><IconUmbrella customthingforalttext="Show weather forecast" /></button>. You want a screen reader saying “Show weather forecast, button”, not “Umbrella icon, button”.
Also since you removed the alt text from the graphical element, you can translate it if needed, or change it if you’re using the same graphical element in different places with different meanings. You don’t want a screen reader to read “Umbrella Icon” when you actually need to convey “Afficher les prévisions météo”.
Most of what @fvsch says is true. Unfortunately, this is not (yet):
Which is why most accessibility-oriented SVG examples use a redundant aria-labelledby attribute. The value of this attribute must be one or more valid ID references to other elements, and of course each element’s ID must be unique to the page. For an icon with no meaningful child content, you usually also want to give it an explicit role="img" so that the browser treats it as a single block, hiding the child markup.
Further information on the other points:
SVG 2 allows you to incorporate internationalization in your SVG code with multiple <title> elements distinguished by their lang attribute. However, as far as I know that isn’t implemented yet in any browsers, so authors still need to select the correct language variant themselves.
Even if multi-lingual SVG alternative text was supported, it wouldn’t address the other issue fvsch brings up: the correct accessible text for an icon depends on how that icon is being used in the page. If the icon is inside a button/link, it is more important to describe that element’s function than the icon itself.
Thank you both! I’m learning a ton here. Ok, so here’s what I’m going to do- I’ve updated the codepen with unique props for the title and aria-labelledby and will update the article as well. I’ll probably close the comment thread to this post soon and then write a new article pointing back to this one with everything I’ve learned here through the comments in this post.
I appreciate the feedback very much.
Amelia, do you know which screen readers need the aria-labelledby attribute to read the <title> element? In my tests with fall 2015 JAWS, NVDA and VoiceOver, it wasn’t needed? Windows-Eyes or ZoomText maybe, or older versions of JAWS, NVDA or VoiceOver?
Since <title> support was good in my tests for in-page SVG elements (separated SVG documents is a different story), I tend to avoid adding extra aria-labelledby attributes, which get wrong or outdated values really quickly in practice.
If one wants to add the extra aria-labelledby and id, it’s probably better to generate a random UUID in componentWillMount and use that, so each instance has a working aria-labelledby.
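A sketch of that idea (illustrative names; a real implementation might use an actual UUID library instead of a module-level counter):

```javascript
// Each icon instance grabs a fresh id once (e.g. in componentWillMount)
// and uses it for both the <title id=...> and aria-labelledby values,
// so no two icons on the page share an id.
let nextIconId = 0;

function allocateIconId(prefix) {
  nextIconId += 1;
  return prefix + "-" + nextIconId;
}
```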
Keep the React posts coming Sarah! Your article ‘ I Learned How to be Productive in React in a Week and You Can, Too‘ jumped started me into learning the library.
Thanks Marc! Happy that it’s been useful :)
Guys, let’s take a step back here and really look at what we’re doing.
If you render the SVGs directly on the page, say, server-side (like GitHub, yay!), then yes – there might not be much of a difference. The browser might be more efficient doing its own thing with the shadow DOM and <use>, or it might be quicker with the full SVGs already in place. I’m placing my bets on it being more efficient with <use> because it would know that the structure of the SVG is the same each time, allowing for reuse of existing resources related to parsing and rendering. But let’s not diverge…
This article is about using SVGs with React, which is why my other comment opened like this:
Using JavaScript is the problem here. React just happens to be one way of using JavaScript to do this. And, in fact, it is a particularly terrible way. Why is it a terrible way?
It’s a terrible way because using React like some kind of SVG template system is awfully slow, especially for complex SVGs! Here, I cobbled together a quick little test on jsperf:
Notice how the <use> method is way, way faster (like, 500% faster). This should not be any sort of surprise. The <use> method creates one element. The full SVG method creates, well, a lot.
Again, this isn’t necessarily about using a full SVG vs. using <use> in regards to writing HTML – it’s simply about using React to construct an SVG. It makes sense to do that when you need a heavily customized SVG here and there (just treat it like a component, duh!), but not when it comes to using SVG icons in general, regardless of whether or not you also happen to be using React.
This is primarily a response to this part of the article, which seems to imply “hey, if you’re using React, no need for <use>, just use React components!”:
Apologies if the tone of my comments comes off as rude. It’s not written to come off as rude – simply as strong reminder that:
Just because you can, doesn’t mean you should.
:)
Great tests, thanks for your hard work there, those are interesting indeed!
Ok, so that makes a lot more sense than what I thought you were saying previously. And it brings up a decent and thoughtful point.
That is certainly a lot faster. But here’s where I have a concern, and I’ll just talk about the use case I have so you can see where I’m coming from. Let’s say you manage a huge site with 50 or so icons. Let’s say you have a view that only needs 3 of said icons. I can see an instance here where not loading all of the 50 either in an SVG spritesheet or in an icon font would come in handy, and the savings there might make it worthwhile.
I can also see an instance, where, like above, I have a more complex SVG that I want to change, either with animation or just to update pieces of it. Let’s say all of my views are in React. I think, in that instance, this approach might still work nicely.
Your tests do provide a really great demonstration of why we shouldn’t do it this way, though. I think you make a great and solid point here.
I can see it both ways, and would still say, choose the right tool for the job.
Sarah, I think we’re pretty much on the same page :)
Like you said, if you have a special SVG – not a fire-and-forget icon, which could be on the page hundreds of times – by all means, make it a component. I’m all for that.
As for the other use case:
Yes, this could certainly make sense. Just keep in mind that an SVG spritesheet with 50 or so icons, after gzip compression, could be mere kilobytes. And, with fingerprinting, you could serve this spritesheet just once with a far-future expiration header. The browser would cache it basically forever, or until you change it (and thus, the filename changes). Then the question becomes: what’s faster, server rendering full inline SVGs and browser parsing each and every one, or using a single SVG which the browser already has in its cache?
Again, like you said: use the right tool for the job :)
I think you make solid points about the caching, and we could probably go back and forth on this forever, but, if you’re properly caching and gzipping your pages and views as well, I’m still not sold. It’s hard to portray a 50 SVG spritesheet as mere kilobytes without also giving a nod to the fact that a typical inline SVG icon, even rendered with React, is likely less than 1kb. If you have a site that goes between a search view and details view without typically needing all of the other icons, I’m still not sure :) At this point we might also be debating a very small difference in performance.
@Sarah
Here’s my lazy web question. Are there any tools you can recommend for automatically generating the individual icon components ( preferably with webpack )?
I have been avoiding trying to figure out in React for a while so this post is perfect! Thanks
Recently I found a way to do this.
Demo:
svgo-loader or svg-simplify could clean up the svg before using the transformer.
For single color icons it will be easy to use, but for colorful icons like the one this post shows, you may have to compose shapes yourself.
THIS IS GREAT…. :)
Great info!
This is the best React-centered article I’ve read yet!
I say this because I’m a Frontender who is interested in React but not yet tried messing around with it much. While I may not setup my icon system like this I feel this blog post ranks very high in terms of communicating some basic React concepts that actually gets me excited about React. The writing has opened my mind up to the library… Thanks Sarah ;)
Hey, please check our React SVG icon live generator that a colleague of mine created. Any feedback welcome
The aforementioned Github article sparked a discussion for our team to look into using SVG as well.
From my initial tests direct SVG rendering is so much better than icon fonts. What I am doing in our case is just including the full SVG on the page without any Javascript. We use an “include” in the templating language we use (Jade), so syntactically we don’t see the full SVG code on the page. It looks a bit like this:
I am happy with this because if you export a 16×16 SVG from Sketch and you make sure it renders as a 16×16 block with CSS you are 100% sure that it is sharp. Whereas if you export a 16×16 SVG and then mangle it through an icon font generator you are never really sure that it’s sharp.
I think me and my team have spent hours wrestling with icon fonts. This ranges from tasks like debugging icon font output, to telling people how to set up their system to include fontforge/fontcustom, to tweaking SVG code to render well as an icon font. I hope these woes will be over now with this new solution.
Some tips: it’s best that you export your SVG icons at a certain size and define it explicitly:
You can then “color” SVGs with CSS as follows:
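The two code samples referenced above did not survive extraction; a plausible sketch of what they showed (the .icon class name is an assumption):

```css
/* Size the icon explicitly so a 16×16 export renders pixel-sharp. */
.icon {
  width: 16px;
  height: 16px;
}

/* "Color" the SVG by setting the fill of its shapes from CSS. */
.icon path {
  fill: #c00;
}
```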
Great write up Sarah! I’m using a similar approach in some Rails projects with
I did an SVG icon system recently, that had to be compatible with React and plain JavaScript (React is used for multiple SPAs embedded in pages, there’s also legacy JS code). I ended up using webpack-svgstore-plugin for generating the SVG bundle and then consuming it from React like:
<SvgSymbol name={'icon-name-is-the-src-file-name'} />
or from server side templates (Jinja2 based) like:
{% svg_symbol('icon-name') %}
Helper tags simply emit <svg><use…> stuff and reference icons like “/path/to/bundle.svg#icon-name”.
This can be easily extended to support multiple bundles via an optional param bundle={'bundle-name'}
A problem with directly importing SVGs in React and relying on Webpack or something to bundle them into JS is the resulting icon system is usable only from React land. I see vendor lock-in for such trivial thing as a bigger problem than the performance issues mentioned above.
I think all my Pure components had their feelings hurt when you called them “dumb” ;(
Hey Sarah,
thanks for the writeup and the discussions with Agop.
One quick point about #a11y here. In the described scenario you’re bringing in several SVG elements which all include a title element with the id “title”.
I just tried it with VoiceOver. Assuming you’ve got several different icons included like this in a page and all of them include
VoiceOver will read the title of the first appearing icon for all of them, as IDs should be included only once inside of a document. Setting unique IDs is needed for this in combination with aria-labelledby to work properly.
Thanks. :)
Hi Stephan,
Yeah, if you look up in the comments and the post, this was all addressed and the post was updated. I can update all of the other codepens, though, to avoid future confusion. I only updated the one that was the reference for the accessibility.
Thanks for looking out!
Hi Sarah Thanks for the share
Is there a way to display the current function in the statusbar? There are times where I'm jumping between parts of a file and it would be handy to know which function I'm currently working within.
If that's not currently available, how hard would it be to make a plugin that does this?
it's not currently possible but can be done very easily with a plugin. If you're looking to make it yourself, here's ST2's api:
I'm not that familiar with Python, but maybe this would be a good reason to get into it some more. It's going to be slow going, but I may have the plugin ready by the end of the year.
Sounds good. If you need help, just ask
OK, first question. I've figured out how to display a string in the statusbar (easiest part of the plugin), but I'm completely lost on how to gather functions for the given view. I'd have to figure out the language and then scan for functions based on that language.
Would it be possible to piggyback on SublimeCodeIntel somehow? Keep in mind I'm a complete Python noob.
Since you're new to Python, don't try to go big. It doesn't have to work with every language right away. What language do you use primarily?
Right now I'm mostly concerned with JavaScript and PHP.
So I did some playing around and made this:
import sublime, sublime_plugin

class FunctionInStatusListener(sublime_plugin.EventListener):
    # Note: the ST2 callback is spelled on_deactivated, and erase_status
    # is a method on the view.
    def on_deactivated(self, view):
        view.erase_status('function name')

    def on_selection_modified(self, view):
        sel = view.sel()[0]
        functionRegs = view.find_by_selector('entity.name.function.js')
        for r in reversed(functionRegs):
            if r.a < sel.a:
                view.set_status('function name', view.substr(r))
                break
It only works for javascript at the moment and is very hackish but you can play around with it. I'm going to bed
Very nice and very much appreciated! I took what you started and changed it to:
import sublime, sublime_plugin

class FunctionInStatusListener(sublime_plugin.EventListener):
    def on_deactivated(self, view):
        view.erase_status('function name')

    def on_close(self, view):
        view.erase_status('function name')

    def on_activated(self, view):
        cf = self.get_current_function(view)
        if cf is None:
            view.erase_status('function name')
        else:
            view.set_status('function name', 'Function: ' + cf)

    def on_selection_modified(self, view):
        cf = self.get_current_function(view)
        if cf is None:
            view.erase_status('function name')
        else:
            view.set_status('function name', 'Function: ' + cf)

    def get_current_function(self, view):
        sel = view.sel()[0]
        functionRegs = view.find_by_selector('entity.name.function')
        cf = None
        for r in reversed(functionRegs):
            if r.a < sel.a:
                cf = view.substr(r)
                break
        return cf
That seems to work for JS, PHP and Python for me. It's rough, but it does the job well enough for me!
Very nice nizur - I can see me using some of this code in my own plugin now to link todo's to function names!
Feel free to use any of the code, tanepiper, since I just built off of C0D312's example. I'm a complete Python noob too, so the code could be better I'm sure.
Thanks for making this! I've been wanting this ever since switching to ST2 last summer.
docs.python.org/library/bisect.html
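The bisect link above presumably suggests replacing the plugin's linear reversed() scan with a binary search over the (already sorted) function-region start offsets. A self-contained sketch of that lookup:

```python
import bisect

def find_enclosing_function(starts, names, cursor):
    """Return the name of the last function whose start offset is strictly
    before the cursor, or None. `starts` must be sorted ascending, which
    matches the order view.find_by_selector() returns regions in."""
    i = bisect.bisect_left(starts, cursor) - 1
    if i < 0:
        return None
    return names[i]
```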
on_selection_modified is a listener you should try to avoid.
Add a print and look into the console; you will notice this listener is dispatched hundreds of times when selecting big portions of text, making the editor crawl.
I suggest you listen for the click event. Inspired by your code, I use this:
...\Sublime Text 2\Packages\User\Default.sublime-mousemap
[
    {
        "button": "button1", "count": 1,
        "press_command": "drag_select",
        "command": "status_bar_function"
    }
]
...\Sublime Text 2\Packages\User\status_bar_function.py
import sublime_plugin

class StatusBarFunctionCommand(sublime_plugin.TextCommand):
    def run(self, edit):
        view = self.view
        region = view.sel()[0]
        functionRegs = view.find_by_selector('entity.name.function')
        for r in reversed(functionRegs):
            if r.a < region.a:
                view.set_status('function', view.substr(r))
                return
        view.erase_status('function')
I love this -- you should package it up as a full-blown plug-in! It's one of the things I missed most after moving over from Notepad++
github.com/SublimeText/CurrentFunction
Feel free to improve it.
Ironically, I created github.com/akrabat/SublimeFunctionNameDisplay Saturday too...
Regards,
Rob...
Good. Let's join forces =) using your package.
Thanks for the updates. I'm learning more about ST2 plugin development just by looking at your code!
Rob..
Nice, I'd like to expand this to css selectors as well. Especially when in scss files, the nesting can get out of control!
We are using custom fonts in our project. It works well in Xcode 5. In Xcode 6, it works in plain text, attributed string in code. But those attributed strings set in storyboard all revert to Helvetica when running on simulator or device, although they look all right in storyboard.
I'm not sure if it's a bug of Xcode 6 or iOS 8 SDK, or the way to use custom fonts is changed in Xcode 6 / iOS 8?
The fix for me was to use an
IBDesignable class:
import UIKit

@IBDesignable class TIFAttributedLabel: UILabel {

    @IBInspectable var fontSize: CGFloat = 13.0
    @IBInspectable var fontFamily: String = "DIN Light"

    override func awakeFromNib() {
        var attrString = NSMutableAttributedString(attributedString: self.attributedText)
        attrString.addAttribute(NSFontAttributeName,
                                value: UIFont(name: self.fontFamily, size: self.fontSize)!,
                                range: NSMakeRange(0, attrString.length))
        self.attributedText = attrString
    }
}
Giving you this in the Interface Builder:
You can set up your attributedstring just as you normal do, but you'll have to set your fontsize and fontfamily once again in the new available properties.
As the Interface Builder is working with the custom font by default, this results in a what you see is what you get, which I prefer when building apps.
Note
The reason I'm using this instead of just the plain version is that I'm setting properties on the attributed label like the linespacing, which are not available when using the plain style. | https://codedump.io/share/hFYm2CleqlVz/1/attributed-string-with-custom-fonts-in-storyboard-does-not-load-correctly | CC-MAIN-2017-22 | en | refinedweb |
I've been trying to add a has_many, through association between two models; 'Space' and 'Question'. Within space, you are able to add questions which will be listed to add. I created a spaceQuestion model for the association.
Currently, I am able to see a list of all the questions to add to a space, but when I try adding a space I get: undefined method `id' for nil:NilClass and it complains about this line: @space_question = SpaceQuestion.new(question_id: params[:question_id], space_id: @space.id)
Here's my code:
spaces_controller.rb:
def questions
@space_questions = @space.questions
@other_questions = (Question.all - @space_questions)
end
def add_question
@space_question = SpaceQuestion.new(question_id: params[:question_id], space_id: @space.id)
respond_to do |format|
if @space_question.save
format.html { redirect_to questions_tenant_space_url(id: @space.id, tenant_id: @space.tenant_id)
#notice: "User was successfully added to space"
}
else
format.html { redirect_to questions_tenant_space_url(id: @space.id, tenant_id: @space.tenant_id),
error: "Question was not added to space" }
end
end
end
class Space < ActiveRecord::Base
belongs_to :tenant
belongs_to :department
has_many :artifacts, dependent: :destroy
has_many :user_spaces, dependent: :destroy
has_many :users, through: :user_spaces
has_many :space_questions, dependent: :destroy
has_many :questions, through: :space_questions
end
class Question < ActiveRecord::Base
belongs_to :user
belongs_to :department
has_many :space_questions
has_many :spaces, through: :space_questions
validates_presence_of :title, :details, :department
end
class SpaceQuestion < ActiveRecord::Base
belongs_to :space
belongs_to :question
end
<% @other_questions.each do |other_question| %>
<tr>
<td><%= other_question.department.name %></td>
<td><%= link_to other_question.title, question_path(other_question) %></td>
<td><%= other_question.user.id %></td>
<td>
<%= link_to 'Add',
add_question_tenant_space_path(id: @space.id, tenant_id: @space.tenant_id, question_id: other_question.id),
:method => :put,
:class => 'btn btn-xs btn-success' %>
</td>
</tr>
<% end %>
Unless you've created a before hook in that controller, you need to define the @space variable, which is not being done in the add_question method.
The error you're seeing is precisely what it means. @space is nil and you're calling @space.id; since the NilClass doesn't have a method id, it throws an error.
if you do have a hook defining that variable, please edit that code in
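A sketch of the missing hook (assuming the space id arrives as params[:id]; adjust to match the actual routes):

```ruby
class SpacesController < ApplicationController
  # Load @space before every action that expects it.
  before_action :set_space, only: [:questions, :add_question]

  private

  def set_space
    @space = Space.find(params[:id])
  end
end
```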
These are some extensions that provide an identity policy:
- more.jwtauth — Token based authentication system using JSON Web Token (JWT).
- more.itsdangerous — Cookie based identity policy using itsdangerous.
- more.basicauth — Identity policy based on HTTP Basic Authentication.
Choose the one of your choice, install it and follow the instructions in the README. You can also create your own identity policy.
For basic authentication, for instance, it will extract the username and password. The claimed identity can be accessed by looking at the morepath.Request.identity attribute on the request object.
You use the morepath.App.identity_policy() directive to install an identity policy into a Morepath app:
from more.basicauth import BasicAuthIdentityPolicy @App.identity_policy() def get_identity_policy(): return BasicAuthIdentityPolicy()
If you want to create your own identity policy, see the morepath.IdentityPolicy API documentation to see what methods you need to implement.
Verify identity
The identity policy only establishes who someone is claimed to be. It doesn’t verify whether that person is actually who they say they are. For identity policies where the browser repeatedly sends the username/password combination to the server, such as with basic authentication, implemented by more.basicauth and cookie-based authentication like more.itsdangerous, we need to check each time whether the claimed identity is actually a real identity.
By default, Morepath will reject any claimed identities. To let your application verify identities, you need to use the relevant morepath.App directive.

Token based identity verification
If you use an identity policy based on the session (which you've made secure otherwise), or on a cryptographic token based authentication system such as the one implemented by more.jwtauth, this per-request verification step can be left out. When the user logs in we need to remember their identity on the response, and when the user logs out we need to forget it again:

@request.after
def remember(response):
    identity = morepath.Identity(username)
    request.app.remember_identity(response, request, identity)
For cookie-based authentication where the password is sent as a cookie
to the server for each request, we need to make sure to include the
password the user used to log in, so that
remember can then place
it in the cookie so that it can be sent back to the server:
@request.after
def remember(response):
    identity = morepath.Identity(username, password=password)
    request.app.remember_identity(response, request, identity)
When you construct the identity using
morepath.Identity, you can include any data you want
in the identity object by using keyword parameters.
Logging out
Logging out is easy to implement and will work for any kind of
authentication except for basic auth. You simply call
morepath.App.forget_identity() somewhere in the logout view:
@request.after def forget(response): request.app.forget_identity(response, request)
This will cause the login information (in cookie-form) to be removed from the response.

- Only allow access to document_view if the identity has ViewPermission on the Document model.
- Only allow access to document_edit if the identity has EditPermission on the Document model.
Permission rules
Now that we give people a claimed identity and we have guarded our
views with permissions, we need to establish who has what permissions
where using some rules. We can use the
morepath.App ... | http://morepath.readthedocs.io/en/latest/security.html | CC-MAIN-2017-22 | en | refinedweb |
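The sentence above is truncated in this copy; in Morepath the rules are registered with a permission-rule directive that maps an (identity, model, permission) triple to a boolean. A framework-free sketch of the idea (the rule body here is invented for illustration):

```python
class ViewPermission:
    pass

class EditPermission:
    pass

class Document:
    def __init__(self, owner):
        self.owner = owner

def document_permission(identity, model, permission):
    """Illustrative rule: any logged-in identity may view a Document,
    but only its owner may edit it."""
    if permission is ViewPermission:
        return identity is not None
    if permission is EditPermission:
        return identity is not None and identity == model.owner
    return False
```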
This is the mail archive of the binutils@sourceware.org mailing list for the binutils project.
Hi Alan,

Thanks for taking the time to reply. While there is a small part of me that would like to tilt against this particular windmill, the lack of any specific point of non-compliance means that I have no firm ground to stand on. In the end, it would be about differences in interpretation of an ambiguous text based on historical precedents - which is an almost textbook recipe for a religious war.

I have no desire to cast the first stone, so I'm going to let this sleeping dog lie (and also stop mixing metaphors) :)

  Craig

Sent from my iPhone

On 16/05/2011, at 10:15 AM, Alan Modra <amodra@gmail.com> wrote:

> On Sun, May 15, 2011 at 08:14:19PM +1000, Craig Southeren wrote:
>> At the heart of the issue is the timing of initialising statics at
>> the global/namespace level.
>
> You won't get much traction on this issue here on the binutils list.
> We did have a ld bug that affected you but that has now been fixed.
> Further discussion should go to one of the gcc lists. If you can get
> agreement that functions declared with __attribute__ ((constructor))
> ought to be treated exactly as standard C++ namespace scope
> constructors regarding initialisation order, then it would be good to
> have your testcase added to the g++ testsuite. That should ensure
> both g++ and ld do not regress.
>
> FWIW, I think your testcase is quite reasonable. The main reason I
> wanted the testcase removed from the ld testsuite because I found
> the testcase failed using commonly available versions of g++, and
> therefore a C++ testcase wasn't the best way to test ld behaviour.
>
> --
> Alan Modra
> Australia Development Lab, IBM
Containerized applications that need access to relational database systems typically leverage ODBC drivers to do so. This means that if you want to connect to Postgres from an IIS web application, you must install the appropriate ODBC drivers as well as create the appropriate data source name, or DSN. In a Windows environment this is typically achieved by going into the control panel and clicking through a setup program, thus leveraging the user interface of the operating system.
Typically installed on virtual machines
Generally speaking, ODBC drivers get installed on VMs, making them available to all apps that run on the VM.
But in the world of containers, that is different, because each container needs to be self-contained with zero dependencies on the host OS on top of which it runs. This means we need to figure out a way to get the ODBC drivers to install on the image, not on the host operating system running on the virtual machine.
What is post is about
Just to be clear, what we mean by image is, in fact, a Docker image. Building a Docker image is achieved by running a build command and using a text file to act as a blueprint when building the image.
This post is about how you would automate the installation of ODBC drivers onto a Docker image. Building a Docker image using the docker build command along with a Dockerfile is a completely automated process, with no ability for a user interface to be used to install programs onto that image. Thus, we need a fully automated way to install the necessary ODBC drivers along with the creation of a data source name.
I had to build my own tooling
Unfortunately, I could not find a solution to programmatically perform the tasks needed here. So I wrote my own and that’s what this post is about – just to be clear.
In a nutshell, all this post will show you how to do is to create a commandline utility that programmatically installs an ODBC driver without any user intervention whatsoever. This is exactly what is needed when we are building up Docker images.
The metadata needed to connect to PostGres
This metadata includes:
- The driver type (Postgres will be the example in this post)
- The server domain name or IP address
- The port number
- The database name
- Username and password
Must be 100% programmable
Dockerfiles are text files that are used to build up an image that will eventually become a running container. Dockerfiles represent the blueprint that takes a base image and adds the appropriate software layers upon this base image.
Notice in the code below that there’s a section that states that commands are needed to install the ODBC drivers.
The challenge – I provided code that is needed
The problem is that there does not exist some easy to use commands to do this and that’s what this post is about.
Questions that need answering (Imagine that you want to install Postgres):
- What binaries are needed on the Docker host to begin the process?
- what code do we need to write to automate the provisioning of not just the ODBC driver, but also the creation of the data source name
- The data source name will be needed by the code we write for our IIS web application
Sample Dockerfile
Note the Dockerfile below. It's a very simple Dockerfile that begins with a core Windows Server operating system. It then uses dism.exe to install IIS. From there it creates a simple index.html file which represents a "Hello World" webpage.
What you don’t see in this file is an implementation of installing an ODBC driver. That’s the purpose of this post – to show you how to do that.
# Sample Dockerfile

# Indicates that the windowsservercore image will be used as the base image.
FROM windowsservercore

# **************************************************************************************
# NEEDED COMMANDS TO INSTALL ODBC DRIVERS AND CONFIGURE DSNs IS WHAT THIS POST IS ABOUT
# [commands go here]
# **************************************************************************************

# Metadata indicating an image maintainer.
MAINTAINER bterkaly@microsoft

# Uses dism.exe to install the IIS role.
RUN dism.exe /online /enable-feature /all /featurename:iis-webserver /NoRestart

# Creates an HTML file and adds content to this file.
RUN echo "Hello World - Dockerfile" > c:\inetpub\wwwroot\index.html

# Sets a command to run each time a container is started from the image.
CMD [ "cmd" ]
Dockerfile
Building an image
Assuming that you are in a directory that contains this Dockerfile, you typically issue a command like this to build an image:
docker build .
The docker build will construct an image using the declarative syntax within a Dockerfile. There is no user interface to perform this operation. This means that when you build an image it must be 100% programmable, without any user interface whatsoever.
The figure below depicts the workflow needed to get a running container. As explained previously, the combination of the docker build command along with the corresponding Dockerfile is how you produce a docker image. From there, you can use the Docker run* command along with that image to finally get to your running container**.
Figure 1: dockerbuild.png
Creating a commandline utility in Visual Studio
Let’s begin building out our command line utility that we can use inside of our Dockerfile. We will start up Visual Studio. You can use the version I’m using, 2015, but practically any version will work here. We will create a console application that leverages the Win32 API. I was going to write this in C (because of the Win32 API) but figured that C# might be more accessible to most.
You can begin by clicking New project from the Start page.
Figure 2: Creating a new project
At this point you will be able to select from the available project types. We are going to choose Console application.
Figure 3: Selecting a console application as a project type and specifying a project name
At this point we are going to add a class module that will do all the heavy lifting. It will be called ODBCManager and will contain all the necessary code to modify the Windows registry in addition to the ODBC.ini file.
Figure 4: adding a class
Our class module will need a name. It is called ODBCManager.
Figure 5: Naming the class
The key takeaway from this diagram is that the ODBC driver resides within the containerized web application, not in the virtual machine that is hosting the container. This is a great example of the power of containerization: the containerized web application includes all of its dependencies.
Containers can be run practically anywhere because they contain all of their dependencies. Whereas previously the virtual machine needed to have the ODBC drivers installed to be able to support applications that access relational databases, now the container comes fully self-contained, capable of running on a generic virtual machine or even a bare-metal machine.
Figure 6: Showing that the ODBC driver exists inside the container
Paste this code into ODBCManager.cs
At this point we are ready to paste in the code that sets up a data source name. The data source name is an abstraction that allows other code to connect to a data source, which, as stated earlier, will be Postgres in our case.
In the figure below you will note that the code connects to PostGres by leveraging the data source name.
Figure 7: Example of a client application using a data source name to connect up to PostGres
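Because Figure 7 is only a screenshot, here is a minimal sketch of what such a client looks like in C#, using the System.Data.Odbc classes. The DSN name and query are hypothetical placeholders; the point is that all server, port, and credential details live in the DSN, not in the code:

```csharp
using System;
using System.Data.Odbc;

class DsnClient
{
    static void Main()
    {
        // "MyPostgresDsn" is a hypothetical DSN created ahead of time;
        // the connection string names the DSN and nothing else.
        using (var connection = new OdbcConnection("DSN=MyPostgresDsn;"))
        {
            connection.Open(); // fails if the driver or the DSN is missing
            using (var command = new OdbcCommand("SELECT version();", connection))
            {
                Console.WriteLine(command.ExecuteScalar());
            }
        }
    }
}
```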
Code to create the data source name
In the code snippet below we actually do a few things. At a physical level we are modifying the registry as well as an ODBC.ini file.
When you add a data source name, those are the two things that get modified on a Windows system.
using Microsoft.Win32;
using System;
using System.Runtime.InteropServices;

namespace SetupODBC
{
    public static class SetupODBC
    {
        [DllImport("ODBCCP32.dll")]
        private static extern bool SQLConfigDataSource(IntPtr hwndParent,
            int fRequest, string lpszDriver, string lpszAttributes);

        [DllImport("Kernel32.dll", SetLastError = true)]
        public static extern long WritePrivateProfileSection(string strSection,
            string strValue, string strFilePath);

        [DllImport("Kernel32.dll", SetLastError = true)]
        public static extern long WritePrivateProfileString(string strSection,
            string strKey, string strValue, string strFilePath);

        // Request codes understood by SQLConfigDataSource.
        private enum RequestFlags : int
        {
            ODBC_ADD_DSN = 1,            // Add a new user data source.
            ODBC_CONFIG_DSN = 2,         // Configure (modify) an existing user data source.
            ODBC_REMOVE_DSN = 3,         // Remove an existing user data source.
            ODBC_ADD_SYS_DSN = 4,        // Add a new system data source.
            ODBC_CONFIG_SYS_DSN = 5,     // Modify an existing system data source.
            ODBC_REMOVE_SYS_DSN = 6,     // Remove an existing system data source.
            ODBC_REMOVE_DEFAULT_DSN = 7  // Remove the default data source specification
                                         // section from the system information.
        }

        public static bool Add(string DSName, string DB, string Server,
            string Port, string uid, string password)
        {
            // Clean up the ODBC.ini file.
            WritePrivateProfileString(DSName, null, null, @"c:\windows\odbc.ini");
            Console.WriteLine(@"Cleaning up c:\windows\odbc.ini");

            // Delete any existing registry entry for this DSN.
            var software = Registry.CurrentUser.OpenSubKey("Software");
            if (software == null)
                return false;
            using (var odbc = software.OpenSubKey("ODBC", true))
            {
                using (var odbcini = odbc.OpenSubKey("ODBC.INI", true))
                {
                    using (var subkey = odbcini.OpenSubKey(DSName, true))
                    {
                        if (subkey != null)
                            odbcini.DeleteSubKey(DSName);
                    }
                    Console.WriteLine(
                        @"Cleanup the registry HKEY_CURRENT_USER\SOFTWARE\ODBC\ODBC.INI\{0}",
                        DSName);
                }
            }

            // Create the DSN from scratch. SQLConfigDataSource expects
            // null-delimited "key=value" attribute pairs.
            string strAttributes = "Dsn=" + DSName + "\0Server=" + Server +
                "\0Port=" + Port + "\0Database=" + DB + "\0Uid=" + uid +
                "\0pwd=" + password + "\0";
            bool lngRet = CreateDataSource((IntPtr)0,
                (int)RequestFlags.ODBC_ADD_DSN, "PostgreSQL ANSI\0", strAttributes);
            lngRet = CreateDataSource((IntPtr)0,
                (int)RequestFlags.ODBC_CONFIG_DSN, "PostgreSQL ANSI\0", strAttributes);
            Console.WriteLine("Created data source = {0}", DSName);
            return lngRet;
        }

        public static bool CreateDataSource(IntPtr hwndParent, int fRequest,
            string lpszDriver, string lpszAttributes)
        {
            return SQLConfigDataSource(hwndParent, fRequest, lpszDriver, lpszAttributes);
        }
    }
}
ODBCManager.cs
The main Program.cs looks like this. As you can see, parameters are passed in that define the necessary metadata.
As explained previously, the metadata that will be passed in includes:
- The driver type (Postgres will be the example in this post)
- The server domain name or IP address
- The port number
- The database name
- Username and password
Completing the code
namespace SetupODBC
{
    class Program
    {
        static void Main(string[] args)
        {
            // Positional arguments: DSN name, database, port, server address,
            // user id, password.
            string dsnName = args[0];
            string db = args[1].Trim();
            string port = args[2].Trim();
            string ip = args[3].Trim();
            string uid = args[4].Trim();
            string pwd = args[5].Trim();

            // SetupODBC.Add expects (DSName, DB, Server, Port, uid, password),
            // so the server address goes before the port.
            SetupODBC.Add(dsnName, db, ip, port, uid, pwd);
        }
    }
}
Program.cs
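With Program.cs in place, the utility takes six positional arguments: DSN name, database, port, server address, user id, password. A hypothetical invocation looks like this (all values are placeholders for your own environment):

```shell
SetupODBC.exe MyPostgresDsn mydb 5432 10.0.0.4 dbuser dbpassword
```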
Building the ODBC installation and configuration application
Now that we’ve created a project with all the necessary code, all we have to do next is compile the application and test it on our local laptop. We can demonstrate the problem by verifying that the ODBC driver isn’t yet on the system, and that our web application is therefore unable to connect to the database.
Clearly, we can’t connect to Postgres.
Figure 8: Proof that we cannot connect up to the database
Downloading the driver for PostGres
There are two aspects to the work we are doing here. The first task is to actually install the ODBC driver itself. The second task is to define a data source name (DSN). The data source name is where we actually enter the specific connection metadata that is needed to connect to our specific instance of Postgres.
Downloading the Postgres ODBC Driver
Here is the URL for the downloads:
Below you can see the appropriate zip file. Notice that I am installing the x86 version. I had difficulty getting the x64 build to work, but it may work on other systems.
Figure 9: Downloading the ODBC driver in zip format
Once the download is complete you may want to unzip the contents into some type of install folder for testing.
We are testing on an ordinary laptop right now. Testing within the context of the container and a Dockerfile can be found in other posts. The purpose of this post is simply to write the install script and demonstrate its correct use.
Figure 10: The MSI file for the ODBC/PostGres installation
Now you are ready to begin the install. Notice in the image below that I am executing with the /passive flag. You can also use the /quiet flag.
Figure 11: Installing the PostGres ODBC Driver
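For reference, the silent install shown in the figure looks something like this from a command prompt; the MSI file name is a placeholder and will vary with the version you downloaded:

```shell
msiexec /i c:\install\psqlodbc_x86.msi /passive
```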
To verify correct installation, you can go into Control Panel and see that the psqlODBC driver has been installed.
Figure 12: Verifying correct installation
Compiling our console application and testing it
We have accomplished the following:
- Created our console application that will create a data source name
- Installed our ODBC driver for PostGres
The work that remains is:
- To compile our console application
- To retrieve the necessary metadata for our Postgres database that is running in Azure on the Ubuntu Linux virtual machine
- To run our console application and physically create the data source name
- Test that everything is working by connecting up to PostGres from a web application
To compile our console application
We will begin by rebuilding our solution which will produce SetupODBC.exe.
Figure 13: Compiling SetupODBC.exe
For convenience’s sake, let’s copy SetupODBC.exe into our local install folder.
Figure 14: Copying ODBCSetup.exe to the c:\install folder
To retrieve the necessary metadata for our Postgres database that is running in Azure on the Ubuntu Linux virtual machine
In order for the ODBC driver to connect a web application to the underlying Postgres database, we will need some information about the virtual machine that it runs on, as seen below.
Figure 15: Information needed to create a data source name
Figure 16: Using the portal to collect the necessary metadata about PostGres
The assumption is that you’ve installed Postgres on a VM somewhere; in my case it is on a VM called VMNAME.
To run our console application and physically create the data source name
Running our console application can be seen below. Notice the metadata that describes the connection information for our PostGres database.
Figure 17: Running SetupODBC
You can verify that the correct entries took place.
Figure 18: Locating the place in the registry for the Data Source Name
You can validate all the attributes here.
Figure 19: The details on the command line properly in place
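If you prefer the command line to regedit (handy inside a container, where there is no UI), the same check can be scripted with the built-in reg query tool. The DSN name here is a hypothetical placeholder; this key path matches the HKEY_CURRENT_USER location our utility writes to:

```shell
reg query "HKCU\Software\ODBC\ODBC.INI\MyPostgresDsn"
```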
Test that everything is working by connecting up to PostGres from a web application
We are now ready to test our connection.
We successfully got past the connection.Open(); call.
Figure 20: Successful test of connecting to PostGres
Conclusion
This post showed you how to overcome one of the core challenges when working with containers: supporting the dependencies that an application needs to run. These dependencies should be part of the container itself, not part of the virtual machine in which the container runs.
By putting the dependency directly in the container, it is now possible to run the container anywhere, as all the dependencies are bundled up along with the application.
The challenge is automating the installation of the dependency in the container image. Because the building of images is highly automated, it is necessary to install dependent functionality without any user interaction.
In this post we showed you how to deploy ODBC drivers in an automated fashion. What makes it worth writing down is that sometimes you need to build your own tooling to get there: installing dependencies doesn’t always come with support for automated, silent installation.
19 January 2012 14:06 [Source: ICIS news]
HOUSTON (ICIS)--PPG Industries’ 2011 fourth-quarter net income rose 5.4% year on year to $216m (€168m) but volumes were flat because of global economic uncertainties, the US-based chemicals and coatings producer said on Thursday.
Customers curtailed inventory and remained cautious with their ordering, CEO Charles Bunch said.
This trend was most evident in ?xml:namespace>
PPG’s fourth-quarter sales rose 4% year on year to $3.5bn.
For the full 12 months of 2011, PPG’s net income was $1.1bn, compared with $769m in 2010, as sales rose 11% to $14.9m.
“During the year, we experienced uneven economic conditions, persistent raw material inflation, and continued anaemic construction activity in developed regions,” Bunch said.
“However, the geographic and end-use market diversity of our business portfolio continued to be an important benefit in 2011,” he added.
Looking ahead, Bunch expects 2012 first-quarter growth to remain uneven by region and varied by industry, similar to the fourth quarter of 2011, he said.
Regionally,
However, in the
($1 = €0.78) | http://www.icis.com/Articles/2012/01/19/9525400/us-ppgs-q4-net-income-rises-5.4-to-216m-amid-flat-volumes.html | CC-MAIN-2014-41 | en | refinedweb |
Name | Synopsis | Description | Return Values | Errors | Usage | Attributes | See Also
#include <sys/lock.h> int plock(int op);
The plock() function allows the calling process to lock or unlock into memory its text segment (text lock), its data segment (data lock), or both its text and data segments (process lock). Locked segments are immune to all routine swapping. The effective user ID of the calling process must be super-user to use this call.
The plock() function performs the function specified by op:
Lock text and data segments into memory (process lock).
Lock text segment into memory (text lock).
Lock data segment into memory (data lock).
Remove locks.
Upon successful completion, 0 is returned. Otherwise, -1 is returned and errno is set to indicate the error.
The plock() function fails and does not perform the requested operation if:
Not enough memory.
The op argument is equal to PROCLOCK and a process lock, a text lock, or a data lock already exists on the calling process; the op argument is equal to TXTLOCK and a text lock or a process lock already exists on the calling process; the op argument is equal to DATLOCK and a data lock or a process lock already exists on the calling process; or the op argument is equal to UNLOCK and no lock exists on the calling process.
The {PRIV_PROC_LOCK_MEMORY} privilege is not asserted in the effective set of the calling process.
The mlock(3C) and mlockall(3C) functions are the preferred interfaces for process locking.
See attributes(5) for descriptions of the following attributes:
exec(2), exit(2), fork(2), memcntl(2), mlock(3C), mlockall(3C), attributes(5)
Name | Synopsis | Description | Return Values | Errors | Usage | Attributes | See Also | http://docs.oracle.com/cd/E19082-01/819-2243/6n4i09999/index.html | CC-MAIN-2014-41 | en | refinedweb |
This preview has intentionally blurred parts. Sign up to view the full documentView Full Document
- Download Document
-
-
- Showing pages 1 - 2 of 151
- Word Count: 58260
Unformatted Document Excerpt
FINANCIAL Appendix A SPECIMEN STATEMENTS: PepsiCo, Inc. T HE ANNUAL REPORT Once each year a corporation communicates to its stockholders and other interested parties by issuing a complete set of audited financial statements.The annual report, as this communication is called, summarizes the financial results of the companys corporations accounting system. The content and organization of corporate annual reports have become fairly standardized. Excluding the public relations part of the report (pictures, products, etc.), the following are the traditional financial portions of the annual report: Financial Highlights Letter to the Stockholders Managements Discussion and Analysis Financial Statements Notes to the Financial Statements Managements Report on Internal Control Management Certification of Financial Statements Auditors Report Supplementary Financial tenepsiCos Annual Report is shown on page A-2. The financial information herein is reprinted with permission from the PepsiCo, Inc. 2005 Annual Report. The complete financial statements are available through a link at the books companion website. A1 A2 Appendix A Specimen Financial Statements: PepsiCo, Inc. Financial Highlights PepsiCo, Inc. 
and Subsidiaries ($ in millions except per share amounts; all per share amounts assume dilution) Net Revenue Total: $32,562 PepsiCo International 35% 5% Quaker Foods North America Division Operating Profit Total: $6,710 PepsiCo International 24% 8% Quaker Foods North America 28% 32% Frito-Lay North America PepsiCo Beverages North America 30% 38% Frito-Lay North America PepsiCo Beverages North America 2005 Summary of Operations Total net revenue Division operating profit Total operating profit Net income(b) Earnings per share(b) Other Data Management operating cash flow(c) Net cash provided by operating activities Capital spending Common share repurchases Dividends paid Long-term debt $5,852 $1,736 $3,012 $1,642 $2,313 $4,204 $32,562 $6,710 $5,922 $4,536 $2.66 2004 $29,261 $6,098 $5,259 $4,004 $2.32 % Chg(a) 11 10 13 13 15 $3,705 $5,054 $1,387 $3,028 $1,329 $2,397 13 16 25 (0.5) 24 (3.5) (a) Percentage changes above and in text are based on unrounded amounts. (b) In 2005, excludes the impact of AJCA tax charge, the 53rd week and restructuring charges. In 2004, excludes certain prior year tax benefits, and restructuring and impairment charges. See page 76 for reconciliation to net income and earnings per share on a GAAP basis. (c) Includes the impact of net capital spending. Also, see Our Liquidity, Capital Resources and Financial Position in Managements Discussion and Analysis. L ETTER TO THE STOCKHOLDERS Nearly every annual report contains a letter to the stockholders from the chairman of the board or the president, or both. This letter typically discusses the companys accomplishments during the past year and highlights significant events such as mergers and acquisitions, new products, operating achievements, business philosophy, changes in officers or directors, financing commitments, expansion plans, and Financial Statements and Accompanying Notes A3 future prospects. 
The letter to the stockholders is signed by Steve Reinemund, Chairman of the Board and Chief Executive Officer, of PepsiCo. Only a short summary of the letter is provided below. The full letter can be accessed at the books companion website at. MANAGEMENTS DISCUSSION AND ANALYSIS The managements discussion and analysis (MD&A) section covers three financial aspects of a company: its results of operations, its ability to pay near-term obligations, and its ability to fund operations and expansion. Management must highlight favorable or unfavorable trends and identity significant events and uncertainties that affect these three factors. This discussion obviously involves a number of subjective estimates and opinions. In its MD&A section, PepsiCo breaks its discussion into three major headings: Our Business, Our Critical Accounting Policies, and Our Financial Results. PepsiCos MD&A section is 22 pages long. You can access that section at. FINANCIAL STATEMENTS AND A CCOMPANYING NOTES The standard set of financial statements consists of: (1) a comparative income statement for 3 years, (2) a comparative statement of cash flows for 3 years, (3) a comparative balance sheet for 2 years, (4) a statement of stockholders equity for 3 years, and (5) a set of accompanying notes that are considered an integral part of the financial statements. The auditors report, unless stated otherwise, covers the financial statements and the accompanying notes. PepsiCos financial statements and accompanying notes plus supplementary data and analyses follow. A4 Appendix A Specimen Financial Statements: PepsiCo, Inc. Consolidated Statement of Income PepsiCo, Inc. and Subsidiaries Fiscal years ended December 31, 2005, December 25, 2004 and December 27, 2003 (in millions except per share amounts) Net Revenue........................................................................................................................... 
Cost of sales........................................................................................................................... Selling, general and administrative expenses ........................................................................ Amortization of intangible assets........................................................................................... Restructuring and impairment charges.................................................................................. Merger-related costs............................................................................................................... Operating Profit ..................................................................................................................... Bottling equity income............................................................................................................ Interest expense...................................................................................................................... Interest income....................................................................................................................... Income from Continuing Operations before Income Taxes ................................................. Provision for Income Taxes................................................................................................... Income from Continuing Operations ..................................................................................... Tax Benefit from Discontinued Operations ........................................................................... Net Income ............................................................................................................................ Net Income per Common Share Basic Continuing operations ....................................................................................................... 
Discontinued operations.................................................................................................... Total .................................................................................................................................. Net Income per Common Share Diluted Continuing operations ....................................................................................................... Discontinued operations.................................................................................................... Total .................................................................................................................................. * Based on unrounded amounts. See accompanying notes to consolidated financial statements. 2005 $32,562 14,176 12,314 150 5,922 557 (256) 159 6,382 2,304 4,078 $ 4,078 $2.43 $2.43 $2.39 $2.39 2004 $29,261 12,674 11,031 147 150 5,259 380 (167) 74 5,546 1,372 4,174 38 $ 4,212 $2.45 0.02 $2.47 $2.41 0.02 $2.44* 2003 $26,971 11,691 10,148 145 147 59 4,781 323 (163) 51 4,992 1,424 3,568 $ 3,568 $2.07 $2.07 $2.05 $2.05 Net Revenue $32,562 $26,971 $29,261 Operating Profit $5,922 $5,259 $4,781 2003 2004 2005 2003 2004 2005 Income from Continuing Operations $4,174 $3,568 Net Income per Common Share Continuing Operations $2.41 $4,078 $2.05 $2.39 2003 2004 2005 2003 2004 2005 Financial Statements and Accompanying Notes A5 Consolidated Statement of Cash Flows PepsiCo, Inc. and Subsidiaries Fiscal years ended December 31, 2005, December 25, 2004 and December 27, 2003 (in millions) Operating Activities Net income................................................................................................................................. Adjustments to reconcile net income to net cash provided by operating activities Depreciation and amortization ............................................................................................. 
Stock-based compensation expense..................................................................................... Restructuring and impairment charges ............................................................................... Cash payments for merger-related costs and restructuring charges ................................... Tax benefit from discontinued operations............................................................................. Pension and retiree medical plan contributions ................................................................... Pension and retiree medical plan expenses.......................................................................... Bottling equity income, net of dividends .............................................................................. Deferred income taxes and other tax charges and credits ................................................... Merger-related costs............................................................................................................. Other non-cash charges and credits, net ............................................................................. Changes in operating working capital, excluding effects of acquisitions and divestitures Accounts and notes receivable........................................................................................ Inventories ...................................................................................................................... Prepaid expenses and other current assets .................................................................... Accounts payable and other current liabilities................................................................ Income taxes payable...................................................................................................... Net change in operating working capital.............................................................................. 
Other..................................................................................................................................... Net Cash Provided by Operating Activities .............................................................................. Investing Activities Snack Ventures Europe (SVE) minority interest acquisition ....................................................... Capital spending ....................................................................................................................... Sales of property, plant and equipment..................................................................................... Other acquisitions and investments in noncontrolled affiliates ................................................ Cash proceeds from sale of PBG stock ...................................................................................... Divestitures................................................................................................................................ Short-term investments, by original maturity More than three months purchases ................................................................................ More than three months maturities ................................................................................ Three months or less, net ..................................................................................................... Net Cash Used for Investing Activities ..................................................................................... Financing Activities Proceeds from issuances of long-term debt .............................................................................. Payments of long-term debt ...................................................................................................... Short-term borrowings, by original maturity More than three months proceeds................................................................................... 
More than three months payments ................................................................................. Three months or less, net ..................................................................................................... Cash dividends paid .................................................................................................................. Share repurchases common ................................................................................................. Share repurchases preferred ................................................................................................ Proceeds from exercises of stock options................................................................................... Net Cash Used for Financing Activities .................................................................................... Effect of exchange rate changes on cash and cash equivalents ............................................... Net Increase/(Decrease) in Cash and Cash Equivalents ......................................................... Cash and Cash Equivalents, Beginning of Year ....................................................................... Cash and Cash Equivalents, End of Year ................................................................................. See accompanying notes to consolidated financial statements. 
2005 $ 4,078 1,308 311 (22) (877) 464 (411) 440 145 (272) (132) (56) 188 609 337 79 5,852 (750) (1,736) 88 (345) 214 3 (83) 84 (992) (3,517) 25 (177) 332 (85) 1,601 (1,642) (3,012) (19) 1,099 (1,878) (21) 436 1,280 $ 1,716 2004 $ 4,212 1,264 368 150 (92) (38) (534) 395 (297) (203) 166 (130) (100) (31) 216 (268) (313) (24) 5,054 (1,387) 38 (64) 52 (44) 38 (963) (2,330) 504 (512) 153 (160) 1,119 (1,329) (3,028) (27) 965 (2,315) 51 460 820 $ 1,280 2003 $ 3,568 1,221 407 147 (109) (605) 277 (276) (286) 59 101 (220) (49) 23 (11) 182 (75) (101) 4,328 (1,345) 49 (71) 46 (38) 28 (940) (2,271) 52 (641) 88 (115) 40 (1,070) (1,929) (16) 689 (2,902) 27 (818) 1,638 $ 820 A6 Appendix A Specimen Financial Statements: PepsiCo, Inc. Consolidated Balance Sheet PepsiCo, Inc. and Subsidiaries December 31, 2005 and December 25, 2004 (in millions except per share amounts) ASSETS Current Assets Cash and cash equivalents................................................................................................................................... Short-term investments ........................................................................................................................................ Accounts and notes receivable, net....................................................................................................................... Inventories............................................................................................................................................................. Prepaid expenses and other current assets........................................................................................................... Total Current Assets ....................................................................................................................................... Property, Plant and Equipment, net .................................................................................................................... 
(in millions)

                                                            2005       2004
Cash and cash equivalents ............................   $ 1,716    $ 1,280
Short-term investments ...............................     3,166      2,165
                                                           4,882      3,445
Accounts and notes receivable, net ...................     3,261      2,999
Inventories ..........................................     1,693      1,541
Prepaid expenses and other current assets ............       618        654
  Total Current Assets ...............................    10,454      8,639
Property, Plant and Equipment, net ...................     8,681      8,149
Amortizable Intangible Assets, net ...................       530        598
Goodwill .............................................     4,088      3,909
Other nonamortizable intangible assets ...............     1,086        933
  Nonamortizable Intangible Assets ...................     5,174      4,842
Investments in Noncontrolled Affiliates ..............     3,485      3,284
Other Assets .........................................     3,403      2,475
  Total Assets .......................................   $31,727    $27,987

LIABILITIES AND SHAREHOLDERS' EQUITY
Current Liabilities
Short-term obligations ...............................   $ 2,889    $ 1,054
Accounts payable and other current liabilities .......     5,971      5,599
Income taxes payable .................................       546         99
  Total Current Liabilities ..........................     9,406      6,752
Long-Term Debt Obligations ...........................     2,313      2,397
Other Liabilities ....................................     4,323      4,099
Deferred Income Taxes ................................     1,434      1,216
  Total Liabilities ..................................    17,476     14,464
Commitments and Contingencies
Preferred Stock, no par value ........................        41         41
Repurchased Preferred Stock ..........................      (110)       (90)
Common Shareholders' Equity
Common stock, par value 1 2/3 cents per share
  (issued 1,782 shares) ..............................        30         30
Capital in excess of par value .......................       614        618
Retained earnings ....................................    21,116     18,730
Accumulated other comprehensive loss .................    (1,053)      (886)
                                                          20,707     18,492
Less: repurchased common stock, at cost
  (126 and 103 shares, respectively) .................    (6,387)    (4,920)
  Total Common Shareholders' Equity ..................    14,320     13,572
  Total Liabilities and Shareholders' Equity .........   $31,727    $27,987

See accompanying notes to consolidated financial statements.

Financial Statements and Accompanying Notes A7

Consolidated Statement of Common Shareholders' Equity
PepsiCo, Inc. and Subsidiaries
Fiscal years ended December 31, 2005, December 25, 2004 and December 27, 2003
(in millions)

                                                 2005        2004        2003
Common Stock (shares) .....................     1,782       1,782       1,782
Common Stock (amount) .....................   $    30     $    30     $    30

Capital in Excess of Par Value
Balance, beginning of year ................       618         548         207
Stock-based compensation expense ..........       311         368         407
Stock option exercises(a) .................      (315)       (298)        (66)
Balance, end of year ......................       614         618         548

Retained Earnings
Balance, beginning of year ................    18,730      15,961      13,489
Net income ................................     4,078       4,212       3,568
Cash dividends declared - common ..........    (1,684)     (1,438)     (1,082)
Cash dividends declared - preferred .......        (3)         (3)         (3)
Cash dividends declared - RSUs ............        (5)         (2)
Other .....................................                               (11)
Balance, end of year ......................    21,116      18,730      15,961

Accumulated Other Comprehensive Loss
Balance, beginning of year ................      (886)     (1,267)     (1,672)
Currency translation adjustment ...........      (251)        401         410
Cash flow hedges, net of tax:
  Net derivative gains/(losses) ...........        54         (16)        (11)
  Reclassification of (gains)/losses
    to net income .........................        (8)          9          (1)
Minimum pension liability adjustment,
  net of tax ..............................        16         (19)          7
Unrealized gain on securities, net of tax .        24           6           1
Other .....................................        (2)                     (1)
Balance, end of year ......................    (1,053)       (886)     (1,267)

Repurchased Common Stock (shares/amount)
Balance, beginning of year ................  (103) (4,920)  (77) (3,376)  (60) (2,524)
Share repurchases .........................   (54) (2,995)  (58) (2,994)  (43) (1,946)
Stock option exercises ....................    31   1,523    32   1,434    26   1,096
Other .....................................          5            16            (2)
Balance, end of year ......................  (126) (6,387) (103) (4,920)  (77) (3,376)

Total Common Shareholders' Equity .........   $14,320     $13,572     $11,896

Comprehensive Income
Net income ................................   $ 4,078     $ 4,212     $ 3,568
Currency translation adjustment ...........      (251)        401         410
Cash flow hedges, net of tax ..............        46          (7)        (12)
Minimum pension liability adjustment,
  net of tax ..............................        16         (19)          7
Unrealized gain on securities, net of tax .        24           6           1
Other .....................................        (2)                     (1)
Total Comprehensive Income ................   $ 3,911     $ 4,593     $ 3,973

(a) Includes total tax benefit of $125 million in 2005, $183 million in 2004 and $340 million in 2003.

See accompanying notes to consolidated financial statements.

A8 Appendix A Specimen Financial Statements: PepsiCo, Inc.

Notes to Consolidated Financial Statements

Note 1 Basis of Presentation and Our Divisions

Basis of Presentation
Our consolidated financial statements include the accounts of PepsiCo, Inc. and the affiliates that we control. In addition, we include our share of the results of certain other affiliates based on our economic ownership interest. We do not control these other affiliates, as our ownership in these other affiliates is generally less than 50%. Our share of the net income of noncontrolled bottling affiliates is reported in our income statement as bottling equity income. Bottling equity income also includes any changes in our ownership interests of these affiliates. In 2005, bottling equity income includes $126 million of pre-tax gains on our sales of PBG stock. See Note 8 for additional information on our noncontrolled bottling affiliates. Our share of other noncontrolled affiliates is included in division operating profit. Intercompany balances and transactions are eliminated. In 2005, we had an additional week of results (53rd week). Our fiscal year ends on the last Saturday of each December, resulting in an additional week of results every five or six years.
In connection with our ongoing BPT initiative, we aligned certain accounting policies across our divisions in 2005. We conformed our methodology for calculating our bad debt reserves and modified our policy for recognizing revenue for products shipped to customers by third-party carriers. Additionally, we conformed our method of accounting for certain costs, primarily warehouse and freight. These changes reduced our net revenue by $36 million and our operating profit by $60 million in 2005. We also made certain reclassifications on our Consolidated Statement of Income in the fourth quarter of 2005 from cost of sales to selling, general and administrative expenses in connection with our BPT initiative. These reclassifications resulted in reductions to cost of sales of $556 million through the third quarter of 2005, $732 million in the full year 2004 and $688 million in the full year 2003, with corresponding increases to selling, general and administrative expenses in those periods.
These reclassifications had no net impact on operating profit and have been made to all periods presented for comparability. The preparation of our consolidated financial statements requires us to make estimates and assumptions. Estimates are used in determining, among other items, future cash flows associated with impairment testing for perpetual brands and goodwill, useful lives for intangible assets, tax reserves, stock-based compensation and pension and retiree medical accruals. Actual results could differ from these estimates. See Our Divisions below and, for additional unaudited information on items affecting the comparability of our consolidated results, see Items Affecting Comparability in Management's Discussion and Analysis.
Tabular dollars are in millions, except per share amounts. All per share amounts reflect common per share amounts, assume dilution unless noted, and are based on unrounded amounts. Certain reclassifications were made to prior years' amounts to conform to the 2005 presentation.

Our Divisions
We manufacture or use contract manufacturers, market and sell a variety of salty, sweet and grain-based snacks, carbonated and non-carbonated beverages, and foods through our North American and international business divisions. Our North American divisions include the United States and Canada. The accounting policies for the divisions are the same as those described in Note 2, except for certain allocation methodologies for stock-based compensation expense and pension and retiree medical expense, as described in the unaudited information in Our Critical Accounting Policies. Additionally, beginning in the fourth quarter of 2005, we began centrally managing commodity derivatives on behalf of our divisions. Certain of the commodity derivatives, primarily those related to the purchase of energy for use by our divisions, do not qualify for hedge accounting treatment. These derivatives hedge underlying commodity price risk and were not entered into for speculative purposes. Such derivatives are marked to market with the resulting gains and losses recognized as a component of corporate unallocated expense.
These gains and losses are reflected in division results when the divisions take delivery of the underlying commodity. Therefore, division results reflect the contract purchase price of the energy or other commodities.
Division results are based on how our Chairman and Chief Executive Officer evaluates our divisions. Division results exclude certain Corporate-initiated restructuring and impairment charges, merger-related costs and divested businesses. For additional unaudited information on our divisions, see Our Operations in Management's Discussion and Analysis.

Financial Statements and Accompanying Notes A9

Our divisions: Frito-Lay North America (FLNA), PepsiCo Beverages North America (PBNA), PepsiCo International (PI) and Quaker Foods North America (QFNA).

                                    Net Revenue                    Operating Profit
                              2005      2004      2003         2005      2004      2003
FLNA ...................   $10,322   $ 9,560   $ 9,091       $2,529    $2,389    $2,242
PBNA ...................     9,146     8,313     7,733        2,037     1,911     1,690
PI .....................    11,376     9,862     8,678        1,607     1,323     1,061
QFNA ...................     1,718     1,526     1,467          537       475       470
Total division .........    32,562    29,261    26,969        6,710     6,098     5,463
Divested businesses ....                             2                               26
Corporate ..............                                       (788)     (689)     (502)
                            32,562    29,261    26,971        5,922     5,409     4,987
Restructuring and
  impairment charges ...                                                 (150)     (147)
Merger-related costs ...                                                            (59)
Total ..................   $32,562   $29,261   $26,971       $5,922    $5,259    $4,781

[Pie charts: Division Net Revenue (FLNA 32%, PBNA 28%, PI 35%, QFNA 5%); Division Operating Profit (FLNA 38%, PBNA 30%, PI 24%, QFNA 8%).]

Divested Businesses
During 2003, we sold our Quaker Foods North America Mission pasta business. The results of this business are reported as divested businesses.

Corporate
Corporate includes costs of our corporate headquarters, centrally managed initiatives, such as our BPT initiative, unallocated insurance and benefit programs, foreign exchange transaction gains and losses, and certain commodity derivative gains and losses, as well as profit-in-inventory elimination adjustments for our noncontrolled bottling affiliates and certain other items.

Restructuring and Impairment Charges and Merger-Related Costs
See Note 3.

Other Division Information

                                    Total Assets                   Capital Spending
                              2005      2004      2003         2005      2004      2003
FLNA ...................   $ 5,948   $ 5,476   $ 5,332       $  512    $  469    $  426
PBNA ...................     6,316     6,048     5,856          320       265       332
PI .....................     9,983     8,921     8,109          667       537       521
QFNA ...................       989       978       995           31        33        32
Total division .........    23,236    21,423    20,292        1,530     1,304     1,311
Corporate(a) ...........     5,331     3,569     2,384          206        83        34
Investments in
  bottling affiliates ..     3,160     2,995     2,651
                           $31,727   $27,987   $25,327       $1,736    $1,387    $1,345

(a) Corporate assets consist principally of cash and cash equivalents, short-term investments, and property, plant and equipment.
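The division figures in this note tie back to the consolidated totals by simple addition; a quick arithmetic tie-out of the 2005 column, with figures ($ millions) taken from the tables above:

```python
# 2005 division results from the segment note ($ millions).
divisions = {
    "FLNA": {"revenue": 10322, "profit": 2529},
    "PBNA": {"revenue": 9146, "profit": 2037},
    "PI":   {"revenue": 11376, "profit": 1607},
    "QFNA": {"revenue": 1718, "profit": 537},
}

total_division_revenue = sum(d["revenue"] for d in divisions.values())
total_division_profit = sum(d["profit"] for d in divisions.values())
corporate_unallocated = -788  # corporate expense netted against division profit

print(total_division_revenue)                         # 32562, total net revenue
print(total_division_profit + corporate_unallocated)  # 5922, total operating profit
```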
[Pie charts: Total Assets (FLNA 19%, PBNA 20%, PI 31%, QFNA 2%, Other 27%); Capital Spending (FLNA 30%, PBNA 18%, PI 38%, QFNA 3%, Other 12%); Net Revenue by country (United States 61%, Mexico 10%, United Kingdom 6%, Canada 4%, Other 19%).]

                               Amortization of             Depreciation and
                              Intangible Assets           Other Amortization
                            2005    2004    2003        2005     2004     2003
FLNA ..................     $  3    $  3    $  3      $  419   $  420   $  416
PBNA ..................       76      75      75         264      258      245
PI ....................       71      68      66         420      382      350
QFNA ..................                1       1          34       36       36
Total division ........      150     147     145       1,137    1,096    1,047
Corporate .............                                   21       21       29
                            $150    $147    $145      $1,158   $1,117   $1,076

                                  Net Revenue(a)            Long-Lived Assets(b)
                              2005      2004      2003      2005      2004      2003
U.S. ..................    $19,937   $18,329   $17,377   $10,723   $10,212   $ 9,907
Mexico ................      3,095     2,724     2,642       902       878       869
United Kingdom ........      1,821     1,692     1,510     1,715     1,896     1,724
Canada ................      1,509     1,309     1,147       582       548       508
All other countries ...      6,200     5,207     4,295     3,948     3,339     3,123
                           $32,562   $29,261   $26,971   $17,870   $16,873   $16,131

[Pie chart: Long-Lived Assets (United States 60%, Mexico 5%, United Kingdom 10%, Canada 3%, Other 22%).]

(a) Represents net revenue from businesses operating in these countries.
(b) Long-lived assets represent net property, plant and equipment, nonamortizable and net amortizable intangible assets and investments in noncontrolled affiliates. These assets are reported in the country where they are primarily used.

Note 2 Our Significant Accounting Policies

Revenue Recognition
We recognize revenue upon shipment or delivery to our customers based on written sales terms that do not allow for a right of return. However, our policy for direct-store-delivery (DSD) and chilled products is to remove and replace damaged and out-of-date products from store shelves to ensure that our consumers receive the product quality and freshness that they expect. Similarly, our policy for warehouse-distributed products is to replace damaged and out-of-date products. Based on our historical experience with this practice, we have reserved for anticipated damaged and out-of-date products.
For additional unaudited information on our revenue recognition and related policies, including our policy on bad debts, see Our Critical Accounting Policies in Management's Discussion and Analysis. We are exposed to concentration of credit risk by our customers, Wal-Mart and PBG. Wal-Mart represents approximately 9% of our net revenue, including concentrate sales to our bottlers which are used in finished goods sold by them to Wal-Mart; and PBG represents approximately 10%. We have not experienced credit issues with these customers.

Sales Incentives and Other Marketplace Spending
We offer sales incentives and discounts through various programs to our customers and consumers. Sales incentives and discounts are accounted for as a reduction of revenue and totaled $8.9 billion in 2005, $7.8 billion in 2004 and $7.1 billion in 2003. While most of these incentive arrangements have terms of no more than one year, certain arrangements extend beyond one year. For example, fountain pouring rights may extend up to 15 years. Costs incurred to obtain these arrangements are recognized over the contract period and the remaining balances of $321 million at December 31, 2005 and $337 million at December 25, 2004 are included in current assets and other assets in our Consolidated Balance Sheet. For additional unaudited information on our sales incentives, see Our Critical Accounting Policies in Management's Discussion and Analysis.
Other marketplace spending includes the costs of advertising and other marketing activities and is reported as selling, general and administrative expenses. Advertising expenses were $1.8 billion in 2005, $1.7 billion in 2004 and $1.6 billion in 2003. Deferred advertising costs are not expensed until the year first used and consist of: media and personal service prepayments, promotional materials in inventory, and production costs of future media advertising.
Deferred advertising costs of $202 million and $137 million at year-end 2005 and 2004, respectively, are classified as prepaid expenses in our Consolidated Balance Sheet.

Distribution Costs
Distribution costs, including the costs of shipping and handling activities, are reported as selling, general and administrative expenses. Shipping and handling expenses were $4.1 billion in 2005, $3.9 billion in 2004 and $3.6 billion in 2003.

Cash Equivalents
Cash equivalents are investments with original maturities of three months or less which we do not intend to roll over beyond three months.

Software Costs
We capitalize certain computer software and software development costs incurred in connection with developing or obtaining computer software for internal use. Capitalized software costs are included in property, plant and equipment on our Consolidated Balance Sheet and amortized on a straight-line basis over the estimated useful lives of the software, which generally do not exceed 5 years. Net capitalized software and development costs were $327 million at December 31, 2005 and $181 million at December 25, 2004.

Commitments and Contingencies
We are subject to various claims and contingencies related to lawsuits, taxes and environmental matters, as well as commitments under contractual and other commercial obligations. We recognize liabilities for contingencies and commitments when a loss is probable and estimable. For additional information on our commitments, see Note 9.

Other Significant Accounting Policies
Our other significant accounting policies are disclosed as follows:
Property, Plant and Equipment and Intangible Assets: Note 4 and, for additional unaudited information on brands and goodwill, see Our Critical Accounting Policies in Management's Discussion and Analysis.
Income Taxes: Note 5 and, for additional unaudited information, see Our Critical Accounting Policies in Management's Discussion and Analysis.
Stock-Based Compensation Expense: Note 6 and, for additional unaudited information, see Our Critical Accounting Policies in Management's Discussion and Analysis.
Pension, Retiree Medical and Savings Plans: Note 7 and, for additional unaudited information, see Our Critical Accounting Policies in Management's Discussion and Analysis.
Risk Management: Note 10 and, for additional unaudited information, see Our Business Risks in Management's Discussion and Analysis.
There have been no new accounting pronouncements issued or effective during 2005 that have had, or are expected to have, a material impact on our consolidated financial statements.

Note 3 Restructuring and Impairment Charges and Merger-Related Costs

2005 Restructuring Charges
In the fourth quarter of 2005, we incurred a charge of $83 million ($55 million after-tax or $0.03 per share) in conjunction with actions taken to reduce costs in our operations, principally through headcount reductions. Of this charge, $34 million related to FLNA, $21 million to PBNA, $16 million to PI and $12 million to Corporate (recorded in corporate unallocated expenses). Most of this charge related to the termination of approximately 700 employees. We expect the substantial portion of the cash payments related to this charge to be paid in 2006.

2004 and 2003 Restructuring and Impairment Charges
In the fourth quarter of 2004, we incurred a charge of $150 million ($96 million after-tax or $0.06 per share) in conjunction with the consolidation of FLNA's manufacturing network as part of its ongoing productivity program. Of this charge, $93 million related to asset impairment, primarily reflecting the closure of four U.S. plants. Production from these plants was redeployed to other FLNA facilities in the U.S. The remaining $57 million included employee-related costs of $29 million, contract termination costs of $8 million and other exit costs of $20 million.
Employee-related costs primarily reflect the termination costs for approximately 700 employees. Through December 31, 2005, we have paid $47 million and incurred non-cash charges of $10 million, leaving substantially no accrual.
In the fourth quarter of 2003, we incurred a charge of $147 million ($100 million after-tax or $0.06 per share) in conjunction with actions taken to streamline our North American divisions and PepsiCo International. These actions were taken to increase focus and eliminate redundancies at PBNA and PI and to improve the efficiency of the supply chain at FLNA. Of this charge, $81 million related to asset impairment, reflecting $57 million for the closure of a snack plant in Kentucky and the retirement of snack manufacturing lines in Maryland and Arkansas, and $24 million for the closure of a PBNA office building in Florida. The remaining $66 million included employee-related costs of $54 million and facility and other exit costs of $12 million. Employee-related costs primarily reflect the termination costs for approximately 850 sales, distribution, manufacturing, research and marketing employees. As of December 31, 2005, all terminations had occurred and substantially no accrual remains.

Merger-Related Costs
In connection with the Quaker merger in 2001, we recognized merger-related costs of $59 million ($42 million after-tax or $0.02 per share) in 2003.

Note 4 Property, Plant and Equipment and Intangible Assets

                                         Average
                                       Useful Life       2005       2004      2003
Property, plant and equipment, net
Land and improvements ..............   10-30 yrs.     $   685    $   646
Buildings and improvements .........   20-44            3,736      3,605
Machinery and equipment, including
  fleet and software ...............    5-15           11,658     10,950
Construction in progress ...........                    1,066        729
                                                       17,145     15,930
Accumulated depreciation ...........                   (8,464)    (7,781)
                                                      $ 8,681    $ 8,149
Depreciation expense ...............                  $ 1,103    $ 1,062    $1,020

Amortizable intangible assets, net
Brands .............................    5-40          $ 1,054    $ 1,008
Other identifiable intangibles .....    3-15              257        225
                                                        1,311      1,233
Accumulated amortization ...........                     (781)      (635)
                                                      $   530    $   598
Amortization expense ...............                  $   150    $   147    $ 145

Depreciation and amortization are recognized on a straight-line basis over an asset's estimated useful life. Land is not depreciated, and construction in progress is not depreciated until ready for service. Amortization of intangible assets for each of the next five years, based on average 2005 foreign exchange rates, is expected to be $152 million in 2006, $35 million in 2007, $35 million in 2008, $34 million in 2009 and $33 million in 2010. For additional unaudited information, see Management's Discussion and Analysis.

Nonamortizable Intangible Assets
Perpetual brands and goodwill are assessed for impairment at least annually to ensure that discounted future cash flows continue to exceed the related book value. A perpetual brand is impaired if its book value exceeds its fair value. Goodwill is evaluated for impairment if the book value of its reporting unit exceeds its fair value. A reporting unit can be a division or business within a division. If the fair value of an evaluated asset is less than its book value, the asset is written down based on its discounted future cash flows to fair value. No impairment charges resulted from the required impairment evaluations.
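The impairment test just described compares book value against discounted future cash flows. A minimal sketch of that comparison, with entirely illustrative inputs (the brand value, cash flows and discount rate below are made up, not figures from the note):

```python
def discounted_cash_flows(cash_flows, rate):
    """Present value of a stream of future annual cash flows."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

def impairment_charge(book_value, cash_flows, rate):
    """Write-down is the excess of book value over fair value (zero if none)."""
    fair_value = discounted_cash_flows(cash_flows, rate)
    return max(0.0, book_value - fair_value)

# A hypothetical $500M perpetual brand expected to generate $70M a year
# for 15 years, discounted at an assumed 9% rate.
charge = impairment_charge(500.0, [70.0] * 15, 0.09)
print(charge)  # 0.0 -> fair value exceeds book value, so no impairment
```

If expected cash flows fell enough that their present value dropped below $500M, the same function would return the write-down to fair value.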
The change in the book value of nonamortizable intangible assets is as follows:

                              Balance,          Transl.   Balance,          Transl.   Balance,
                              Beg. 2004   Acq.  & Other   End 2004    Acq. & Other   End 2005
Frito-Lay North America
Goodwill ..................     $  130          $  8       $  138            $  7      $  145
PepsiCo Beverages North America
Goodwill ..................      2,157             4        2,161               3       2,164
Brands ....................         59                         59                          59
                                 2,216             4        2,220               3       2,223
PepsiCo International
Goodwill ..................      1,334    $29     72        1,435    $278    (109)      1,604
Brands ....................        808            61          869     263    (106)      1,026
                                 2,142     29    133        2,304     541    (215)      2,630
Quaker Foods North America
Goodwill ..................        175                        175                         175
Corporate
Pension intangible ........          2             3            5              (4)          1
Total goodwill ............      3,796     29     84        3,909     278     (99)      4,088
Total brands ..............        867            61          928     263    (106)      1,085
Total pension intangible ..          2             3            5              (4)          1
                                $4,665    $29   $148       $4,842    $541   $(209)     $5,174

Note 5 Income Taxes

                                                             2005      2004      2003
Income before income taxes - continuing operations
U.S. .................................................     $3,175    $2,946    $3,267
Foreign ..............................................      3,207     2,600     1,725
                                                           $6,382    $5,546    $4,992

Provision for income taxes - continuing operations
Current: U.S. Federal ................................     $1,638    $1,030    $1,326
         Foreign .....................................        426       256       341
         State .......................................        118        69        80
                                                            2,182     1,355     1,747
Deferred: U.S. Federal ...............................        137        11      (274)
          Foreign ....................................        (26)        5       (47)
          State ......................................         11         1        (2)
                                                              122        17      (323)
                                                           $2,304    $1,372    $1,424

Tax rate reconciliation - continuing operations
U.S. Federal statutory tax rate ......................       35.0%     35.0%     35.0%
State income tax, net of U.S. Federal tax benefit ....        1.4       0.8       1.0
Taxes on AJCA repatriation ...........................        7.0
Lower taxes on foreign results .......................       (6.5)     (5.4)     (5.5)
Settlement of prior years' audit .....................                 (4.8)     (2.2)
Other, net ...........................................       (0.8)     (0.9)      0.2
Annual tax rate ......................................       36.1%     24.7%     28.5%

                                                             2005      2004
Deferred tax liabilities
Investments in noncontrolled affiliates ..............     $  993    $  850
Property, plant and equipment ........................        772       857
Pension benefits .....................................        863       669
Intangible assets other than nondeductible goodwill ..        135       153
Zero coupon notes ....................................         35        46
Other ................................................        169       157
Gross deferred tax liabilities .......................      2,967     2,732

Deferred tax assets
Net carryforwards ....................................        608       666
Stock-based compensation .............................        426       402
Retiree medical benefits .............................        400       402
Other employee-related benefits ......................        342       379
Other ................................................        520       460
Gross deferred tax assets ............................      2,296     2,309
Valuation allowances .................................       (532)     (564)
Deferred tax assets, net .............................      1,764     1,745
Net deferred tax liabilities .........................     $1,203    $  987

Deferred taxes included within:
Prepaid expenses and other current assets ............     $  231    $  229
Deferred income taxes ................................     $1,434    $1,216

                                                             2005      2004      2003
Analysis of valuation allowances
Balance, beginning of year ...........................     $  564    $  438    $  487
(Benefit)/provision ..................................        (28)      118       (52)
Other (deductions)/additions .........................         (4)        8         3
Balance, end of year .................................     $  532    $  564    $  438

For additional unaudited information on our income tax policies, including our reserves for income taxes, see Our Critical Accounting Policies in Management's Discussion and Analysis.

Carryforwards, Credits and Allowances
Operating loss carryforwards totaling $5.1 billion at year-end 2005 are being carried forward in a number of foreign and state jurisdictions where we are permitted to use tax operating losses from prior periods to reduce future taxable income. These operating losses will expire as follows: $0.1 billion in 2006, $4.1 billion between 2007 and 2025, and $0.9 billion may be carried forward indefinitely. In addition, certain tax credits generated in prior periods of approximately $39.4 million are available to reduce certain foreign tax liabilities through 2011. We establish valuation allowances for our deferred tax assets when the amount of expected future taxable income is not likely to support the use of the deduction or credit.

Undistributed International Earnings
The AJCA created a one-time incentive for U.S.
corporations to repatriate undistributed international earnings by providing an 85% dividends received deduction. As approved by our Board of Directors in July 2005, we repatriated approximately $7.5 billion in earnings previously considered indefinitely reinvested outside the U.S. in the fourth quarter of 2005. In 2005, we recorded income tax expense of $460 million associated with this repatriation. Other than the earnings repatriated, we intend to continue to reinvest earnings outside the U.S. for the foreseeable future and, therefore, have not recognized any U.S. tax expense on these earnings. At December 31, 2005, we had approximately $7.5 billion of undistributed international earnings. Reserves A number of years may elapse before a particular matter, for which we have established a reserve, is audited and finally resolved. The number of years with open tax audits varies depending on the tax jurisdiction. During 2004, we recognized $266 million of tax benefits related to the favorable resolution of certain open tax issues. In addition, in 2004, we recognized a tax benefit of $38 million upon agreement with the IRS on an open issue related to our discontinued restaurant operations. At the end of 2003, we entered into agreements with the IRS for open years through 1997. These agreements resulted in a tax benefit of $109 million in the fourth quarter of 2003. As part of these agreements, we also resolved the treatment of certain other issues related to future tax years. The IRS has initiated their audits of our tax returns for the years 1998 through 2002. Our tax returns subsequent to 2002 have not yet been examined. While it is often difficult to predict the final outcome or the timing of resolution of any particular tax matter, we believe that our reserves reflect the probable outcome of known tax contingencies. Settlement of any particular issue would usually require the use of cash. 
Favorable resolution would be recognized as a reduction to our annual tax rate in the year of resolution. Our tax reserves, covering all federal, state and foreign jurisdictions, are presented in the balance sheet within other liabilities (see Note 14), except for any amounts relating to items we expect to pay in the coming year, which are included in current income taxes payable. For further unaudited information on the impact of the resolution of open tax issues, see Other Consolidated Results.

Note 6 Stock-Based Compensation

Our stock-based compensation program is a broad-based program designed to attract and retain employees while also aligning employees' interests with the interests of our shareholders. Employees at all levels participate in our stock-based compensation program. In addition, members of our Board of Directors participate in our stock-based compensation program in connection with their service on our Board. Stock options and RSUs are granted to employees under the shareholder-approved 2003 Long-Term Incentive Plan (LTIP), our only active stock-based plan. Stock-based compensation expense was $311 million in 2005, $368 million in 2004 and $407 million in 2003. Related income tax benefits recognized in earnings were $87 million in 2005, $103 million in 2004 and $114 million in 2003. At year-end 2005, 51 million shares were available for future executive and SharePower grants. For additional unaudited information on our stock-based compensation program, see Our Critical Accounting Policies in Management's Discussion and Analysis.

SharePower Grants
SharePower options are awarded under our LTIP to all eligible employees, based on job level or classification, and in the case of international employees, tenure as well. All stock option grants have an exercise price equal to the fair market value of our common stock on the day of grant and generally have a 10-year term with vesting after three years.
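Under the fair value method used for these grants, the grant-date fair value of an award is expensed evenly over its vesting period (three years here). A minimal sketch, with a made-up grant value for illustration:

```python
def vesting_expense_by_year(total_fair_value: float, vesting_years: int) -> list[float]:
    """Straight-line compensation expense for one grant over its vesting period."""
    return [total_fair_value / vesting_years] * vesting_years

# A hypothetical grant with a $90M aggregate grant-date fair value,
# vesting after three years as described in the note.
schedule = vesting_expense_by_year(90.0, 3)
print(schedule)  # [30.0, 30.0, 30.0] -> $30M of expense in each vesting year
```

A year's total stock-based compensation expense (e.g., the $311 million reported for 2005) is the sum of such schedules across all unvested grants outstanding that year.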
Executive Grants

All senior management and certain middle management are eligible for executive grants under our LTIP. All stock option grants have an exercise price equal to the fair market value of our common stock on the day of grant and generally have a 10-year term with vesting after three years. There have been no reductions to the exercise price of previously issued awards, and any repricing of awards would require approval of our shareholders. Beginning in 2004, executives who are awarded long-term incentives based on their performance are offered the choice of stock options or RSUs. RSU expense is based on the fair value of PepsiCo stock on the date of grant and is amortized over the vesting period, generally three years. Each restricted stock unit can be settled in a share of our stock after the vesting period. Executives who elect RSUs receive one RSU for every four stock options that would have otherwise been granted. Senior officers do not have a choice and are granted 50% stock options and 50% RSUs. Vesting of RSU awards for senior officers is contingent upon the achievement of pre-established performance targets. We granted 3 million RSUs in both 2005 and 2004 with weighted-average intrinsic values of $53.83 and $47.28, respectively.

A16 Appendix A Specimen Financial Statements: PepsiCo, Inc.

Method of Accounting and Our Assumptions

We account for our employee stock options under the fair value method of accounting using a Black-Scholes valuation model to measure stock-based compensation expense at the date of grant. We adopted SFAS 123R, Share-Based Payment, under the modified prospective method in the first quarter of 2006. We do not expect our adoption of SFAS 123R to materially impact our financial statements.

Our weighted-average Black-Scholes fair value assumptions include:

                               2005      2004      2003
Expected life                  6 yrs.    6 yrs.    6 yrs.
Risk-free interest rate        3.8%      3.3%      3.1%
Expected volatility            23%       26%       27%
Expected dividend yield        1.8%      1.8%      1.15%

Our Stock Option Activity(a)

                                        2005                   2004                   2003
                                 Options   Average      Options   Average      Options   Average
                                           Price(b)               Price(b)               Price(b)
Outstanding at beginning of year 174,261   $40.05      198,173   $38.12      190,432   $36.45
Granted                           12,328    53.82       14,137    47.47       41,630    39.89
Exercised                        (30,945)   35.40      (31,614)   30.57      (25,833)   26.74
Forfeited/expired                 (5,495)   43.31       (6,435)   43.82       (8,056)   43.56
Outstanding at end of year       150,149    42.03      174,261    40.05      198,173    38.12
Exercisable at end of year        89,652    40.52       94,643    36.41       97,663    32.56

Stock options outstanding and exercisable at December 31, 2005(a)

                               Options Outstanding                  Options Exercisable
Range of                   Options   Average   Average          Options   Average   Average
Exercise Price                       Price(b)  Life(c)                    Price(b)  Life(c)
$14.40 to $21.54               905   $20.01    3.56 yrs.            905   $20.01    3.56 yrs.
$23.00 to $33.75            14,559    30.46    3.07              14,398    30.50    3.05
$34.00 to $43.50            82,410    39.44    5.34              48,921    39.19    4.10
$43.75 to $56.75            52,275    49.77    7.17              25,428    49.48    6.09
                           150,149    42.03    5.67              89,652    40.52    4.45

(a) Options are in thousands and include options previously granted under Quaker plans. No additional options or shares may be granted under the Quaker plans.
(b) Weighted-average exercise price.
(c) Weighted-average contractual life remaining.

Our RSU Activity(a)

                                            2005                             2004
                                  RSUs   Average     Average       RSUs   Average     Average
                                         Intrinsic   Life(c)              Intrinsic   Life(c)
                                         Value(b)                         Value(b)
Outstanding at beginning of year  2,922  $47.30                       -
Granted                           3,097   53.83                   3,077   $47.28
Converted                           (91)  48.73                     (18)   47.25
Forfeited/expired                  (259)  50.51                    (137)   47.25
Outstanding at end of year        5,669   50.70      1.8 yrs.     2,922    47.30      2.2 yrs.

(a) RSUs are in thousands.
(b) Weighted-average intrinsic value.
(c) Weighted-average contractual life remaining.

Other stock-based compensation data

                                                          Stock Options                          RSUs
                                                   2005        2004        2003         2005      2004
Weighted-average fair value of options granted   $13.45      $12.04      $11.21
Total intrinsic value of options/RSUs
  exercised/converted(a)                        $632,603    $667,001    $466,719      $4,974      $914
Total intrinsic value of options/RSUs
  outstanding(a)                              $2,553,594  $2,062,153  $1,641,505    $334,931  $151,760
Total intrinsic value of options exercisable(a) $1,662,198 $1,464,926 $1,348,658
(a) In thousands.

At December 31, 2005, there was $315 million of total unrecognized compensation cost related to nonvested share-based compensation grants. This unrecognized compensation is expected to be recognized over a weighted-average period of 1.6 years.

Financial Statements and Accompanying Notes A17

Note 7 Pension, Retiree Medical and Savings Plans

Our pension plans cover full-time employees in the U.S. and certain international employees. Benefits are determined based on either years of service or a combination of years of service and earnings. U.S. retirees are also eligible for medical and life insurance benefits (retiree medical) if they meet age and service requirements. Generally, our share of retiree medical costs is capped at specified dollar amounts, which vary based upon years of service, with retirees contributing the remainder of the costs. We use a September 30 measurement date and all plan assets and liabilities are generally reported as of that date. The cost or benefit of plan changes that increase or decrease benefits for prior employee service (prior service cost) is included in expense on a straight-line basis over the average remaining service period of employees expected to receive benefits. The Medicare Act was signed into law in December 2003 and we applied the provisions of the Medicare Act to our plans in 2005 and 2004. The Medicare Act provides a subsidy for sponsors of retiree medical plans who offer drug benefits equivalent to those provided under Medicare. As a result of the Medicare Act, our 2005 and 2004 retiree medical costs were $11 million and $7 million lower, respectively, and our 2005 and 2004 liabilities were reduced by $136 million and $80 million, respectively.
We expect our 2006 retiree medical costs to be approximately $18 million lower than they otherwise would have been as a result of the Medicare Act. For additional unaudited information on our pension and retiree medical plans and related accounting policies and assumptions, see Our Critical Accounting Policies in Management's Discussion and Analysis.

Weighted-average assumptions

                                              Pension
                                      U.S.                  International           Retiree Medical
                                2005   2004   2003      2005   2004   2003      2005   2004   2003
Liability discount rate         5.7%   6.1%   6.1%      5.1%   6.1%   6.1%      5.7%   6.1%   6.1%
Expense discount rate           6.1%   6.1%   6.7%      6.1%   6.1%   6.4%      6.1%   6.1%   6.7%
Expected return on plan assets  7.8%   7.8%   8.3%      8.0%   8.0%   8.0%
Rate of compensation increases  4.4%   4.5%   4.5%      4.1%   3.9%   3.8%

Components of benefit expense

                                              Pension
                                      U.S.                  International           Retiree Medical
                                2005   2004   2003      2005   2004   2003      2005   2004   2003
Service cost                   $ 213  $ 193  $ 153      $ 32   $ 27   $ 24      $ 40   $ 38   $ 33
Interest cost                    296    271    245        55     47     39        78     72     73
Expected return on plan assets  (344)  (325)  (305)      (69)   (65)   (54)
Amortization of prior
  service cost/(benefit)           3      6      6         1      1      -       (11)    (8)    (3)
Amortization of experience loss  106     81     44        15      9      5        26     19     13
Benefit expense                  274    226    143        34     19     14       133    121    116
Settlement/curtailment loss        -      4      4         -      1      -         -      -      -
Special termination benefits      21     19      -         -      1      -         2      4      -
Total                          $ 295  $ 249  $ 147      $ 34   $ 21   $ 14      $135   $125   $116

2005 U.S.
Change in projected benefit liability Liability at beginning of year Service cost Interest cost Plan amendments Participant contributions Experience loss/(gain) Benefit payments Settlement/curtailment loss Special termination benefits Foreign currency adjustment Other Liability at end of year Liability at end of year for service to date Change in fair value of plan assets Fair value at beginning of year Actual return on plan assets Employer contributions/funding Participant contributions Benefit payments Settlement/curtailment loss Foreign currency adjustment Other Fair value at end of year $4,968 213 296 517 (241) 21 (3) $5,771 $4,783 $4,152 477 699 (241) (1) $5,086 2004 Pension 2005 2004 International $ 952 32 55 3 10 203 (28) (68) 104 $1,263 $1,047 $ 838 142 104 10 (28) (61) 94 $1,099 $(164) 17 474 4 $ 331 $367 1 (41) 4 $331 $194 2 7 (73) (15) (22) $ 93 $(65) $(84) $33 $758 27 47 1 9 73 (29) (2) 1 67 $952 $779 $687 77 37 9 (29) (2) 59 $838 $(113) 13 380 7 $ 287 $294 5 (37) 25 $287 $4 65 4 (12) (9) 26 $ 78 $(191) $(227) $161 Retiree Medical 2005 2004 $4,456 193 271 (17) 261 (205) (9) 18 $4,968 $4,164 $3,558 392 416 (205) (9) $4,152 $ (817) 9 2,013 5 $1,210 $1,572 (387) 25 $1,210 $ 196 65 (67) (81) (5) $108 $1,319 40 78 (8) (45) (74) 2 $1,312 $1,264 38 72 (41) 58 (76) 4 $1,319 $ 74 (74) $ $(1,312) (113) 402 19 $(1,004) $ (1,004) $(1,004) $ 61 (54) (26) (52) $(71) $(1,312) $(1,312) $ $ 76 (76) $ $(1,319) (116) 473 19 $ (943) $ (943) $(943) $ 109 31 (19) (82) $ 39 $(1,319) $(1,319) $ Funded status as recognized in our Consolidated Balance Sheet Funded status at end of year $ (685) 5 Unrecognized prior service cost/(benefit) 2,288 Unrecognized experience loss Fourth quarter benefit payments 5 Net amounts recognized $1,613 Net amounts as recognized in our Consolidated Balance Sheet Other assets $2,068 Intangible assets Other liabilities (479) Accumulated other comprehensive loss 24 Net amounts recognized $1,613 Components of increase in unrecognized experience loss 
Decrease in discount rate $ 365 57 Employee-related assumption changes 95 Liability-related experience different from assumptions (133) Actual asset return different from expected return (106) Amortization of losses Other, including foreign currency adjustments and 2003 Medicare Act (3) Total $ 275

Selected information for plans with liability for service to date in excess of plan assets

                                    2005      2004
Liability for service to date     $(374)    $(320)
Projected benefit liability       $(815)    $(685)
Fair value of plan assets         $    8    $   11

Of the total projected pension benefit liability at year-end 2005, $765 million relates to plans that we do not fund because the funding of such plans does not receive favorable tax treatment.

Future Benefit Payments

Our estimated future benefit payments are as follows:

                   2006    2007    2008    2009    2010    2011-15
Pension            $235    $255    $275    $300    $330    $2,215
Retiree medical    $ 85    $ 90    $ 90    $ 95    $100    $  545

These future benefits to beneficiaries include payments from both funded and unfunded pension plans.

Pension Assets

The expected return on pension plan assets is based on our historical experience, our pension plan investment guidelines, and our expectations for long-term rates of return. We use a market-related value method that recognizes each year's asset gain or loss over a five-year period. Therefore, it takes five years for the gain or loss from any one year to be fully included in the value of pension plan assets that is used to calculate the expected return. Our pension plan investment guidelines are established based upon an evaluation of market conditions, tolerance for risk and cash requirements for benefit payments. Our investment objective is to ensure that funds are available to meet the plans' benefit obligations when they are due. Our investment strategy is to prudently invest plan assets in high-quality and diversified equity and debt securities to achieve our long-term return expectation.
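The five-year smoothing described above can be sketched mechanically: each year's asset gain or loss relative to the expected return is phased into the asset base in equal fifths until fully recognized. The equal-fifths phase-in and all dollar figures below are illustrative assumptions, not PepsiCo's actual calculation:

```python
def market_related_value(expected_base, gains_by_age, recognition_years=5):
    """Smoothed ('market-related') value of pension plan assets.

    Each year's asset gain or loss versus the expected return is phased
    into the asset base in equal fifths over five years. gains_by_age maps
    years-since-occurrence to the gain/(loss); figures are hypothetical.
    """
    value = expected_base
    for age, gain in gains_by_age.items():
        value += gain * min(age / recognition_years, 1.0)
    return value

# Hypothetical plan (in millions): $4,000 base, with a $200 gain one year
# ago, a $50 loss two years ago and a $100 gain three years ago.
print(market_related_value(4000, {1: 200, 2: -50, 3: 100}))  # 4080.0
```

Because only 1/5, 2/5 and 3/5 of those gains and losses have been recognized so far, the smoothed value (4,080) lags the full market value, which is the point of the method: it damps year-to-year swings in the expected-return calculation.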
Our target allocation and actual pension plan asset allocations for the plan years 2005 and 2004 are below. Pension assets include approximately 5.5 million shares of PepsiCo common stock with a market value of $311 million in 2005, and 5.5 million shares with a market value of $267 million in 2004. Our investment policy limits the investment in PepsiCo stock at the time of investment to 10% of the fair value of plan assets.

                            Target         Actual Allocation
Asset Category              Allocation      2005      2004
Equity securities              60%           60%       60%
Debt securities                40%           39%       39%
Other, primarily cash           -             1%        1%
Total                         100%          100%      100%

Retiree Medical Cost Trend Rates

An average increase of 10% in the cost of covered retiree medical benefits is assumed for 2006. This average increase is then projected to decline gradually to 5% in 2010. A one-percentage-point change in the assumed trend rate would have the following effects:

                                              1% Increase    1% Decrease
2005 service and interest cost components         $ 3            $ (2)
2005 benefit liability                            $38            $(33)

Savings Plans

Our U.S. employees are eligible to participate in 401(k) savings plans, which are voluntary defined contribution plans. The plans are designed to help employees accumulate additional savings for retirement. We make matching contributions on a portion of eligible pay based on years of service. In 2005 and 2004, our matching contributions were $52 million and $35 million, respectively.

Note 8 Noncontrolled Bottling Affiliates

Our most significant noncontrolled bottling affiliates are PBG and PAS. Approximately 10% of our net revenue in 2005, 2004 and 2003 reflects sales to PBG.

The Pepsi Bottling Group

In addition to approximately 41% and 42% of PBG's outstanding common stock that we own at year-end 2005 and 2004, respectively, we own 100% of PBG's class B common stock and approximately 7% of the equity of Bottling Group, LLC, PBG's principal operating subsidiary. This gives us economic ownership of approximately 45% and 46% of PBG's combined operations at year-end 2005 and 2004, respectively.
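The ~45%/46% economic-ownership figures can be roughly reconciled from the pieces disclosed above. This is an illustrative look-through calculation only; it assumes PBG holds essentially all of Bottling Group, LLC's equity other than our direct ~7% stake, which the note does not state explicitly:

```python
def economic_ownership(pbg_equity_stake, direct_llc_stake):
    """Approximate look-through ownership of PBG's combined operations.

    pbg_equity_stake: our share of PBG's equity (common plus class B).
    direct_llc_stake: our direct stake in Bottling Group, LLC.
    Assumes (illustratively) that PBG holds the remainder of the LLC.
    """
    pbg_llc_share = 1.0 - direct_llc_stake  # assumed, not disclosed
    return direct_llc_stake + pbg_equity_stake * pbg_llc_share

print(round(economic_ownership(0.41, 0.07), 2))  # 2005: 0.45
print(round(economic_ownership(0.42, 0.07), 2))  # 2004: 0.46
```

Under that assumption the disclosed stakes reproduce the stated ~45% and ~46% economic ownership for 2005 and 2004.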
In 2005, bottling equity income includes $126 million of pre-tax gains on our sales of PBG stock. PBG's summarized financial information is as follows:

                            2005       2004       2003
Current assets           $ 2,412    $ 2,183
Noncurrent assets          9,112      8,754
Total assets             $11,524    $10,937
Current liabilities      $ 2,598    $ 1,725
Noncurrent liabilities     6,387      6,818
Minority interest            496        445
Total liabilities        $ 9,481    $ 8,988
Our investment           $ 1,738    $ 1,594
Net revenue              $11,885    $10,906    $10,265
Gross profit             $ 5,632    $ 5,250    $ 5,050
Operating profit         $ 1,023    $   976    $   956
Net income               $   466    $   457    $   416

Our investment in PBG, which includes the related goodwill, was $400 million and $321 million higher than our ownership interest in their net assets at year-end 2005 and 2004, respectively. Based upon the quoted closing price of PBG shares at year-end 2005 and 2004, the calculated market value of our shares in PBG, excluding our investment in Bottling Group, LLC, exceeded our investment balance by approximately $1.5 billion and $1.7 billion, respectively.

PepsiAmericas

At year-end 2005 and 2004, we owned approximately 43% and 41% of PepsiAmericas, respectively, and their summarized financial information is as follows:

                            2005      2004      2003
Current assets            $  598    $  530
Noncurrent assets          3,456     3,000
Total assets              $4,054    $3,530
Current liabilities       $  722    $  521
Noncurrent liabilities     1,763     1,386
Total liabilities         $2,485    $1,907
Our investment            $  968    $  924
Net revenue               $3,726    $3,345    $3,237
Gross profit              $1,562    $1,423    $1,360
Operating profit          $  393    $  340    $  316
Net income                $  195    $  182    $  158

Our investment in PAS, which includes the related goodwill, was $292 million and $253 million higher than our ownership interest in their net assets at year-end 2005 and 2004, respectively. Based upon the quoted closing price of PAS shares at year-end 2005 and 2004, the calculated market value of our shares in PepsiAmericas exceeded our investment balance by approximately $364 million and $277 million, respectively.
In January 2005, PAS acquired a regional bottler, Central Investment Corporation. The table above includes the results of Central Investment Corporation from the transaction date forward.

Related Party Transactions

Our significant related party transactions involve our noncontrolled bottling affiliates. We sell concentrate to these affiliates, which is used in the production of carbonated soft drinks and non-carbonated beverages. We also sell certain finished goods to these affiliates and we receive royalties for the use of our trademarks for certain products. Sales of concentrate and finished goods are reported net of bottler funding. For further unaudited information on these bottlers, see Our Customers in Management's Discussion and Analysis. These transactions with our bottling affiliates are reflected in our consolidated financial statements as follows:

                                                  2005     2004     2003
Net revenue                                     $4,633   $4,170   $3,699
Selling, general and administrative expenses    $  143   $  114   $  128
Accounts and notes receivable                   $  178   $  157
Accounts payable and other current liabilities  $  117   $   95

Such amounts are settled on terms consistent with other trade receivables and payables. See Note 9 regarding our guarantee of certain PBG debt. In addition, we coordinate, on an aggregate basis, the negotiation and purchase of sweeteners and other raw materials requirements for certain of our bottlers with suppliers. Once we have negotiated the contracts, the bottlers order and take delivery directly from the supplier and pay the suppliers directly. Consequently, these transactions are not reflected in our consolidated financial statements. As the contracting party, we could be liable to these suppliers in the event of any nonpayment by our bottlers, but we consider this exposure to be remote.
Note 9 Debt Obligations and Commitments

                                                             2005      2004
Short-term debt obligations
  Current maturities of long-term debt                      $  143    $  160
  Commercial paper (3.3% and 1.6%)                           3,140     1,287
  Other borrowings (7.4% and 6.6%)                             356       357
  Amounts reclassified to long-term debt                      (750)     (750)
                                                            $2,889    $1,054
Long-term debt obligations
  Short-term borrowings, reclassified                       $  750    $  750
  Notes due 2006-2026 (5.4% and 4.7%)                        1,161     1,274
  Zero coupon notes, $475 million due 2006-2012 (13.4%)        312       321
  Other, due 2006-2014 (6.3% and 6.2%)                         233       212
                                                             2,456     2,557
  Less: current maturities of long-term debt obligations     (143)     (160)
                                                            $2,313    $2,397

The interest rates in the above table reflect weighted-average rates as of year-end. At December 31, 2005, approximately 78% of total debt, after the impact of the associated interest rate swaps, was exposed to variable interest rates, compared to 67% at December 25, 2004. In addition to variable rate long-term debt, all debt with maturities of less than one year is categorized as variable for purposes of this measure.

Cross Currency Interest Rate Swaps

In 2004, we entered into a cross currency interest rate swap to hedge the currency exposure on U.S. dollar denominated debt of $50 million held by a foreign affiliate. The terms of this swap match the terms of the debt it modifies. The swap matures in 2008. The unrecognized gain related to this swap was less than $1 million at December 31, 2005, resulting in a U.S. dollar liability of $50 million. At December 25, 2004, the unrecognized loss related to this swap was $3 million, resulting in a U.S. dollar liability of $53 million. We have also entered into cross currency interest rate swaps to hedge the currency exposure on U.S. dollar denominated intercompany debt of $125 million. The terms of the swaps match the terms of the debt they modify. The swaps mature over the next two years. The net unrecognized gain related to these swaps was $5 million at December 31, 2005.
The net unrecognized loss related to these swaps was less than $1 million at December 25, 2004. Short-term borrowings are reclassified to long-term when we have the intent and ability, through the existence of the unused lines of credit, to refinance these borrowings on a long-term basis. At year-end 2005, we maintained $2.1 billion in corporate lines of credit subject to normal banking terms and conditions. These credit facilities support short-term debt issuances and remained unused as of December 31, 2005. Of the $2.1 billion, $1.35 billion expires in May 2006 with the remaining $750 million expiring in June 2009. In addition, $181 million of our debt was outstanding on various lines of credit maintained for our international divisions. These lines of credit are subject to normal banking terms and conditions and are committed to the extent of our borrowings.

Interest Rate Swaps

We entered into interest rate swaps in 2004 to effectively convert the interest rate of a specific debt issuance from a fixed rate of 3.2% to a variable rate. The variable weighted-average interest rate that we pay is linked to LIBOR and is subject to change. The notional amount of the interest rate swaps outstanding at December 31, 2005 and December 25, 2004 was $500 million. The terms of the interest rate swaps match the terms of the debt they modify. The swaps mature in 2007.

Long-Term Contractual Commitments

Payments Due by Period

                                 Total     2006    2007-2008   2009-2010   2011 and beyond
Long-term debt obligations(a)   $2,313    $    -     $1,052       $  876        $  385
Operating leases                   769       187        253          132           197
Purchasing commitments(b)        4,533     1,169      1,630          775           959
Marketing commitments            1,487       412        438          381           256
Other commitments                   99        82         10            6             1
                                $9,201    $1,850     $3,383       $2,170        $1,798

(a) Excludes current maturities of long-term debt of $143 million which are classified within current liabilities.
(b) Includes approximately $13 million of long-term commitments which are reflected in other liabilities in our Consolidated Balance Sheet.

The above table reflects non-cancelable commitments as of December 31, 2005 based on year-end foreign exchange rates. Most long-term contractual commitments, except for our long-term debt obligations, are not recorded in our Consolidated Balance Sheet. Non-cancelable operating leases primarily represent building leases. Non-cancelable purchasing commitments are primarily for oranges and orange juice to be used for our Tropicana brand beverages. Non-cancelable marketing commitments are primarily for sports marketing and with our fountain customers. Bottler funding is not reflected in our long-term contractual commitments as it is negotiated on an annual basis. See Note 7 regarding our pension and retiree medical obligations and discussion below regarding our commitments to noncontrolled bottling affiliates and former restaurant operations.

Off-Balance Sheet Arrangements

It is not our business practice to enter into off-balance sheet arrangements, other than in the normal course of business, nor is it our policy to issue guarantees to our bottlers, noncontrolled affiliates or third parties. However, certain guarantees were necessary to facilitate the separation of our bottling and restaurant operations from us. In connection with these transactions, we have guaranteed $2.3 billion of Bottling Group, LLC's long-term debt through 2012 and $28 million of YUM! Brands, Inc. (YUM) outstanding obligations, primarily property leases, through 2020.
The terms of our Bottling Group, LLC debt guarantee are intended to preserve the structure of PBG's separation from us, and our payment obligation would be triggered if Bottling Group, LLC failed to perform under these debt obligations or the structure significantly changed. Our guarantees of certain obligations ensured YUM's continued use of certain properties. These guarantees would require our cash payment if YUM failed to perform under these lease obligations. See Our Liquidity, Capital Resources and Financial Position in Management's Discussion and Analysis for further unaudited information on our borrowings.

Note 10 Risk Management

We are exposed to the risk of loss arising from adverse changes in: commodity prices, affecting the cost of our raw materials and energy; foreign exchange risks; interest rates; stock prices; and discount rates affecting the measurement of our pension and retiree medical liabilities. In the normal course of business, we manage these risks through a variety of strategies, including the use of derivatives. Certain derivatives are designated as either cash flow or fair value hedges and qualify for hedge accounting treatment, while others do not qualify and are marked to market through earnings. See Our Business Risks in Management's Discussion and Analysis for further unaudited information on our business risks. If the derivative instrument is terminated, we continue to defer the related gain or loss and include it as a component of the cost of the underlying hedged item. Upon determination that the underlying hedged item will not be part of an actual transaction, we recognize the related gain or loss in net income in that period. We also use derivatives that do not qualify for hedge accounting treatment. We account for such derivatives at market value with the resulting gains and losses reflected in our income statement.
We do not use derivative instruments for trading or speculative purposes and we limit our exposure to individual counterparties to manage credit risk.

Commodity Prices

We are subject to commodity price risk because our ability to recover increased costs through higher pricing may be limited in the competitive environment in which we operate. This risk is managed through the use of fixed-price purchase orders, pricing agreements, geographic diversity and derivatives. We use derivatives, with terms of no more than two years, to economically hedge price fluctuations related to a portion of our anticipated commodity purchases, primarily for natural gas and diesel fuel. For those derivatives that are designated as cash flow hedges, any ineffectiveness is recorded immediately. However, our commodity cash flow hedges have not had any significant ineffectiveness for all periods presented. We classify both the earnings and cash flow impact from these derivatives consistent with the underlying hedged item. During the next 12 months, we expect to reclassify gains of $24 million related to cash flow hedges from accumulated other comprehensive loss into net income.

Foreign Exchange

Our operations outside of the U.S. generate over a third of our net revenue and expose us to movements of foreign exchange rates. Ineffectiveness on these hedges has not been material.

Interest Rates

We centrally manage our debt and investment portfolios considering investment opportunities and risks, tax consequences and overall financing strategies. We may use interest rate and cross currency interest rate swaps to manage our overall interest expense and foreign exchange risk. These instruments effectively change the interest rate and currency of specific debt issuances. These swaps are entered into concurrently with the issuance of the debt that they are intended to modify.
The notional amount, interest payment and maturity date of the swaps match the principal, interest payment and maturity date of the related debt. These swaps are entered into only with strong creditworthy counterparties, are settled on a net basis and are of relatively short duration.

Stock Prices

The portion of our deferred compensation liability that is based on certain market indices and on our stock price is subject to market risk. We hold mutual fund investments and prepaid forward contracts to manage this risk. Changes in the fair value of these investments and contracts are recognized immediately in earnings and are offset by changes in the related compensation liability.

Fair Value

All derivative instruments are recognized in our Consolidated Balance Sheet at fair value. The fair value of our derivative instruments is generally based on quoted market prices. Book and fair values of our derivative and financial instruments are as follows:

                                                    2005                       2004
                                           Book Value  Fair Value     Book Value  Fair Value
Assets
  Cash and cash equivalents(a)               $1,716      $1,716         $1,280      $1,280
  Short-term investments(b)                  $3,166      $3,166         $2,165      $2,165
  Forward exchange contracts(c)              $   19      $   19         $    8      $    8
  Commodity contracts(d)                     $   41      $   41         $    7      $    7
  Prepaid forward contract(e)                $  107      $  107         $  120      $  120
  Cross currency interest rate swaps(f)      $    6      $    6         $    -      $    -
Liabilities
  Forward exchange contracts(c)              $   15      $   15         $   35      $   35
  Commodity contracts(d)                     $    3      $    3         $    8      $    8
  Debt obligations                           $5,202      $5,378         $3,451      $3,676
  Interest rate swaps(g)                     $    9      $    9         $    1      $    1
  Cross currency interest rate swaps(f)      $    -      $    -         $    3      $    3

Included in our Consolidated Balance Sheet under the captions noted above or as indicated below. In addition, derivatives are designated as accounting hedges unless otherwise noted below.
(a) Book value approximates fair value due to the short maturity.
(b) Principally short-term time deposits and includes $124 million at December 31, 2005 and $118 million at December 25, 2004 of mutual fund investments used to manage a portion of market risk arising from our deferred compensation liability.
(c) 2005 asset includes $14 million related to derivatives not designated as accounting hedges. Assets are reported within current assets and other assets and liabilities are reported within current liabilities and other liabilities.
(d) 2005 asset includes $2 million related to derivatives not designated as accounting hedges and the liability relates entirely to derivatives not designated as accounting hedges. Assets are reported within current assets and other assets and liabilities are reported within current liabilities and other liabilities.
(e) Included in current assets and other assets.
(f) Asset included within other assets and liability included in long-term debt.
(g) Reported in other liabilities.

This table excludes guarantees, including our guarantee of $2.3 billion of Bottling Group, LLC's long-term debt.
The guarantee had a fair value of $47 million at December 31, 2005 and $46 million at December 25, 2004 based on an external estimate of the cost to us of transferring the liability to an independent financial institution. See Note 9 for additional information on our guarantees.

Note 11 Net Income per Common Share from Continuing Operations

Diluted net income per common share reflects the potential dilution that could occur if stock options and RSUs and preferred shares were converted into common shares. Options to purchase 3.0 million shares in 2005, 7.0 million shares in 2004 and 49.0 million shares in 2003 were not included in the calculation of diluted earnings per common share because these options were out-of-the-money. Out-of-the-money options had average exercise prices of $53.77 in 2005, $52.88 in 2004 and $48.27 in 2003. The computations of basic and diluted net income per common share from continuing operations are as follows:

                                             2005                2004                2003
                                        Income  Shares(a)   Income  Shares(a)   Income  Shares(a)
Net income                              $4,078              $4,174              $3,568
Preferred shares:
  Dividends                                 (2)                 (3)                 (3)
  Redemption premium                       (16)                (22)                (12)
Net income available for common
  shareholders                          $4,060    1,669     $4,149    1,696     $3,553    1,718
Basic net income per common share        $2.43               $2.45               $2.07
Dilutive securities:
  Stock options and RSUs                             35                 31                 17
  ESOP convertible preferred stock          18        2         24        2         15        3
  Unvested stock awards                                                                      1
Diluted                                 $4,078    1,706     $4,173    1,729     $3,568    1,739
Diluted net income per common share      $2.39               $2.41               $2.05
(a) Weighted-average common shares outstanding.

Note 12 Preferred and Common Stock

As of December 31, 2005 and December 25, 2004, there were 3.6 billion shares of common stock and 3 million shares of convertible preferred stock authorized. The preferred stock was issued only for an employee stock ownership plan (ESOP) established by Quaker and these shares are redeemable for common stock by the ESOP participants.
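The per-share amounts in Note 11 above can be reproduced arithmetically from the disclosed components. A minimal sketch using the 2005 figures (dollars and shares in millions; the 37 dilutive shares below are the 35 options/RSUs plus 2 ESOP shares):

```python
def eps(income, pref_dividends, redemption_premium, basic_shares,
        esop_income_addback, dilutive_shares):
    """Basic and diluted EPS following Note 11's computation.

    Basic: income less preferred dividends and redemption premium, over
    weighted-average common shares. Diluted: the ESOP preferred income
    add-back is restored and dilutive shares are added to the denominator.
    """
    available = income - pref_dividends - redemption_premium
    basic = available / basic_shares
    diluted = (available + esop_income_addback) / (basic_shares + dilutive_shares)
    return round(basic, 2), round(diluted, 2)

# 2005 figures from Note 11: $4,078 income, $2 preferred dividends,
# $16 redemption premium, 1,669 basic shares, $18 ESOP add-back.
print(eps(4078, 2, 16, 1669, 18, 37))  # (2.43, 2.39)
```

The same function reproduces the 2004 and 2003 columns ($2.45/$2.41 and $2.07/$2.05) from their respective inputs, subject to the rounding noted in the table.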
The preferred stock accrues dividends at an annual rate of $5.46 per share. At year-end 2005 and 2004, there were 803,953 preferred shares issued and 354,853 and 424,853 shares outstanding, respectively. Each share is convertible at the option of the holder into 4.9625 shares of common stock. The preferred shares may be called by us upon written notice at $78 per share plus accrued and unpaid dividends. As of December 31, 2005, 0.3 million outstanding shares of preferred stock with a fair value of $104 million and 17 million shares of common stock were held in the accounts of ESOP participants. As of December 25, 2004, 0.4 million outstanding shares of preferred stock with a fair value of $110 million and 18 million shares of common stock were held in the accounts of ESOP participants. Quaker made the final award to its ESOP plan in June 2001.

(shares in millions)                  2005              2004              2003
                                  Shares  Amount    Shares  Amount    Shares  Amount
Preferred stock                      0.8     $41       0.8     $41       0.8     $41
Repurchased preferred stock
  Balance, beginning of year         0.3     $90       0.2     $63       0.4     $48
  Redemptions                        0.1      19       0.1      27       0.1      15
  Balance, end of year               0.4   $110*       0.3     $90       0.5     $63

*Does not sum due to rounding.

Note 13 Accumulated Other Comprehensive Loss

Comprehensive income is a measure of income which includes both net income and other comprehensive income or loss. Other comprehensive loss results from items deferred on the balance sheet in shareholders' equity. Other comprehensive (loss)/income was $(167) million in 2005, $381 million in 2004, and $405 million in 2003. The accumulated balances for each component of other comprehensive loss were as follows:

                                            2005       2004       2003
Currency translation adjustment           $  (971)   $  (720)   $(1,121)
Cash flow hedges, net of tax(a)                27        (19)       (12)
Minimum pension liability adjustment(b)      (138)      (154)      (135)
Unrealized gain on securities, net of tax      31          7          1
Other                                          (2)
Accumulated other comprehensive loss      $(1,053)   $  (886)   $(1,267)

(a) Includes net commodity gains of $55 million in 2005.
Also includes no impact in 2005, $6 million gain in 2004 and $8 million gain in 2003 for our share of our equity investees' accumulated derivative activity. Deferred gains/(losses) reclassified into earnings were $8 million in 2005, $(10) million in 2004 and no impact in 2003.
(b) Net of taxes of $72 million in 2005, $77 million in 2004 and $67 million in 2003. Also includes $120 million in 2005, $121 million in 2004 and $110 million in 2003 for our share of our equity investees' minimum pension liability adjustments.

Financial Statements and Accompanying Notes A25

Note 14 Supplemental Financial Information
(in millions)

                                                 2005      2004      2003
Accounts receivable
  Trade receivables                            $2,718    $2,505
  Other receivables                               618       591
                                                3,336     3,096
  Allowance, beginning of year                     97       105       116
  Net amounts (credited)/charged to expense        (1)       18        32
  Deductions(a)                                   (22)      (25)      (43)
  Other(b)                                          1        (1)
  Allowance, end of year                           75        97       105
  Net receivables                              $3,261    $2,999
Inventory(c)
  Raw materials                                $  738    $  665
  Work-in-process                                 112       156
  Finished goods                                  843       720
                                               $1,693    $1,541
Accounts payable and other current liabilities
  Accounts payable                             $1,799    $1,731
  Accrued marketplace spending                  1,383     1,285
  Accrued compensation and benefits             1,062       961
  Dividends payable                               431       387
  Insurance accruals                              136       131
  Other current liabilities                     1,160     1,104
                                               $5,971    $5,599
Other liabilities
  Reserves for income taxes                    $1,884    $1,567
  Other                                         2,439     2,532
                                               $4,323    $4,099
Other supplemental information
  Rent expense                                 $  228    $  245    $  231
  Interest paid                                   213       137       147
  Income taxes paid, net of refunds             1,258     1,833     1,530
Acquisitions(d)
  Fair value of assets acquired                $1,089    $   78    $  178
  Cash paid and debt issued                    (1,096)      (64)      (71)
  SVE minority interest eliminated                216
  Liabilities assumed                          $  209    $   14    $  107

(a) Includes accounts written off.
(b) Includes collections of previously written-off accounts and currency translation effects.
(c) Inventories are valued at the lower of cost or market. Cost is determined using the average, first-in, first-out (FIFO) or last-in, first-out (LIFO) methods. Approximately 17% in 2005 and 15% in 2004 of the inventory cost was computed using the LIFO method. The differences between LIFO and FIFO methods of valuing these inventories were not material.
(d) In 2005, these amounts include the impact of our acquisition of General Mills, Inc.'s 40.5% ownership interest in SVE for $750 million. The excess of our purchase price over the fair value of net assets acquired is $250 million and is included in goodwill. We also reacquired rights to distribute global brands for $263 million, which is included in other nonamortizable intangible assets.
ADDITIONAL INFORMATION In addition to the financial statements and accompanying notes, companies are required to provide a report on internal control over financial reporting and to have an auditors report on the financial statements. In addition, PepsiCo has provided a report indicating that financial reporting is managements responsibility. Finally, PepsiCo also provides selected financial data it believes is useful. The two required reports are further explained below. Managements Report on Internal Control over Financial Reporting The Sarbanes-Oxley Act of 2002 requires managers of publicly traded companies to establish and maintain systems of internal control over the companys financial reporting processes. In addition, management must express its responsibility for financial reporting, and it must provide certifications regarding the accuracy of the financial statements. Auditors Report All publicly held corporations, as well as many other enterprises and organizations engage the services of independent certified public accountants for the purpose of obtaining an objective, expert report on their financial statements. Based on a comprehensive examination of the companys accounting system, accounting records, and the financial statements, the outside CPA issues the auditors report. The standard auditors report identifies who and what was audited and indicates the responsibilities of management and the auditor relative to the financial statements. It states that the audit was conducted in accordance with generally accepted auditing standards and discusses the nature and limitations of the audit. It then expresses an informed opinion as to (1) the fairness of the financial statements and (2) their conformity with generally accepted accounting principles. It also expresses an opinion regarding the effectiveness of the companys internal controls. All of this additional information for PepsiCo is provided on the following pages. 
Additional Information A27 Managements Responsibility for Financial Reporting To Our Shareholders: At PepsiCo, our actions the actions of all our associates are governed by our Worldwide Code of Conduct. This code is clearly aligned with our stated values a commitment to sustained growth, through empowered people, operating with responsibility and building trust. Both the code and our core values enable us to operate with integrity both within the letter and the spirit of the law. Our code of conduct is reinforced consistently at all levels and in all countries. We have maintained strong governance policies and practices for many years. The management of PepsiCo is responsible for the objectivity and integrity of our consolidated financial statements. The Audit Committee of the Board of Directors has engaged independent registered public accounting firm, KPMG LLP, to audit our consolidated financial statements and they have expressed an unqualified opinion. We are committed to providing timely, accurate and understandable information to investors. Our commitment encompasses the following: Maintaining strong controls over financial reporting. Our system of internal control is based on the control criteria framework of the Committee of Sponsoring Organizations of the Treadway Commission published in their report titled, Internal Control Integrated Framework. The system is designed to provide reasonable assurance that transactions are executed as authorized and accurately recorded; that assets are safeguarded; and that accounting records are sufficiently reliable to permit the preparation of financial statements that conform in all material respects with accounting principles generally accepted in the U.S. We maintain disclosure controls and procedures designed to ensure that information required to be disclosed in reports under the Securities Exchange Act of 1934 is recorded, processed, summarized and reported within the specified time periods. 
We monitor these internal controls through self-assessments and an ongoing program of internal audits. Our internal controls are reinforced through our Worldwide Code of Conduct, which sets forth our commitment to conduct business with integrity, and within both the letter and the spirit of the law. Exerting rigorous oversight of the business. We continuously review our business results and strategies. This encompasses financial discipline in our strategic and daily business decisions. Our Executive Committee is actively involved from understanding strategies and alternatives to reviewing key initiatives and financial performance. The intent is to ensure we remain objective in our assessments, constructively challenge our approach to potential business opportunities and issues, and monitor results and controls. Engaging strong and effective Corporate Governance from our Board of Directors. We have an active, capable and diligent Board that meets the required standards for independence, and we welcome the Boards oversight as a representative of our shareholders. Our Audit Committee comprises independent directors with the financial literacy, knowledge and experience to provide appropriate oversight. We review our critical accounting policies, financial reporting and internal control matters with them and encourage their direct communication with KPMG LLP, with our General Auditor, and with our General Counsel. In 2005, we named a senior compliance officer to lead and coordinate our compliance policies and practices. Providing investors with financial results that are complete, transparent and understandable. The consolidated financial statements and financial information included in this report are the responsibility of management. This includes preparing the financial statements in accordance with accounting principles generally accepted in the U.S., which require estimates based on managements best judgment. PepsiCo has a strong history of doing whats right. 
We realize that great companies are built on trust, strong ethical standards and principles. Our financial results are delivered from that culture of accountability, and we take responsibility for the quality and accuracy of our financial reporting. Peter A. Bridgman Senior Vice President and Controller Indra K. Nooyi President and Chief Financial Officer Steven S Reinemund Chairman of the Board and Chief Executive Officer A28 Appendix A Specimen Financial Statements: PepsiCo, Inc. Managements Report on Internal Control over Financial Reporting To Our Shareholders: Integrated Framework issued by the Committee of Sponsoring Organizations of the Treadway Commission. Based on that evaluation, our management concluded that our internal control over financial reporting is effective as of December 31, 2005. KPMG LLP, an independent registered public accounting firm, has audited the consolidated financial statements included in this Annual Report and, as part of their audit, has issued their report, included herein, (1) on our managements assessment of the effectiveness of our internal controls over financial reporting and (2) on the effectiveness of our internal control over financial reporting. Peter A. Bridgman Senior Vice President and Controller Indra K. Nooyi President and Chief Financial Officer Steven S Reinemund Chairman of the Board and Chief Executive Officer Additional Information A29 Report of Independent Registered Public Accounting Firm Board of Directors and Shareholders PepsiCo, Inc.: We have audited the accompanying Consolidated Balance Sheet of PepsiCo, Inc. and Subsidiaries as of December 31, 2005 and December 25, 2004 and the related Consolidated Statements of Income, Cash Flows and Common Shareholders Equity for each of the years in the three-year period ended December 31, 2005. We have also audited managements assessment, included in Managements Report on Internal Control over Financial Reporting, that PepsiCo, Inc. 
and Subsidiaries maintained effective internal control over financial reporting as of December 31, 2005, based on criteria established in Internal Control Integrated Framework issued by the Committee of Sponsoring Organizations of the Treadway Commission (COSO). PepsiCo, Inc.'s management is responsible for these consolidated financial statements, for maintaining effective internal control over financial reporting, and for its assessment of the effectiveness of internal control over financial reporting. Our responsibility is to express an opinion on these consolidated financial statements, an opinion on management's assessment, and an opinion on the effectiveness of PepsiCo, Inc.'s internal control over financial reporting, based on our audits. [...] In our opinion, the consolidated financial statements referred to above present fairly, in all material respects, the financial position of PepsiCo, Inc. and Subsidiaries as of December 31, 2005 and December 25, 2004, and the results of their operations and their cash flows for each of the years in the three-year period ended December 31, 2005, in conformity with United States generally accepted accounting principles. Also, in our opinion, management's assessment that PepsiCo, Inc. maintained effective internal control over financial reporting as of December 31, 2005, is fairly stated, in all material respects, based on criteria established in Internal Control Integrated Framework issued by COSO. Furthermore, in our opinion, PepsiCo, Inc. maintained, in all material respects, effective internal control over financial reporting as of December 31, 2005, based on criteria established in Internal Control Integrated Framework issued by COSO.

KPMG LLP
New York, New York
February 24, 2006
Selected Financial Data
(in millions except per share amounts, unaudited)

Quarterly                                      First      Second     Third      Fourth
                                               Quarter    Quarter    Quarter    Quarter
Net revenue                    2005            $6,585     $7,697     $8,184     $10,096
                               2004            $6,131     $7,070     $7,257     $ 8,803
Gross profit(a)                2005            $3,715     $4,383     $4,669     $ 5,619
                               2004            $3,466     $4,039     $4,139     $ 4,943
2005 restructuring charges(b)                                                   $    83
2004 restructuring and impairment charges(c)                                    $   150
2005 AJCA tax charge(d)                                              $   468    $    (8)
Net income(e)                  2005            $  912     $1,194     $  864     $ 1,108
                               2004            $  804     $1,059     $1,364     $   985
Net income per common share, basic(e)
                               2005            $ 0.54     $ 0.71     $ 0.52     $  0.66
                               2004            $ 0.47     $ 0.62     $ 0.80     $  0.58
Net income per common share, diluted(e)
                               2005            $ 0.53     $ 0.70     $ 0.51     $  0.65
                               2004            $ 0.46     $ 0.61     $ 0.79     $  0.58
Cash dividends declared per common share
                               2005            $ 0.23     $ 0.26     $ 0.26     $  0.26
                               2004            $ 0.16     $ 0.23     $ 0.23     $  0.23
2005 stock price per share(f)
  High                                         $55.71     $57.20     $56.73     $ 60.34
  Low                                          $51.34     $51.78     $52.07     $ 53.55
  Close                                        $52.62     $55.52     $54.65     $ 59.08
2004 stock price per share(f)
  High                                         $53.00     $55.48     $55.71     $ 53.00
  Low                                          $45.30     $50.28     $48.41     $ 47.37
  Close                                        $50.93     $54.95     $50.84     $ 51.94

Five-Year Summary                                2005      2004      2003      2002      2001
Net revenue                                   $32,562   $29,261   $26,971   $25,112   $23,512
Income from continuing operations             $ 4,078   $ 4,174   $ 3,568
Net income                                    $ 4,078   $ 4,212   $ 3,568   $ 3,000   $ 2,400
Income per common share, basic                $  2.43   $  2.45   $  2.07   $  1.69   $  1.35
Income per common share, diluted              $  2.39   $  2.41   $  2.05   $  1.68   $  1.33
Cash dividends declared per common share      $  1.01   $ 0.850   $ 0.630   $ 0.595   $ 0.575
Total assets                                  $31,727   $27,987   $25,327   $23,474   $21,695
Long-term debt                                $ 2,313   $ 2,397   $ 1,702   $ 2,187   $ 2,651
Return on invested capital(a)                   22.7%     27.4%     27.5%     25.7%     22.1%

(a) Return on invested capital is defined as adjusted net income divided by the sum of average shareholders' equity and average total debt. Adjusted net income is defined as net income plus net interest expense after tax. Net interest expense after tax was $62 million in 2005, $60 million in 2004, $72 million in 2003, $93 million in 2002, and $99 million in 2001. As a result of the adoption of SFAS 142, Goodwill and Other Intangible Assets, and the consolidation of SVE in 2002, the data provided above is not comparable.

Includes restructuring and impairment charges of:
               2005     2004     2003     2001
Pre-tax         $83     $150     $147      $31
After-tax       $55     $ 96     $100      $19
Per share     $0.03    $0.06    $0.06    $0.01

Includes Quaker merger-related costs of:
               2003     2002     2001
Pre-tax         $59     $224     $356
After-tax       $42     $190     $322
Per share     $0.02    $0.11    $0.18

The 2005 fiscal year consisted of fifty-three weeks compared to fifty-two weeks in our normal fiscal year. The 53rd week increased 2005 net revenue by an estimated $418 million and net income by an estimated $57 million or $0.03 per share. Cash dividends per common share in 2001 are those of pre-merger PepsiCo prior to the effective date of the merger. In the fourth quarter of 2004, we reached agreement with the IRS for an open issue related to our discontinued restaurant operations which resulted in a tax benefit of $38 million or $0.02 per share. The first, second, and third quarters consist of 12 weeks, and the fourth quarter consists of 16 weeks in 2004 and 17 weeks in 2005.

(a) Reflects net reclassifications in all periods from cost of sales to selling, general and administrative expenses related to the alignment of certain accounting policies in connection with our ongoing BPT initiative. See Note 1.
(b) The 2005 restructuring charges were $83 million ($55 million or $0.03 per share after-tax). See Note 3.
(c) The 2004 restructuring and impairment charges were $150 million ($96 million or $0.06 per share after-tax). See Note 3.
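The return on invested capital definition in footnote (a) can be sketched as follows. The 2005 net income and after-tax net interest figures come from the summary; the average capital base is a hypothetical placeholder, since the summary does not disclose the underlying equity and debt averages:

```python
# ROIC per footnote (a): adjusted net income divided by the sum of average
# shareholders' equity and average total debt. Adjusted net income is net
# income plus after-tax net interest expense.

net_income = 4078                # 2005, $ millions (from the summary)
net_interest_after_tax = 62      # 2005, $ millions (footnote (a))
adjusted_net_income = net_income + net_interest_after_tax   # 4,140

# Hypothetical average capital base (NOT a disclosed figure), chosen only
# to illustrate the ratio mechanics.
avg_equity_plus_debt = 18_238

roic = adjusted_net_income / avg_equity_plus_debt
print(f"{roic:.1%}")             # 22.7% under this assumed capital base
```

With a different (actual) capital base the ratio would differ; only the numerator here is taken from the report.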
(d) Represents income tax expense associated with the repatriation of earnings in connection with the AJCA. See Note 5.
(e) Fourth quarter 2004 net income reflects a tax benefit from discontinued operations of $38 million or $0.02 per share. See Note 5.
(f) Represents the composite high and low sales price and quarterly closing prices for one share of PepsiCo common stock.

Appendix B
SPECIMEN FINANCIAL STATEMENTS: The Coca-Cola Company

THE COCA-COLA COMPANY AND SUBSIDIARIES
CONSOLIDATED STATEMENTS OF INCOME
Year Ended December 31, (In millions except per share data)

                                                    2005       2004       2003
NET OPERATING REVENUES                           $23,104    $21,742    $20,857
Cost of goods sold                                 8,195      7,674      7,776
GROSS PROFIT                                      14,909     14,068     13,081
Selling, general and administrative expenses       8,739      7,890      7,287
Other operating charges                               85        480        573
OPERATING INCOME                                   6,085      5,698      5,221
Interest income                                      235        157        176
Interest expense                                     240        196        178
Equity income, net                                   680        621        406
Other loss, net                                      (93)       (82)      (138)
Gains on issuances of stock by equity investees       23         24          8
INCOME BEFORE INCOME TAXES                         6,690      6,222      5,495
Income taxes                                       1,818      1,375      1,148
NET INCOME                                       $ 4,872    $ 4,847    $ 4,347
BASIC NET INCOME PER SHARE                       $  2.04    $  2.00    $  1.77
DILUTED NET INCOME PER SHARE                     $  2.04    $  2.00    $  1.77
AVERAGE SHARES OUTSTANDING                         2,392      2,426      2,459
Effect of dilutive securities                          1          3          3
AVERAGE SHARES OUTSTANDING ASSUMING DILUTION       2,393      2,429      2,462

Refer to Notes to Consolidated Financial Statements.

The financial information herein is reprinted with permission from The Coca-Cola Company 2005 Annual Report. The accompanying Notes are an integral part of the consolidated financial statements. The complete financial statements are available through a link at the book's companion website.
B1 B2 Appendix B Specimen Financial Statements: The Coca-Cola Company THE COCA-COLA COMPANY AND SUBSIDIARIES CONSOLIDATED BALANCE SHEETS December 31, (In millions except par value) 2005 2004 ASSETS CURRENT ASSETS Cash and cash equivalents Marketable securities Trade accounts receivable, less allowances of $72 and $69, respectively Inventories Prepaid expenses and other assets TOTAL CURRENT ASSETS INVESTMENTS Equity method investments: Coca-Cola Enterprises Inc. Coca-Cola Hellenic Bottling Company S.A. Coca-Cola FEMSA, S.A. de C.V. Coca-Cola Amatil Limited Other, principally bottling companies; Issued 3,507 and 3,500 shares, respectively Capital surplus Reinvested earnings Accumulated other comprehensive income (loss) Treasury stock, at cost 1,138 and 1,091 shares, respectively TOTAL SHAREOWNERS EQUITY TOTAL LIABILITIES AND SHAREOWNERS EQUITY $ 4,701 66 2,281 1,424 1,778 10,250 $ 6,707 61 2,244 1,420 1,849 12,281 1,731 1,039 982 748 2,062 360 6,922 2,648 5,786 1,946 1,047 828 $ 29,427 1,569 1,067 792 736 1,733 355 6,252 2,981 6,091 2,037 1,097 702 $ 31,441 $ 4,493 4,518 28 797 9,836 1,154 1,730 352 877 5,492 31,299 (1,669) (19,644) 16,355 $ 4,403 4,531 1,490 709 11,133 1,157 2,814 402 875 4,928 29,105 (1,348) (17,625) 15,935 $ 31,441 $ 29,427 Refer to Notes to Consolidated Financial Statements. 
THE COCA-COLA COMPANY AND SUBSIDIARIES
CONSOLIDATED STATEMENTS OF CASH FLOWS
Year Ended December 31, (In millions)

                                                           2005       2004       2003
OPERATING ACTIVITIES
Net income                                              $ 4,872    $ 4,847    $ 4,347
Depreciation and amortization                               932        893        850
Stock-based compensation expense                            324        345        422
Deferred income taxes                                       (88)       162       (188)
Equity income or loss, net of dividends                    (446)      (476)      (294)
Foreign currency adjustments                                 47        (59)       (79)
Gains on issuances of stock by equity investees             (23)       (24)        (8)
Gains on sales of assets, including bottling interests       (9)       (20)        (5)
Other operating charges                                      85        480        330
Other items                                                 299        437        249
Net change in operating assets and liabilities              430       (617)      (168)
Net cash provided by operating activities                 6,423      5,968      5,456
INVESTING ACTIVITIES
Acquisitions and investments, principally trademarks
  and bottling companies                                   (637)      (267)      (359)
Purchases of investments and other assets                   (53)       (46)      (177)
Proceeds from disposals of investments and other assets      33        161        147
Purchases of property, plant and equipment                 (899)      (755)      (812)
Proceeds from disposals of property, plant and equipment     88        341         87
Other investing activities                                  (28)        63        178
Net cash used in investing activities                    (1,496)      (503)      (936)
FINANCING ACTIVITIES
Issuances of debt                                           178      3,030      1,026
Payments of debt                                         (2,460)    (1,316)    (1,119)
Issuances of stock                                          230        193         98
Purchases of stock for treasury                          (2,055)    (1,739)    (1,440)
Dividends                                                (2,678)    (2,429)    (2,166)
Net cash used in financing activities                    (6,785)    (2,261)    (3,601)
EFFECT OF EXCHANGE RATE CHANGES ON CASH
  AND CASH EQUIVALENTS                                     (148)       141        183
CASH AND CASH EQUIVALENTS
Net increase (decrease) during the year                  (2,006)     3,345      1,102
Balance at beginning of year                              6,707      3,362      2,260
Balance at end of year                                  $ 4,701    $ 6,707    $ 3,362

Refer to Notes to Consolidated Financial Statements.
B4 Appendix B Specimen Financial Statements: The Coca-Cola Company THE COCA-COLA COMPANY AND SUBSIDIARIES CONSOLIDATED STATEMENTS OF SHAREOWNERS EQUITY Year Ended December 31, (In millions except per share data) 2005 2004 2003 NUMBER OF COMMON SHARES OUTSTANDING Balance at beginning of year Stock issued to employees exercising stock options Purchases of stock for treasury1 Balance at end of year COMMON STOCK Balance at beginning of year Stock issued to employees exercising stock options Balance at end of year CAPITAL SURPLUS Balance at beginning of year Stock issued to employees exercising stock options Tax benefit from employees stock option and restricted stock plans Stock-based compensation Balance at end of year REINVESTED EARNINGS Balance at beginning of year Net income Dividends (per share $1.12, $1.00 and $0.88 in 2005, 2004 and 2003, respectively) Balance at end of year ACCUMULATED OTHER COMPREHENSIVE INCOME (LOSS) Balance at beginning of year Net foreign currency translation adjustment Net gain (loss) on derivatives Net change in unrealized gain on available-for-sale securities Net change in minimum pension liability Net other comprehensive income adjustments Balance at end of year TREASURY STOCK Balance at beginning of year Purchases of treasury stock Balance at end of year TOTAL SHAREOWNERS EQUITY COMPREHENSIVE INCOME Net income Net other comprehensive income adjustments TOTAL COMPREHENSIVE INCOME 1 2,409 7 (47) 2,369 $ 875 2 877 4,928 229 11 324 5,492 29,105 4,872 (2,678) 31,299 (1,348) (396) 57 13 5 (321) (1,669) (17,625) (2,019) (19,644) $ 16,355 $ $ $ 2,442 5 (38) 2,409 874 1 875 4,395 175 13 345 4,928 26,687 4,847 (2,429) 29,105 (1,995) 665 (3) 39 (54) 647 (1,348) (15,871) (1,754) (17,625) $ 15,935 4,847 647 5,494 $ 2,471 4 (33) 2,442 873 1 874 3,857 105 11 422 4,395 24,506 4,347 (2,166) 26,687 (3,047) 921 (33) 40 124 1,052 (1,995) (14,389) (1,482) (15,871) $ 14,090 $ $ 4,347 1,052 5,399 4,872 $ (321) 4,551 $ Common stock purchased from employees 
exercising stock options numbered 0.5 shares, 0.4 shares and 0.4 shares for the years ended December 31, 2005, 2004 and 2003, respectively. Refer to Notes to Consolidated Financial Statements.

Appendix C
Time Value of Money

STUDY OBJECTIVES
After studying this appendix, you should be able to:
8 Use a financial calculator to solve time value of money problems.

Would you rather receive $1,000 today or a year from now? You should prefer to receive the $1,000 today because you can invest the $1,000 and earn interest on it. As a result, you will have more than $1,000 a year from now. What this example illustrates is the concept of the time value of money. Everyone prefers to receive money today rather than in the future because of the interest factor.

THE NATURE OF INTEREST

Interest is payment for the use of another person's money. It is the difference between the amount borrowed or invested (called the principal) and the amount repaid or collected. The amount of interest to be paid or collected is usually stated as a rate over a specific period of time. The rate of interest is generally stated as an annual rate.

The amount of interest involved in any financing transaction is based on three elements:
1. Principal (p): The original amount borrowed or invested.
2. Interest Rate (i): An annual percentage of the principal.
3. Time (n): The number of years that the principal is borrowed or invested.

STUDY OBJECTIVE 1
Distinguish between simple and compound interest.

Simple Interest

Simple interest is computed on the principal amount only. It is the return on the principal for one period. Simple interest is usually expressed as shown in Illustration C-1 on the next page.
C1 C2 Appendix C Time Value of Money

Illustration C-1 Interest computation:
Interest = Principal (p) x Rate (i) x Time (n)

For example, if you borrowed $5,000 for 2 years at a simple interest rate of 12% annually, you would pay $1,200 in total interest, computed as follows:

Interest = p x i x n = $5,000 x .12 x 2 = $1,200

Compound Interest

Compound interest is computed on principal and on any interest earned that has not been paid or withdrawn. It is the return on the principal for two or more time periods. Compounding computes interest not only on the principal but also on the interest earned to date on that principal, assuming the interest is left on deposit.

To illustrate the difference between simple and compound interest, assume that you deposit $1,000 in Bank Two, where it will earn simple interest of 9% per year, and you deposit another $1,000 in Citizens Bank, where it will earn compound interest of 9% per year compounded annually. Also assume that in both cases you will not withdraw any interest until three years from the date of deposit. Illustration C-2 shows the computation of interest you will receive and the accumulated year-end balances.

Illustration C-2 Simple versus compound interest

Bank Two (simple interest)
         Calculation         Simple Interest    Accumulated Year-End Balance
Year 1   $1,000.00 x 9%         $ 90.00              $1,090.00
Year 2   $1,000.00 x 9%           90.00              $1,180.00
Year 3   $1,000.00 x 9%           90.00              $1,270.00
                                $270.00

Citizens Bank (compound interest)
         Calculation         Compound Interest  Accumulated Year-End Balance
Year 1   $1,000.00 x 9%         $ 90.00              $1,090.00
Year 2   $1,090.00 x 9%           98.10              $1,188.10
Year 3   $1,188.10 x 9%          106.93              $1,295.03
                                $295.03

Difference: $25.03

Note in Illustration C-2 that simple interest uses the initial principal of $1,000 to compute the interest in all three years. Compound interest uses the accumulated balance (principal plus interest to date) at each year-end to compute interest in the succeeding year, which explains why your compound interest account is larger.
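The Bank Two versus Citizens Bank comparison in Illustration C-2 can be reproduced with a few lines of Python:

```python
# Simple vs. compound interest for $1,000 at 9% over 3 years
# (the Bank Two / Citizens Bank comparison).

principal, rate, years = 1000.00, 0.09, 3

# Simple interest: computed on the original principal only.
simple_interest = principal * rate * years        # 270.00
simple_balance = principal + simple_interest      # 1,270.00

# Compound interest: each year's interest is added to the balance
# before the next year's interest is computed.
compound_balance = principal
for _ in range(years):
    compound_balance += compound_balance * rate   # 1,090.00 -> 1,188.10 -> 1,295.03

print(round(simple_balance, 2))                     # 1270.0
print(round(compound_balance, 2))                   # 1295.03
print(round(compound_balance - simple_balance, 2))  # 25.03
```

The final line reproduces the $25.03 difference shown in the illustration.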
Obviously, if you had a choice between investing your money at simple interest or at compound interest, you would choose compound interest, all other things, especially risk, being equal. In the example, compounding provides $25.03 of additional interest income. For practical purposes, compounding assumes that unpaid interest earned becomes a part of the principal, and the accumulated balance at the end of each year becomes the new principal on which interest is earned during the next year. Illustration C-2 indicates that you should invest your money at the bank that compounds interest annually. Most business situations use compound interest. Simple interest is generally applicable only to short-term situations of one year or less.

SECTION 1 Future Value Concepts

STUDY OBJECTIVE 2
Solve for future value of a single amount.

FUTURE VALUE OF A SINGLE AMOUNT

The future value of a single amount is the value at a future date of a given amount invested assuming compound interest. For example, in Illustration C-2, $1,295.03 is the future value of the $1,000 at the end of three years. The $1,295.03 could be determined more easily by using the following formula.

Illustration C-3 Formula for future value:
FV = p x (1 + i)^n

where:
FV = future value of a single amount
p  = principal (or present value)
i  = interest rate for one period
n  = number of periods

The $1,295.03 is computed as follows.

FV = p x (1 + i)^n
   = $1,000 x (1 + .09)^3
   = $1,000 x 1.29503
   = $1,295.03

The 1.29503 is computed by multiplying (1.09 x 1.09 x 1.09). The amounts in this example can be depicted in the following time diagram.

Illustration C-4 Time diagram: at i = 9%, a present value (p) of $1,000 at period 0 grows over n = 3 years to a future value of $1,295.03 at period 3.

Another method that can be used to compute the future value of a single amount involves the use of a compound interest table. This table shows the future value of 1 for n periods. Table 1, shown below, is such a table.
TABLE 1 Future Value of 1
(each row lists the future value of 1 factors for periods n = 1 through 20, in order, at the rate shown)

4%:  1.04000 1.08160 1.12486 1.16986 1.21665 1.26532 1.31593 1.36857 1.42331 1.48024 1.53945 1.60103 1.66507 1.73168 1.80094 1.87298 1.94790 2.02582 2.10685 2.19112
5%:  1.05000 1.10250 1.15763 1.21551 1.27628 1.34010 1.40710 1.47746 1.55133 1.62889 1.71034 1.79586 1.88565 1.97993 2.07893 2.18287 2.29202 2.40662 2.52695 2.65330
6%:  1.06000 1.12360 1.19102 1.26248 1.33823 1.41852 1.50363 1.59385 1.68948 1.79085 1.89830 2.01220 2.13293 2.26090 2.39656 2.54035 2.69277 2.85434 3.02560 3.20714
8%:  1.08000 1.16640 1.25971 1.36049 1.46933 1.58687 1.71382 1.85093 1.99900 2.15892 2.33164 2.51817 2.71962 2.93719 3.17217 3.42594 3.70002 3.99602 4.31570 4.66096
9%:  1.09000 1.18810 1.29503 1.41158 1.53862 1.67710 1.82804 1.99256 2.17189 2.36736 2.58043 2.81267 3.06581 3.34173 3.64248 3.97031 4.32763 4.71712 5.14166 5.60441
10%: 1.10000 1.21000 1.33100 1.46410 1.61051 1.77156 1.94872 2.14359 2.35795 2.59374 2.85312 3.13843 3.45227 3.79750 4.17725 4.59497 5.05447 5.55992 6.11591 6.72750
11%: 1.11000 1.23210 1.36763 1.51807 1.68506 1.87041 2.07616 2.30454 2.55803 2.83942 3.15176 3.49845 3.88328 4.31044 4.78459 5.31089 5.89509 6.54355 7.26334 8.06231
12%: 1.12000 1.25440 1.40493 1.57352 1.76234 1.97382 2.21068 2.47596 2.77308 3.10585 3.47855 3.89598 4.36349 4.88711 5.47357 6.13039 6.86604 7.68997 8.61276 9.64629
15%: 1.15000 1.32250 1.52088 1.74901 2.01136 2.31306 2.66002 3.05902 3.51788 4.04556 4.65239 5.35025 6.15279 7.07571 8.13706 9.35762 10.76126 12.37545 14.23177 16.36654

In Table 1, n is the number of compounding periods, the percentages are the periodic interest rates, and the five-digit decimal numbers are the future value of 1 factors. In using Table 1, the principal amount is multiplied by the future value factor for the specified number of periods and interest rate. For example, the future value factor for two periods at 9% is 1.18810.
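The Table 1 factors are simply (1 + i)^n, so the formula from Illustration C-3 reproduces them directly. This sketch checks the 9%/3-period factor and the $20,000 college-fund demonstration problem that follows:

```python
# Future value of a single amount: FV = p * (1 + i)**n.

def fv_factor(i, n):
    """Future value of 1 factor, as tabulated in Table 1."""
    return (1 + i) ** n

# The 9%, 3-period factor and the Citizens Bank balance.
print(round(fv_factor(0.09, 3), 5))           # 1.29503
print(round(1000 * fv_factor(0.09, 3), 2))    # 1295.03

# $20,000 invested at 6% for 18 years, using the factor rounded to
# five decimals as the textbook table does.
factor = round(fv_factor(0.06, 18), 5)        # 2.85434
print(round(20000 * factor, 2))               # 57086.8
```

Rounding the factor to five decimals before multiplying matches the table-based answer of $57,086.80; using the unrounded factor gives $57,086.78, a two-cent difference.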
Multiplying this factor by $1,000 equals $1,188.10, which is the accumulated balance at the end of year 2 in the Citizens Bank example in Illustration C-2. The $1,295.03 accumulated balance at the end of the third year can be calculated from Table 1 by multiplying the future value factor for three periods (1.29503) by the $1,000. The following demonstration problem illustrates how to use Table 1.

Illustration C-5  Demonstration Problem: Using Table 1 for FV of 1
John and Mary Rich invested $20,000 in a savings account paying 6% interest at the time their son, Mike, was born. The money is to be used by Mike for his college education. On his 18th birthday, Mike withdraws the money from his savings account. How much did Mike withdraw from his account? (Time diagram: $20,000 invested now at i = 6% for n = 18 years; future value = ?)
Answer: The future value factor from Table 1 is 2.85434 (18 periods at 6%). The future value of $20,000 earning 6% per year for 18 years is $57,086.80 ($20,000 × 2.85434).

FUTURE VALUE OF AN ANNUITY
(Study Objective 3: Solve for future value of an annuity.)

The preceding discussion involved the accumulation of only a single principal sum. Individuals and businesses frequently encounter situations in which a series of equal dollar amounts are to be paid or received periodically, such as loans or lease (rental) contracts. Such payments or receipts of equal dollar amounts are referred to as annuities. The future value of an annuity is the sum of all the payments or receipts plus the accumulated compound interest on them. In computing the future value of an annuity, it is necessary to know the interest rate, the number of compounding periods, and the amount of the periodic payments or receipts. To illustrate the computation of the future value of an annuity, assume that you invest $2,000 at the end of each year for three years at 5% interest compounded annually. This situation is depicted in the time diagram in Illustration C-6: $2,000 is invested at the end of each of years 1, 2, and 3 at i = 5%; future value = ?
As can be seen in Illustration C-6, the $2,000 invested at the end of year 1 will earn interest for two years (years 2 and 3), and the $2,000 invested at the end of year 2 will earn interest for one year (year 3). However, the last $2,000 investment (made at the end of year 3) will not earn any interest. The future value of these periodic payments could be computed using the future value factors from Table 1, as shown in Illustration C-7.

Illustration C-7  Future value of periodic payments

  Year Invested   Amount Invested   FV of 1 Factor at 5%   Future Value
  1               $2,000            1.10250                $2,205
  2               $2,000            1.05000                 2,100
  3               $2,000            1.00000                 2,000
                                    3.15250                $6,305

The first $2,000 investment is multiplied by the future value factor for two periods (1.10250) because two years' interest will accumulate on it (in years 2 and 3). The second $2,000 investment will earn only one year's interest (in year 3) and therefore is multiplied by the future value factor for one year (1.05000). The final $2,000 investment is made at the end of the third year and will not earn any interest. Consequently, the future value of the last $2,000 invested is only $2,000, since it does not accumulate any interest. This method of calculation is required when the periodic payments or receipts are not equal in each period. However, when the periodic payments (receipts) are the same in each period, the future value can be computed by using a future value of an annuity of 1 table. Table 2, shown below, is such a table.
TABLE 2  Future Value of an Annuity of 1
(rows are annual rates; columns are periods n = 1 through n = 20)

 4%:  1.00000  2.04000  3.12160  4.24646  5.41632  6.63298  7.89829  9.21423 10.58280 12.00611 13.48635 15.02581 16.62684 18.29191 20.02359 21.82453 23.69751 25.64541 27.67123 29.77808
 5%:  1.00000  2.05000  3.15250  4.31013  5.52563  6.80191  8.14201  9.54911 11.02656 12.57789 14.20679 15.91713 17.71298 19.59863 21.57856 23.65749 25.84037 28.13238 30.53900 33.06595
 6%:  1.00000  2.06000  3.18360  4.37462  5.63709  6.97532  8.39384  9.89747 11.49132 13.18079 14.97164 16.86994 18.88214 21.01507 23.27597 25.67253 28.21288 30.90565 33.75999 36.78559
 8%:  1.00000  2.08000  3.24640  4.50611  5.86660  7.33592  8.92280 10.63663 12.48756 14.48656 16.64549 18.97713 21.49530 24.21492 27.15211 30.32428 33.75023 37.45024 41.44626 45.76196
 9%:  1.00000  2.09000  3.27810  4.57313  5.98471  7.52334  9.20044 11.02847 13.02104 15.19293 17.56029 20.14072 22.95339 26.01919 29.36092 33.00340 36.97351 41.30134 46.01846 51.16012
10%:  1.00000  2.10000  3.31000  4.64100  6.10510  7.71561  9.48717 11.43589 13.57948 15.93743 18.53117 21.38428 24.52271 27.97498 31.77248 35.94973 40.54470 45.59917 51.15909 57.27500
11%:  1.00000  2.11000  3.34210  4.70973  6.22780  7.91286  9.78327 11.85943 14.16397 16.72201 19.56143 22.71319 26.21164 30.09492 34.40536 39.18995 44.50084 50.39593 56.93949 64.20283
12%:  1.00000  2.12000  3.37440  4.77933  6.35285  8.11519 10.08901 12.29969 14.77566 17.54874 20.65458 24.13313 28.02911 32.39260 37.27972 42.75328 48.88367 55.74972 63.43968 72.05244
15%:  1.00000  2.15000  3.47250  4.99338  6.74238  8.75374 11.06680 13.72682 16.78584 20.30372 24.34928 29.00167 34.35192 40.50471 47.58041 55.71747 65.07509 75.83636 88.21181 102.44358

Table 2 shows the future value of 1 to be received periodically for a given number of periods. You can see from Table 2 that the future value of an annuity of 1 factor for three periods at 5% is 3.15250.
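Since each deposit compounds for a different number of periods, the sum of the individual factors collapses to the closed form ((1 + i)^n − 1) ÷ i, which is what Table 2 tabulates. A sketch (function name is my own, assuming end-of-period deposits):

```python
def fv_annuity(payment, rate, periods):
    """Future value of an ordinary annuity: payment * ((1 + i)**n - 1) / i."""
    return payment * ((1 + rate) ** periods - 1) / rate

# $2,000 invested at the end of each year for 3 years at 5%
print(round(fv_annuity(2000, 0.05, 3), 2))   # 6305.0
# $25,000 at the end of each year for 4 years at 6%
print(round(fv_annuity(25000, 0.06, 4), 2))  # about 109365.4 (a 5-decimal table factor gives $109,365.50)
```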
This future value factor of 3.15250 is the total of the three individual future value factors shown in Illustration C-7. Multiplying this amount by the annual investment of $2,000 produces a future value of $6,305. The demonstration problem in Illustration C-8 illustrates how to use Table 2.

Illustration C-8  Demonstration Problem: Using Table 2 for FV of an annuity of 1
Henning Printing Company knows that in four years it must replace one of its existing printing presses with a new one. To ensure that some funds are available to replace the machine in 4 years, the company is depositing $25,000 in a savings account at the end of each of the next four years (4 deposits in total). The savings account will earn 6% interest compounded annually. How much will be in the savings account at the end of 4 years when the new printing press is to be purchased? (Time diagram: $25,000 deposited at the end of each of years 1 through 4 at i = 6%; future value = ?)
Answer: The future value factor from Table 2 is 4.37462 (4 periods at 6%). The future value of $25,000 invested at the end of each year for 4 years at 6% interest is $109,365.50 ($25,000 × 4.37462).

SECTION 2  Present Value Concepts

PRESENT VALUE VARIABLES
(Study Objective 4: Identify the variables fundamental to solving present value problems.)

The present value is the value now of a given amount to be paid or received in the future, assuming compound interest. The present value is based on three variables: (1) the dollar amount to be received (future amount), (2) the length of time until the amount is received (number of periods), and (3) the interest rate (the discount rate). The process of determining the present value is referred to as discounting the future amount. In this textbook, we use present value computations in measuring several items. For example, Chapter 11 computed the present value of the principal and interest payments to determine the market price of a bond.
In addition, determining the amount to be reported for notes payable involves present value computations.

PRESENT VALUE OF A SINGLE AMOUNT
(Study Objective 5: Solve for present value of a single amount.)

To illustrate present value, assume that you want to invest a sum of money that will yield $1,000 at the end of one year. What amount would you need to invest today to have $1,000 one year from now? Illustration C-9 shows the formula for calculating present value:

PV = FV ÷ (1 + i)^n

Thus, if you want a 10% rate of return, you would compute the present value of $1,000 for one year as follows:

PV = FV ÷ (1 + i)^n = $1,000 ÷ (1 + .10)^1 = $1,000 ÷ 1.10 = $909.09

We know the future amount ($1,000), the discount rate (10%), and the number of periods (one). These variables are depicted in the time diagram of Illustration C-10: $1,000 due in n = 1 year, discounted at i = 10%, has a present value of $909.09.

If you receive the single amount of $1,000 in two years, discounted at 10% [PV = $1,000 ÷ (1 + .10)^2], the present value of your $1,000 is $826.45 ($1,000 ÷ 1.21), as depicted in Illustration C-11: $1,000 due in n = 2 years, discounted at i = 10%, has a present value of $826.45.

You also could find the present value of your amount through tables that show the present value of 1 for n periods. In Table 3, shown below, n is the number of discounting periods involved, the percentages are the periodic interest rates or discount rates, and the five-digit decimal numbers are called the present value of 1 factors.
When using Table 3 to determine present value, you multiply the future value by the present value factor specified at the intersection of the number of periods and the discount rate.

TABLE 3  Present Value of 1
(rows are annual rates; columns are periods n = 1 through n = 20)

 4%:  .96154  .92456  .88900  .85480  .82193  .79031  .75992  .73069  .70259  .67556  .64958  .62460  .60057  .57748  .55526  .53391  .51337  .49363  .47464  .45639
 5%:  .95238  .90703  .86384  .82270  .78353  .74622  .71068  .67684  .64461  .61391  .58468  .55684  .53032  .50507  .48102  .45811  .43630  .41552  .39573  .37689
 6%:  .94340  .89000  .83962  .79209  .74726  .70496  .66506  .62741  .59190  .55839  .52679  .49697  .46884  .44230  .41727  .39365  .37136  .35034  .33051  .31180
 8%:  .92593  .85734  .79383  .73503  .68058  .63017  .58349  .54027  .50025  .46319  .42888  .39711  .36770  .34046  .31524  .29189  .27027  .25025  .23171  .21455
 9%:  .91743  .84168  .77218  .70843  .64993  .59627  .54703  .50187  .46043  .42241  .38753  .35554  .32618  .29925  .27454  .25187  .23107  .21199  .19449  .17843
10%:  .90909  .82645  .75132  .68301  .62092  .56447  .51316  .46651  .42410  .38554  .35049  .31863  .28966  .26333  .23939  .21763  .19785  .17986  .16351  .14864
11%:  .90090  .81162  .73119  .65873  .59345  .53464  .48166  .43393  .39092  .35218  .31728  .28584  .25751  .23199  .20900  .18829  .16963  .15282  .13768  .12403
12%:  .89286  .79719  .71178  .63552  .56743  .50663  .45235  .40388  .36061  .32197  .28748  .25668  .22917  .20462  .18270  .16312  .14564  .13004  .11611  .10367
15%:  .86957  .75614  .65752  .57175  .49718  .43233  .37594  .32690  .28426  .24719  .21494  .18691  .16253  .14133  .12289  .10687  .09293  .08081  .07027  .06110

For example, the present value factor for one period at a discount rate of 10% is .90909, which equals the $909.09 ($1,000 × .90909) computed in Illustration C-10. For two periods at a discount rate of 10%, the present value factor is .82645, which equals the $826.45 ($1,000 × .82645) computed previously. Note that a higher discount rate produces a smaller present value.
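The discounting in Table 3 is just the future value formula inverted. A minimal sketch (names are mine, not from the text):

```python
def present_value(future_amount, rate, periods):
    """Present value of a single amount: PV = FV / (1 + i) ** n."""
    return future_amount / (1 + rate) ** periods

print(round(present_value(1000, 0.10, 1), 2))  # 909.09
print(round(present_value(1000, 0.10, 2), 2))  # 826.45
# A higher discount rate produces a smaller present value:
print(round(present_value(1000, 0.15, 1), 2))  # 869.57
```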
For example, using a 15% discount rate, the present value of $1,000 due one year from now is $869.57, versus $909.09 at 10%. Also note that the further removed from the present the future value is, the smaller the present value. For example, using the same discount rate of 10%, the present value of $1,000 due in five years is $620.92, versus the present value of $1,000 due in one year, which is $909.09. The two demonstration problems below (Illustrations C-12 and C-13) illustrate how to use Table 3.

Illustration C-12  Demonstration Problem: Using Table 3 for PV of 1
Suppose you have a winning lottery ticket and the state gives you the option of taking $10,000 three years from now or taking the present value of $10,000 now. The state uses an 8% rate in discounting. How much will you receive if you accept your winnings now? (Time diagram: $10,000 due in n = 3 years, discounted at i = 8%; PV = ?)
Answer: The present value factor from Table 3 is .79383 (3 periods at 8%). The present value of $10,000 to be received in 3 years discounted at 8% is $7,938.30 ($10,000 × .79383).

Illustration C-13  Demonstration Problem: Using Table 3 for PV of 1
Determine the amount you must deposit now in your SUPER savings account, paying 9% interest, in order to accumulate $5,000 for a down payment 4 years from now on a new Chevy Tahoe. (Time diagram: $5,000 needed in n = 4 years, discounted at i = 9%; PV = ?)
Answer: The present value factor from Table 3 is .70843 (4 periods at 9%). The present value of $5,000 to be received in 4 years discounted at 9% is $3,542.15 ($5,000 × .70843).

PRESENT VALUE OF AN ANNUITY
(Study Objective 6: Solve for present value of an annuity.)

The preceding discussion involved the discounting of only a single future amount. Businesses and individuals frequently engage in transactions in which a series of equal dollar amounts are to be received or paid periodically.
Examples of a series of periodic receipts or payments are loan agreements, installment sales, mortgage notes, lease (rental) contracts, and pension obligations. These periodic receipts or payments are annuities. The present value of an annuity is the value now of a series of future receipts or payments, discounted assuming compound interest. In computing the present value of an annuity, you need to know: (1) the discount rate, (2) the number of discount periods, and (3) the amount of the periodic receipts or payments. To illustrate how to compute the present value of an annuity, assume that you will receive $1,000 cash annually for three years at a time when the discount rate is 10%. Illustration C-14 depicts this situation in a time diagram ($1,000 received at the end of each of years 1, 2, and 3, discounted at i = 10%; PV = ?), and Illustration C-15 shows the computation of its present value.

Illustration C-15  Present value of a series of future amounts

  Future Amount               PV of 1 Factor at 10%   Present Value
  $1,000 (one year away)       .90909                 $  909.09
   1,000 (two years away)      .82645                    826.45
   1,000 (three years away)    .75132                    751.32
                              2.48686                 $2,486.86

This method of calculation is required when the periodic cash flows are not uniform in each period. However, when the future receipts are the same in each period, there are two other ways to compute present value. First, you can multiply the annual cash flow by the sum of the three present value factors. In the previous example, $1,000 × 2.48686 equals $2,486.86. The second method is to use annuity tables. As illustrated in Table 4 below, these tables show the present value of 1 to be received periodically for a given number of periods.
TABLE 4  Present Value of an Annuity of 1
(rows are annual rates; columns are periods n = 1 through n = 20)

 4%:  .96154  1.88609  2.77509  3.62990  4.45182  5.24214  6.00205  6.73274  7.43533  8.11090  8.76048  9.38507  9.98565 10.56312 11.11839 11.65230 12.16567 12.65930 13.13394 13.59033
 5%:  .95238  1.85941  2.72325  3.54595  4.32948  5.07569  5.78637  6.46321  7.10782  7.72173  8.30641  8.86325  9.39357  9.89864 10.37966 10.83777 11.27407 11.68959 12.08532 12.46221
 6%:  .94340  1.83339  2.67301  3.46511  4.21236  4.91732  5.58238  6.20979  6.80169  7.36009  7.88687  8.38384  8.85268  9.29498  9.71225 10.10590 10.47726 10.82760 11.15812 11.46992
 8%:  .92593  1.78326  2.57710  3.31213  3.99271  4.62288  5.20637  5.74664  6.24689  6.71008  7.13896  7.53608  7.90378  8.24424  8.55948  8.85137  9.12164  9.37189  9.60360  9.81815
 9%:  .91743  1.75911  2.53130  3.23972  3.88965  4.48592  5.03295  5.53482  5.99525  6.41766  6.80519  7.16073  7.48690  7.78615  8.06069  8.31256  8.54363  8.75563  8.95012  9.12855
10%:  .90909  1.73554  2.48685  3.16986  3.79079  4.35526  4.86842  5.33493  5.75902  6.14457  6.49506  6.81369  7.10336  7.36669  7.60608  7.82371  8.02155  8.20141  8.36492  8.51356
11%:  .90090  1.71252  2.44371  3.10245  3.69590  4.23054  4.71220  5.14612  5.53705  5.88923  6.20652  6.49236  6.74987  6.98187  7.19087  7.37916  7.54879  7.70162  7.83929  7.96333
12%:  .89286  1.69005  2.40183  3.03735  3.60478  4.11141  4.56376  4.96764  5.32825  5.65022  5.93770  6.19437  6.42355  6.62817  6.81086  6.97399  7.11963  7.24967  7.36578  7.46944
15%:  .86957  1.62571  2.28323  2.85498  3.35216  3.78448  4.16042  4.48732  4.77158  5.01877  5.23371  5.42062  5.58315  5.72448  5.84737  5.95424  6.04716  6.12797  6.19823  6.25933

Table 4 shows that the present value of an annuity of 1 factor for three periods at 10% is 2.48685.¹ (This present value factor is the total of the three individual present value factors, as shown in Illustration C-15.) Applying this amount to the annual cash flow of $1,000 produces a present value of $2,486.85.
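The annuity factors in Table 4 follow the closed form (1 − (1 + i)^−n) ÷ i, so the $2,486.85 result can be checked in code. A sketch (the function name is mine, not from the text):

```python
def pv_annuity(payment, rate, periods):
    """Present value of an ordinary annuity: payment * (1 - (1 + i)**-n) / i."""
    return payment * (1 - (1 + rate) ** -periods) / rate

# $1,000 received at the end of each year for 3 years, discounted at 10%
print(round(pv_annuity(1000, 0.10, 3), 2))  # 2486.85
```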
The following demonstration problem (Illustration C-16) illustrates how to use Table 4.

Illustration C-16  Demonstration Problem: Using Table 4 for PV of an annuity of 1
Kildare Company has just signed a capitalizable lease contract for equipment that requires rental payments of $6,000 each, to be paid at the end of each of the next 5 years. The appropriate discount rate is 12%. What is the present value of the rental payments, that is, the amount used to capitalize the leased equipment? (Time diagram: $6,000 paid at the end of each of years 1 through 5, discounted at i = 12%; PV = ?)
Answer: The present value factor from Table 4 is 3.60478 (5 periods at 12%). The present value of 5 payments of $6,000 each discounted at 12% is $21,628.68 ($6,000 × 3.60478).

TIME PERIODS AND DISCOUNTING

In the preceding calculations, the discounting was done on an annual basis using an annual interest rate. Discounting may also be done over shorter periods of time, such as monthly, quarterly, or semiannually. When the time frame is less than one year, you need to convert the annual interest rate to the applicable time frame. Assume, for example, that the investor in Illustration C-14 received $500 semiannually for three years instead of $1,000 annually. In this case, the number of periods becomes six (3 × 2), the discount rate is 5% (10% ÷ 2), the present value factor from Table 4 is 5.07569, and the present value of the future cash flows is $2,537.85 (5.07569 × $500). This amount is slightly higher than the $2,486.86 computed in Illustration C-15 because interest is paid twice during the same year; therefore interest is earned on the first half year's interest.

COMPUTING THE PRESENT VALUE OF A LONG-TERM NOTE OR BOND
(Study Objective 7: Compute the present value of notes and bonds.)

The present value (or market price) of a long-term note or bond is a function of three variables: (1) the payment amounts, (2) the length of time until the amounts are paid, and (3) the discount rate.
Our illustration uses a five-year bond issue.

¹ The difference of .00001 between 2.48686 and 2.48685 is due to rounding.

The first variable, the dollars to be paid, is made up of two elements: (1) a series of interest payments (an annuity), and (2) the principal amount (a single sum). To compute the present value of the bond, we must discount both the interest payments and the principal amount, which requires two different computations. The time diagrams for a bond due in five years are shown in Illustration C-17: one diagram discounts the single principal amount received at the end of year 5, and the other discounts the interest payment received at the end of each of years 1 through 5, both at interest rate i.

When the investor's market interest rate is equal to the bond's contractual interest rate, the present value of the bonds will equal the face value of the bonds. To illustrate, assume a bond issue of 10%, five-year bonds with a face value of $100,000, with interest payable semiannually on January 1 and July 1. If the discount rate is the same as the contractual rate, the bonds will sell at face value. In this case, the investor will receive the following: (1) $100,000 at maturity, and (2) a series of ten $5,000 interest payments [($100,000 × 10%) ÷ 2] over the term of the bonds. The length of time is expressed in terms of interest periods (in this case, 10), with a discount rate per interest period of 5%. Illustration C-18 depicts the variables involved in this discounting situation: the $100,000 principal due after n = 10 semiannual periods, and the ten $5,000 interest payments, all discounted at i = 5% per period.
Illustration C-19 shows the computation of the present value of these bonds.

Illustration C-19  Present value of principal and interest (face value)
10% Contractual Rate, 10% Discount Rate
  Present value of principal to be received at maturity:
    $100,000 × .61391 (PV of 1 due in 10 periods at 5%, Table 3)                   $ 61,391
  Present value of interest to be received periodically over the term of the bonds:
    $5,000 × 7.72173 (PV of 1 due periodically for 10 periods at 5%, Table 4)        38,609*
  Present value of bonds                                                            $100,000
  *Rounded

Now assume that the investor's required rate of return is 12%, not 10%. The future amounts are again $100,000 and $5,000, respectively, but now a discount rate of 6% (12% ÷ 2) must be used. The present value of the bonds is $92,639, as computed in Illustration C-20.

Illustration C-20  Present value of principal and interest (discount)
10% Contractual Rate, 12% Discount Rate
  Present value of principal to be received at maturity:
    $100,000 × .55839 (Table 3)                                                     $ 55,839
  Present value of interest to be received periodically over the term of the bonds:
    $5,000 × 7.36009 (Table 4)                                                        36,800
  Present value of bonds                                                            $ 92,639

Conversely, if the discount rate is 8% and the contractual rate is 10%, the present value of the bonds is $108,111, computed as shown in Illustration C-21.

Illustration C-21  Present value of principal and interest (premium)
10% Contractual Rate, 8% Discount Rate
  Present value of principal to be received at maturity:
    $100,000 × .67556 (Table 3)                                                     $ 67,556
  Present value of interest to be received periodically over the term of the bonds:
    $5,000 × 8.11090 (Table 4)                                                        40,555
  Present value of bonds                                                            $108,111

The above discussion relies on present value tables in solving present value problems. Many people use spreadsheets such as Excel or financial calculators (some even on websites) to compute present values without the use of tables.
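The two-part computation in Illustrations C-19 through C-21 (principal as a single sum plus interest as an annuity) can be combined into one pricing function. Below is a sketch under the text's assumption of semiannual interest; the function and parameter names are my own:

```python
def bond_price(face, contractual_rate, discount_rate, years, periods_per_year=2):
    """Price a bond as PV of principal (single sum) plus PV of interest (annuity)."""
    n = years * periods_per_year                           # number of interest periods
    i = discount_rate / periods_per_year                   # discount rate per period
    interest = face * contractual_rate / periods_per_year  # periodic interest payment
    pv_principal = face / (1 + i) ** n
    pv_interest = interest * (1 - (1 + i) ** -n) / i
    return pv_principal + pv_interest

# 10%, five-year, $100,000 bonds with semiannual interest:
print(round(bond_price(100000, 0.10, 0.10, 5)))  # 100000 (sells at face value)
print(round(bond_price(100000, 0.10, 0.12, 5)))  # 92640  (discount; the text's $92,639 uses 5-decimal factors)
print(round(bond_price(100000, 0.10, 0.08, 5)))  # 108111 (premium)
```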
Many calculators, especially financial calculators, have present value (PV) functions that allow you to calculate present values by merely inputting the proper amount, discount rate, and periods, and pressing the PV key. The next section illustrates how to use a financial calculator in various business situations.

SECTION 3  Using Financial Calculators
(Study Objective 8: Use a financial calculator to solve time value of money problems.)

Business professionals, once they have mastered the underlying concepts in Sections 1 and 2, often use a financial (business) calculator to solve time value of money problems. In many cases, they must use calculators if interest rates or time periods do not correspond with the information provided in the compound interest tables. To use financial calculators, you enter the time value of money variables into the calculator. Illustration C-22 shows the five most common keys used to solve time value of money problems:²

  N   = number of periods
  I   = interest rate per period (some calculators use I/YR or i)
  PV  = present value (occurs at the beginning of the first period)
  PMT = payment (all payments are equal, and none are skipped)
  FV  = future value (occurs at the end of the last period)

In solving time value of money problems in this appendix, you will generally be given three of the four needed variables and will have to solve for the remaining variable. The fifth key (the key not used) is given a value of zero to ensure that this variable is not used in the computation.

PRESENT VALUE OF A SINGLE SUM

To illustrate how to solve a present value problem using a financial calculator, assume that you want to know the present value of $84,253 to be received in five years, discounted at 11% compounded annually. Illustration C-23 pictures this problem as calculator inputs.
Answer: Enter the inputs N = 5, I = 11, PMT = 0, and FV = 84,253. You then press PV for the answer: $50,000. As indicated, the PMT key was given a value of zero because a series of payments did not occur in this problem.

² On many calculators, these keys are actual buttons on the face of the calculator; on others they appear on the display after the user accesses a present value menu.

Plus and Minus. The use of plus and minus signs in time value of money problems with a financial calculator can be confusing. Most financial calculators are programmed so that the positive and negative cash flows in any problem offset each other. In the present value problem, we identified the $84,253 future value as a positive amount (an inflow); the answer, $50,000, was shown as a negative amount, reflecting a cash outflow. If the 84,253 were entered as a negative, then the final answer would have been reported as a positive 50,000. Hopefully, the sign convention will not cause confusion. If you understand what is required in a problem, you should be able to interpret a positive or negative amount in determining the solution to a problem.

Compounding Periods. In the problem above, we assumed that compounding occurs once a year. Some financial calculators have a default setting that assumes compounding occurs 12 times a year. You must determine what default period has been programmed into your calculator and change it as necessary to arrive at the proper compounding period.

Rounding. Most financial calculators store and calculate using 12 decimal places. As a result, because compound interest tables generally have factors only up to 5 decimal places, a slight difference in the final answer can result. In most time value of money problems, the final answer will not include more than two decimal places.
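In software terms, the calculator's PV key combines the two discounting formulas from Section 2: it discounts the payment stream as an annuity and the FV amount as a single sum. A rough equivalent follows (naming is my own; sign conventions and the i = 0 case are ignored for brevity):

```python
def calc_pv(n, i_pct, pmt, fv):
    """Approximate a financial calculator's PV key (ordinary annuity, i > 0)."""
    i = i_pct / 100
    return pmt * (1 - (1 + i) ** -n) / i + fv / (1 + i) ** n

# Illustration C-23: N = 5, I = 11, PMT = 0, FV = 84,253
print(round(calc_pv(5, 11, 0, 84253)))  # 50000 (a calculator displays -50,000 under its sign convention)
```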
PRESENT VALUE OF AN ANNUITY

To illustrate how to solve a present value of an annuity problem using a financial calculator, assume that you are asked to determine the present value of rental receipts of $6,000 each, to be received at the end of each of the next five years, when discounted at 12%, as pictured in Illustration C-24. In this case, you enter N = 5, I = 12, PMT = 6,000, FV = 0, and then press PV to arrive at the answer of $21,628.66.

USEFUL APPLICATIONS OF THE FINANCIAL CALCULATOR

With a financial calculator you can solve for any interest rate or for any number of periods in a time value of money problem. Here are some examples of these applications.

Auto Loan. Assume you are financing a car with a three-year loan. The loan has a 9.5% nominal annual interest rate, compounded monthly. The price of the car is $6,000, and you want to determine the monthly payments, assuming that the payments start one month after the purchase. This problem is pictured in Illustration C-25. To solve it, you enter N = 36 (12 × 3), I = 9.5, PV = 6,000, FV = 0, and then press PMT. You will find that the monthly payments will be $192.20. Note that the payment key is usually programmed for 12 payments per year. Thus, you must change the default (compounding period) if the payments are other than monthly.

Mortgage Loan Amount. Let's say you are evaluating financing options for a loan on a house. You decide that the maximum mortgage payment you can afford is $700 per month. The annual interest rate is 8.4%. If you get a mortgage that requires you to make monthly payments over a 15-year period, what is the maximum purchase price you can afford? Illustration C-26 depicts this problem.
Illustration C-26 Calculator solution for mortgage amount Inputs: 180 N 8.4 I ? PV 71,509.81 700 PMT 0 FV Answer: You enter N 180 (12 15 years), I 8.4, PMT 700, FV 0, and press PV. With the payment-per-year key set at 12, you find a present value of $71,509.81 the maximum house price you can afford, given that you want to keep your mortgage payments at $700. Note that by changing any of the variables, you can quickly conduct what-if analyses for different situations. SUMMARY OF STUDY OBJECTIVES 1. Distinguish between simple and compound interest. Simple interest is computed on the principal only, whereas compound interest is computed on the principal and any interest earned that has not been withdrawn. 2. Solve for future value of a single amount. Prepare a time diagram of the problem. Identify the principal amount, the number of compounding periods, and the interest rate. Using the future value of 1 table, multiply the principal amount by the future value factor specified at the intersection of the number of periods and the interest rate. 3. Solve for future value of an annuity. Prepare a time diagram of the problem. Identify the amount of the periodic payments, the number of compounding periods, and the C18 Appendix C Time Value of Money 7. Compute the present value of notes and bonds. To determine the present value of the principal amount: Multiply the principal amount (a single future amount) by the present value factor (from the present value of 1 table) intersecting at the number of periods (number of interest payments) and the discount rate. To determine the present value of the series of interest payments: Multiply the amount of the interest payment by the present value factor (from the present value of an annuity of 1 table) intersecting at the number of periods (number of interest payments) and the discount rate. Add the present value of the principal amount to the present value of the interest payments to arrive at the present value of the note or bond. 
8. Use a financial calculator to solve time value of money problems. Financial calculators can be used to solve the same and additional problems as those solved with time value of money tables. One enters into the financial calculator the amounts for all of the known elements of a time value of money problem (periods, interest rate, payments, future or present value) and solves for the unknown element. Particularly useful situations involve interest rates and compounding periods not presented in the tables. interest rate. Using the future value of an annuity of 1 table, multiply the amount of the payments by the future value factor specified at the intersection of the number of periods and the interest rate. 4. Identify the variables fundamental to solving present value problems. The following three variables are fundamental to solving present value problems: (1) the future amount, (2) the number of periods, and (3) the interest rate (the discount rate). 5. Solve for present value of a single amount. Prepare a time diagram of the problem. Identify the future amount, the number of discounting periods, and the discount (interest) rate. Using the present value of 1 table, multiply the future amount by the present value factor specified at the intersection of the number of periods and the discount rate. 6. Solve for present value of an annuity. Prepare a time diagram of the problem. Identify the future amounts (annuities), the number of discounting periods, and the discount (interest) rate. Using the present value of an annuity of 1 table, multiply the amount of the annuity by the present value factor specified at the intersection of the number of periods and the interest rate. GLOSSARY Annuity A series of equal dollar amounts to be paid or received periodically. (p. C5, C10) Compound interest The interest computed on the principal and any interest earned that has not been paid or received. (p. 
C2)
Discounting the future amount(s)  The process of determining present value. (p. C7)
Future value of a single amount  The value at a future date of a given amount invested assuming compound interest. (p. C3)
Future value of an annuity  The sum of all the payments or receipts plus the accumulated compound interest on them. (p. C5)
Interest  Payment for the use of another's money. (p. C1)
Present value  The value now of a given amount to be invested or received in the future assuming compound interest. (p. C7)
Present value of an annuity  A series of future receipts or payments discounted to their value now assuming compound interest. (p. C10)
Principal  The amount borrowed or invested. (p. C1)
Simple interest  The interest computed on the principal only. (p. C1)

BRIEF EXERCISES

Use tables to solve Brief Exercises 1-23.

BEC-1  [Compute the future value of a single amount. (SO 2)]  Russ Holub invested $4,000 at 5% annual interest, and left the money invested without withdrawing any of the interest for 10 years. At the end of the 10 years, Russ withdrew the accumulated amount of money. (a) What amount did Russ withdraw assuming the investment earns simple interest? (b) What amount did Russ withdraw assuming the investment earns interest compounded annually?

BEC-2  [Use future value tables. (SO 2, 3)]  For each of the following cases, indicate (1) to what interest rate columns and (2) to what number of periods you would refer in looking up the future value factor.

1. In Table 1 (future value of 1):
   (a) Annual rate 8%, invested 5 years, compounded annually.
   (b) Annual rate 5%, invested 3 years, compounded semiannually.

2. In Table 2 (future value of an annuity of 1):
   (a) Annual rate 5%, invested 10 years, compounded annually.
   (b) Annual rate 4%, invested 6 years, compounded semiannually.

BEC-3  [Compute the future value of a single amount. (SO 2)]  Racine Company signed a lease for an office building for a period of 10 years. Under the lease agreement, a security deposit of $10,000 is made.
The deposit will be returned at the expiration of the lease with interest compounded at 4% per year. What amount will Racine receive at the time the lease expires?

BEC-4  [Compute the future value of an annuity. (SO 3)]  Chaffee Company issued $1,000,000, 10-year bonds and agreed to make annual sinking fund deposits of $75,000. The deposits are made at the end of each year into an account paying 6% annual interest. What amount will be in the sinking fund at the end of 10 years?

BEC-5  [Compute the future value of a single amount and of an annuity. (SO 2, 3)]  Wayne and Brenda Anderson invested $5,000 in a savings account paying 5% compound annual interest when their daughter, Sue, was born. They also deposited $1,000 on each of her birthdays until she was 18 (including her 18th birthday). How much will be in the savings account on her 18th birthday (after the last deposit)?

BEC-6  [Compute the future value of a single amount. (SO 2)]  Ty Ngu borrowed $20,000 on July 1, 2002. This amount plus accrued interest at 6% compounded annually is to be repaid on July 1, 2008. How much will Ty have to repay on July 1, 2008?

BEC-7  [Use present value tables. (SO 5, 6)]  For each of the following cases, indicate (a) to what interest rate columns and (b) to what number of periods you would refer in looking up the discount rate.

1. In Table 3 (present value of 1):
   (a) Annual rate 12%, 6 years involved, discounted annually.
   (b) Annual rate 10%, 15 years involved, discounted annually.
   (c) Annual rate 8%, 10 years involved, discounted semiannually.

2. In Table 4 (present value of an annuity of 1):
   (a) Annual rate 8%, 20 years involved, 20 payments, paid annually.
   (b) Annual rate 10%, 5 years involved, 5 payments, paid annually.
   (c) Annual rate 12%, 4 years involved, 8 payments, paid semiannually.

BEC-8  [Determine present values. (SO 5, 6)]  (a) What is the present value of $20,000 due 8 periods from now, discounted at 8%? (b) What is the present value of $20,000 to be received at the end of each of 6 periods, discounted at 9%?

BEC-9  [Compute the present value of a single-sum investment. (SO 5)]  Gonzalez Company is considering an investment that will return a lump sum of $500,000 5 years from now. What amount should Gonzalez Company pay for this investment in order to earn a 10% return?

BEC-10  [Compute the present value of a single-sum investment. (SO 5)]  Lasorda Company earns 9% on an investment that will return $875,000 8 years from now. What is the amount Lasorda should invest now in order to earn this rate of return?

BEC-11  [Compute the present value of an annuity investment. (SO 6)]  Bosco Company is considering investing in an annuity contract that will return $30,000 annually at the end of each year for 15 years. What amount should Bosco Company pay for this investment if it earns a 6% return?

BEC-12  [Compute the present value of an annuity investment. (SO 6)]  Modine Enterprises earns 11% on an investment that pays back $120,000 at the end of each of the next 4 years. What is the amount Modine Enterprises invested to earn the 11% rate of return?

BEC-13  [Compute the present value of bonds. (SO 5, 6, 7)]  Midwest Railroad Co. is about to issue $100,000 of 10-year bonds paying a 10% interest rate, with interest payable semiannually. The discount rate for such securities is 8%. How much can Midwest expect to receive from the sale of these bonds?

BEC-14  [Compute the present value of bonds. (SO 5, 6, 7)]  Assume the same information as in BEC-13 except that the discount rate is 10% instead of 8%. In this case, how much can Midwest expect to receive from the sale of these bonds?

BEC-15  [Compute the present value of a note. (SO 5, 6, 7)]  Lounsbury Company receives a $50,000, 6-year note bearing interest of 8% (paid annually) from a customer at a time when the discount rate is 9%. What is the present value of the note received by Lounsbury Company?

BEC-16  [Compute the present value of bonds. (SO 5, 6, 7)]  Hartzler Enterprises issued 8%, 8-year, $2,000,000 par value bonds that pay interest semiannually on October 1 and April 1. The bonds are dated April 1, 2008, and are issued on that date. The discount rate of interest for such bonds on April 1, 2008, is 10%.
What cash proceeds did Hartzler receive from issuance of the bonds?

BEC-17  [Compute the value of a machine for purposes of making a purchase decision. (SO 7)]  Vinny Carpino owns a garage and is contemplating purchasing a tire retreading machine for $16,280. After estimating costs and revenues, Vinny projects a net cash flow from the retreading machine of $3,000 annually for 8 years. Vinny hopes to earn a return of 11% on such investments. What is the present value of the retreading operation? Should Vinny Carpino purchase the retreading machine?

BEC-18  [Compute the present value of a note. (SO 5, 6)]  Rodriguez Company issues a 10%, 6-year mortgage note on January 1, 2008, to obtain financing for new equipment. Land is used as collateral for the note. The terms provide for semiannual installment payments of $56,413. What were the cash proceeds received from the issuance of the note?

BEC-19  [Compute the maximum price to pay for the equipment. (SO 7)]  Goltra Company is considering purchasing equipment. The equipment will produce the following cash flows: Year 1, $30,000; Year 2, $40,000; Year 3, $50,000. Goltra requires a minimum rate of return of 12%. What is the maximum price Goltra should pay for this equipment?

BEC-20  [Compute the interest rate on a single sum. (SO 5)]  If Maria Sanchez invests $3,152 now, she will receive $10,000 at the end of 15 years. What annual rate of interest will Maria earn on her investment? (Hint: Use Table 3.)

BEC-21  [Compute the number of periods of a single sum. (SO 5)]  Lori Burke has been offered the opportunity of investing $42,410 now. The investment will earn 10% per year and at the end of that time will return Lori $100,000. How many years must Lori wait to receive $100,000? (Hint: Use Table 3.)

BEC-22  [Compute the interest rate on an annuity. (SO 6)]  Nancy Burns purchased an investment for $12,462.21. From this investment, she will receive $1,000 annually for the next 20 years, starting one year from now. What rate of interest will Nancy's investment be earning for her? (Hint: Use Table 4.)

BEC-23  [Compute the number of periods of an annuity. (SO 6)]  Betty Estes invests $7,536.08 now for a series of $1,000 annual returns, beginning one year from now. Betty will earn a return of 8% on the initial investment. How many annual payments of $1,000 will Betty receive? (Hint: Use Table 4.)
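All of the table-based exercises above rest on four compound-interest formulas, and the calculator exercises that follow simply solve those formulas for an unknown. As an illustrative sketch (the function names are mine, not the text's), the four table factors and a bisection search for the unknown rate look like this:

```python
# Sketch of the compound-interest formulas behind Tables 1-4,
# plus a rate solver for exercises like BEC-20 and BEC-22.

def fv_of_1(i, n):
    """Table 1 factor: future value of 1 at rate i for n periods."""
    return (1 + i) ** n

def fv_annuity_of_1(i, n):
    """Table 2 factor: future value of an annuity of 1."""
    return ((1 + i) ** n - 1) / i

def pv_of_1(i, n):
    """Table 3 factor: present value of 1."""
    return 1 / (1 + i) ** n

def pv_annuity_of_1(i, n):
    """Table 4 factor: present value of an annuity of 1."""
    return (1 - 1 / (1 + i) ** n) / i

def solve_rate(factor_fn, target, n, lo=1e-6, hi=1.0):
    """Bisection search for the rate i with factor_fn(i, n) == target,
    mimicking what a financial calculator does internally."""
    f = lambda i: factor_fn(i, n) - target
    for _ in range(200):
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:   # root lies in the lower half
            hi = mid
        else:                     # root lies in the upper half
            lo = mid
    return (lo + hi) / 2

# Illustration C-26: 180 monthly payments of $700 at 8.4%/12 per month
mortgage_pv = 700 * pv_annuity_of_1(0.084 / 12, 180)  # about $71,509.81

# BEC-20: $3,152 grows to $10,000 in 15 years -> a rate of about 8%
bec20_rate = solve_rate(pv_of_1, 3152 / 10000, 15)
```

The same `solve_rate` call works for BEC-22 by passing `pv_annuity_of_1` with target 12.46221 and n = 20, which lands on 5%.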
BEC-24  [Determine interest rate. (SO 8)]  Reba McEntire wishes to invest $19,000 on July 1, 2008, and have it accumulate to $49,000 by July 1, 2018.
Instructions: Use a financial calculator to determine at what exact annual rate of interest Reba must invest the $19,000.

BEC-25  [Determine interest rate. (SO 8)]  On July 17, 2008, Tim McGraw borrowed $42,000 from his grandfather to open a clothing store. Starting July 17, 2009, Tim has to make 10 equal annual payments of $6,500 each to repay the loan.
Instructions: Use a financial calculator to determine what interest rate Tim is paying.

BEC-26  [Determine interest rate. (SO 8)]  As the purchaser of a new house, Patty Loveless has signed a mortgage note to pay the Memphis National Bank and Trust Co. $14,000 every 6 months for 20 years, at the end of which time she will own the house. At the date the mortgage is signed the purchase price was $198,000, and Loveless made a down payment of $20,000. The first payment will be made 6 months after the date the mortgage is signed.
Instructions: Using a financial calculator, compute the exact rate of interest earned on the mortgage by the bank.

BEC-27  [Various time value of money situations. (SO 8)]  Using a financial calculator, solve for the unknowns in each of the following situations.
(a) On June 1, 2008, Shelley Long purchases lakefront property from her neighbor, Joey Brenner, and agrees to pay the purchase price in seven payments of $16,000 each, the first payment to be payable June 1, 2009. (Assume that interest compounded at an annual rate of 7.35% is implicit in the payments.) What is the purchase price of the property?
(b) On January 1, 2008, Cooke Corporation purchased 200 of the $1,000 face value, 8% coupon, 10-year bonds of Howe Inc. The bonds mature on January 1, 2018, and pay interest annually beginning January 1, 2009. Cooke purchased the bonds to yield 10.65%. How much did Cooke pay for the bonds?

BEC-28  [Various time value of money situations. (SO 8)]  Using a financial calculator, provide a solution to each of the following situations.
(a) Bill Schroeder owes a debt of $35,000 from the purchase of his new sport utility vehicle. The debt bears annual interest of 9.1% compounded monthly. Bill wishes to pay the debt and interest in equal monthly payments over 8 years, beginning one month hence. What equal monthly payments will pay off the debt and interest?
(b) On January 1, 2008, Sammy Sosa offers to buy Mark Grace's used snowmobile for $8,000, payable in five equal annual installments, which are to include 8.25% interest on the unpaid balance and a portion of the principal. If the first payment is to be made on December 31, 2008, how much will each payment be?

Appendix D  Payroll Accounting

STUDY OBJECTIVES
After studying this appendix, you should be able to:
1. Discuss the objectives of internal control for payroll.
2. Compute and record the payroll for a pay period.
3. Describe and record employer payroll taxes.

Payroll and related fringe benefits often make up a large percentage of current liabilities. Employee compensation is often the most significant expense that a company incurs.
For example, Costco recently reported total employees of 103,000 and labor and fringe benefits costs that approximated 70% of the company's total cost of operations. Payroll accounting involves more than paying employees' wages. Companies are required by law to maintain payroll records for each employee, to file and pay payroll taxes, and to comply with numerous state and federal tax laws related to employee compensation. Accounting for payroll has become much more complex due to these regulations.

PAYROLL DEFINED
The term payroll pertains to both salaries and wages. Managerial, administrative, and sales personnel are generally paid salaries. Salaries are often expressed in terms of a specified amount per month or per year rather than an hourly rate. Store clerks, factory employees, and manual laborers are normally paid wages. Wages are based on a rate per hour or on a piecework basis (such as per unit of product). Frequently, people use the terms salaries and wages interchangeably.

The term payroll does not apply to payments made for services of professionals such as certified public accountants, attorneys, and architects. Such professionals are independent contractors rather than salaried employees. Payments to them are called fees. This distinction is important because government regulations relating to the payment and reporting of payroll taxes apply only to employees.

INTERNAL CONTROL OF PAYROLL
[STUDY OBJECTIVE 1: Discuss the objectives of internal control for payroll.]
Chapter 8 introduced internal control. As applied to payrolls, the objectives of internal control are (1) to safeguard company assets against unauthorized payments of payroll and (2) to ensure the accuracy and reliability of the accounting records pertaining to payrolls. Irregularities often result if internal control is lax.
Methods of theft involving payroll include overstating hours, using unauthorized pay rates, adding fictitious employees to the payroll, continuing terminated employees on the payroll, and distributing duplicate payroll checks. Moreover, inaccurate records will result in incorrect paychecks, financial statements, and payroll tax returns.

Payroll activities involve four functions: hiring employees, timekeeping, preparing the payroll, and paying the payroll. For effective internal control, the company should assign these four functions to different departments or individuals. To illustrate these functions, we will examine the case of Academy Company and one of its employees, Michael Jordan.

Hiring Employees
[The human resources department documents and authorizes employment.]
The human resources (personnel) department is responsible for posting job openings, screening and interviewing applicants, and hiring employees. From a control standpoint, this department provides significant documentation and authorization. When an employee is hired, the human resources department prepares an authorization form. The one used by Academy Company for Michael Jordan is shown in Illustration D-1.

[Illustration D-1: Authorization form prepared by the human resources department. The Academy Company form records the new hire's name (Michael Jordan), Social Security number (329-36-9547), division (Entertainment), department (Shipping), classification (Clerk, Skilled-Level 10), starting date (9/01/06), starting rate ($10.00 per hour), and a merit rate change to $12.00 per hour effective 9/1/07, along with sections for separation data and approval signatures.]

The human resources department sends the authorization form to the payroll department, where it is used to place the new employee on the payroll. A chief concern of the human resources department is ensuring the accuracy of this form. The reason is quite simple: One of the most common types of payroll frauds is adding fictitious employees to the payroll. The human resources department is also responsible for authorizing changes in employment status. Specifically, they must authorize (1) changes in pay rates and (2) terminations of employment. Every authorization should be in writing, and a copy of the change in status should be sent to the payroll department. Notice in Illustration D-1 that Jordan received a pay increase of $2 per hour.

Timekeeping
[Supervisors monitor hours worked through time cards and time reports.]
Another area in which internal control is important is timekeeping. Hourly employees are usually required to record time worked by punching a time clock. The employee inserts a time card into the clock, which automatically records the employee's arrival and departure times. Illustration D-2 shows Michael Jordan's time card.

[Illustration D-2: Time card for Michael Jordan (No. 17) for the pay period ending 1/14/08, showing daily in and out times, 40 regular hours, and 4 hours of extra time.]

In large companies, time clock procedures are often monitored by a supervisor or security guard to make sure an employee punches only his or her own card. At the end of the pay period, each employee's supervisor approves the hours shown by signing the time card. When overtime hours are involved, approval by a supervisor is usually mandatory. This guards against unauthorized overtime. The approved time cards are then sent to the payroll department. For salaried employees, a manually prepared weekly or monthly time report kept by a supervisor may be used to record time worked.

Preparing the Payroll
[Two (or more) employees verify payroll amounts; a supervisor approves.]
The payroll department prepares the payroll on the basis of two inputs: (1) human resources department authorizations and (2) approved time cards. Numerous calculations are involved in determining gross wages and payroll deductions. Therefore, a second payroll department employee, working independently, verifies all calculated amounts, and a payroll department supervisor then approves the payroll. The payroll department is also responsible for preparing (but not signing) payroll checks, maintaining payroll records, and preparing payroll tax returns.

Paying the Payroll
[The treasurer signs and distributes checks.]
The treasurer's department pays the payroll. Payment by check minimizes the risk of loss from theft, and the endorsed check provides proof of payment. For good internal control, payroll checks should be prenumbered, and all checks should be accounted for. All checks must be signed by the treasurer (or a designated agent). Distribution of the payroll checks to employees should be controlled by the treasurer's department. Many employees have their pay credited electronically to their bank accounts. To control these disbursements, the company provides to employees receipts detailing gross pay, deductions, and net pay.
Occasionally companies pay the payroll in currency. In such cases it is customary to have a second person count the cash in each pay envelope. The paymaster should obtain a signed receipt from the employee upon payment.

DETERMINING THE PAYROLL
[STUDY OBJECTIVE 2: Compute and record the payroll for a pay period.]
Determining the payroll involves computing three amounts: (1) gross earnings, (2) payroll deductions, and (3) net pay.

Gross Earnings
Gross earnings is the total compensation earned by an employee. It consists of wages or salaries, plus any bonuses and commissions. Companies determine total wages for an employee by multiplying the hours worked by the hourly rate of pay. In addition to the hourly pay rate, most companies are required by law to pay hourly workers a minimum of 1 1/2 times the regular hourly rate for overtime work in excess of eight hours per day or 40 hours per week. In addition, many employers pay overtime rates for work done at night, on weekends, and on holidays.

For example, assume that Michael Jordan, an employee of Academy Company, worked 44 hours for the weekly pay period ending January 14. His regular wage is $12 per hour. For any hours in excess of 40, the company pays at one-and-a-half times the regular rate. Academy computes Jordan's gross earnings (total wages) as follows.

Illustration D-3: Computation of total wages

Type of Pay    Hours    Rate    Gross Earnings
Regular          40     $12           $480
Overtime          4      18             72
Total wages                           $552

This computation assumes that Jordan receives 1 1/2 times his regular hourly rate ($12 × 1.5) for his overtime hours. Union contracts often require that overtime rates be as much as twice the regular rates.

An employee's salary is generally based on a monthly or yearly rate. The company then prorates these rates to its payroll periods (e.g., biweekly or monthly). Most executive and administrative positions are salaried. Federal law does not require overtime pay for employees in such positions.

Many companies have bonus agreements for employees. One survey found that over 94% of the largest U.S. manufacturing companies offer annual bonuses to key executives. Bonus arrangements may be based on such factors as increased sales or net income. Companies may pay bonuses in cash and/or by granting employees the opportunity to acquire shares of company stock at favorable prices (called stock option plans).

[ETHICS NOTE: Bonuses often reward outstanding individual performance, but successful corporations also need considerable teamwork. A challenge is to motivate individuals while preventing an unethical employee from taking another's idea for his or her own advantage.]

Payroll Deductions
As anyone who has received a paycheck knows, gross earnings are usually very different from the amount actually received. The difference is due to payroll deductions. Payroll deductions may be mandatory or voluntary. Mandatory deductions are required by law and consist of FICA taxes and income taxes. Voluntary deductions are at the option of the employee. Illustration D-4 summarizes common types of payroll deductions. Such deductions do not result in payroll tax expense to the employer. The employer is merely a collection agent, and subsequently transfers the deducted amounts to the government and designated recipients.

[Illustration D-4: Payroll deductions. Gross pay less mandatory deductions (FICA taxes; federal, state, and city income taxes) and voluntary deductions (charity; insurance, pensions, and/or union dues) equals net pay.]

FICA TAXES
In 1937 Congress enacted the Federal Insurance Contribution Act (FICA). FICA taxes are designed to provide workers with supplemental retirement, employment disability, and medical benefits. In 1965, Congress extended benefits to include Medicare for individuals over 65 years of age. The benefits are financed by a tax levied on employees' earnings. FICA taxes are commonly referred to as Social Security taxes. Congress sets the tax rate and the tax base for FICA taxes.
When FICA taxes were first imposed, the rate was 1% on the first $3,000 of gross earnings, or a maximum of $30 per year. The rate and base have changed dramatically since that time! In 2007, the rate was 7.65% (6.2% Social Security plus 1.45% Medicare) on the first $97,500 of gross earnings for each employee.¹ For purposes of illustration in this chapter, we will assume a rate of 8% on the first $97,500 of gross earnings, or a maximum of $7,800. Using the 8% rate, the FICA withholding for Jordan for the weekly pay period ending January 14 is $44.16 ($552 × 8%).

¹ The Medicare provision also includes a tax of 1.45% on gross earnings in excess of $97,500. In the interest of simplification, we ignore this 1.45% charge in our end-of-chapter assignment material. We assume zero FICA withholdings on gross earnings above $97,500.

INCOME TAXES
Under the U.S. pay-as-you-go system of federal income taxes, employers are required to withhold income taxes from employees each pay period. Three variables determine the amount to be withheld: (1) the employee's gross earnings; (2) the number of allowances claimed by the employee; and (3) the length of the pay period. The number of allowances claimed typically includes the employee, his or her spouse, and other dependents. To indicate to the Internal Revenue Service the number of allowances claimed, the employee must complete an Employee's Withholding Allowance Certificate (Form W-4). As shown in Illustration D-5, Michael Jordan claims two allowances on his W-4.

[Illustration D-5: Form W-4, Employee's Withholding Allowance Certificate, completed by Michael Jordan (2345 Mifflin Ave., Hampton, MI 48292; Social Security number 329-36-9547), filing status Married, claiming 2 total allowances, signed September 1, 2008.]

Withholding tables furnished by the Internal Revenue Service indicate the amount of income tax to be withheld. Withholding amounts are based on gross wages and the number of allowances claimed. Separate tables are provided for weekly, biweekly, semimonthly, and monthly pay periods. Illustration D-6 shows the withholding tax table for Michael Jordan (assuming he earns $552 per week and claims two allowances). For a weekly salary of $552 with two allowances, the income tax to be withheld is $49.

In addition, most states (and some cities) require employers to withhold income taxes from employees' earnings. As a rule, the amounts withheld are a percentage (specified in the state revenue code) of the amount withheld for the federal income tax. Or they may be a specified percentage of the employee's earnings. For the sake of simplicity, we have assumed that Jordan's wages are subject to state income taxes of 2%, or $11.04 (2% × $552) per week.
There is no limit on the amount of gross earnings subject to income tax withholdings. In fact, under our progressive system of taxation, the higher the earnings, the higher the percentage of income withheld for taxes.

[Illustration D-6: Withholding tax table, Married Persons, Weekly Payroll Period (for wages paid in 2008). Each row covers a wage bracket ("At least ... But less than ...") and each column a number of withholding allowances (0 through 10). For wages of at least $550 but less than $560, for example, the tax withheld is $65 with zero allowances, $57 with one allowance, and $49 with two allowances, decreasing further as more allowances are claimed.]

OTHER DEDUCTIONS
Employees may voluntarily authorize withholdings for charitable, retirement, and other purposes. All voluntary deductions from gross earnings should be authorized in writing by the employee. The authorization(s) may be made individually or as part of a group plan. Deductions for charitable organizations, such as the United Way, or for financial arrangements, such as U.S. savings bonds and repayment of loans from company credit unions, are made individually. Deductions for union dues, health and life insurance, and pension plans are often made on a group basis. We will assume that Jordan has weekly voluntary deductions of $10 for the United Way and $5 for union dues.
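The table lookup itself is mechanical: find the wage-bracket row, then read across to the allowance column. A sketch using a few rows transcribed from the Illustration D-6 excerpt (treat these numbers as illustrative of the 2008 table, not authoritative, and the function name as mine):

```python
# A few rows transcribed from the weekly withholding table in
# Illustration D-6 (married persons, wages paid in 2008); the list
# holds the tax withheld for 0, 1, and 2 allowances respectively.
WITHHOLDING_ROWS = [
    # (at least, but less than, [0 allow., 1 allow., 2 allow.])
    (540, 550, [63, 55, 48]),
    (550, 560, [65, 57, 49]),
    (560, 570, [66, 58, 51]),
]

def federal_withholding(weekly_wages, allowances):
    """Find the bracket row, then read the allowance column."""
    for at_least, less_than, amounts in WITHHOLDING_ROWS:
        if at_least <= weekly_wages < less_than:
            return amounts[allowances]
    raise ValueError("wage bracket not in this excerpt")

# Jordan: $552 per week with two allowances -> $49 withheld
jordan_federal_tax = federal_withholding(552, 2)
```

A real payroll system would load the full IRS table rather than an excerpt, but the bracket-then-column lookup is the same.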
Net Pay
[ALTERNATIVE TERMINOLOGY: Net pay is also called take-home pay.]
Academy Company determines net pay by subtracting payroll deductions from gross earnings. Illustration D-7 shows the computation of Jordan's net pay for the pay period.

Illustration D-7: Computation of net pay

Gross earnings                       $552.00
Payroll deductions:
  FICA taxes                $44.16
  Federal income taxes       49.00
  State income taxes         11.04
  United Way                 10.00
  Union dues                  5.00   119.20
Net pay                              $432.80

Assuming that Michael Jordan's wages for each week during the year are $552, total wages for the year are $28,704 (52 × $552). Thus, all of Jordan's wages are subject to FICA tax during the year. In comparison, let's assume that Jordan's department head earns $2,000 per week, or $104,000 for the year. Since only the first $97,500 is subject to FICA taxes, the maximum FICA withholdings on the department head's earnings would be $7,800 ($97,500 × 8%).

RECORDING THE PAYROLL
Recording the payroll involves maintaining payroll department records, recognizing payroll expenses and liabilities, and recording payment of the payroll.

Maintaining Payroll Department Records
To comply with state and federal laws, an employer must keep a cumulative record of each employee's gross earnings, deductions, and net pay during the year. The record that provides this information is the employee earnings record. Illustration D-8 shows Michael Jordan's employee earnings record.

[Illustration D-8: Employee earnings record for Michael Jordan for the year 2008, showing for each pay period the total hours worked; regular, overtime, total, and cumulative gross earnings; each deduction (FICA, federal income tax, state income tax, United Way, union dues); net pay; and check number. The week ending 1/14, for example, shows 44 hours, gross earnings of $552.00, deductions of $119.20, and net pay of $432.80 paid with check No. 1028; January totals are gross earnings of $2,118.00, deductions of $452.80, and net pay of $1,665.20.]

Companies keep a separate earnings record for each employee, and update these records after each pay period. The employer uses the cumulative payroll data on the earnings record to: (1) determine when an employee has earned the maximum earnings subject to FICA taxes, (2) file state and federal payroll tax returns (as explained later), and (3) provide each employee with a statement of gross earnings and tax withholdings for the year. (Illustration D-12 on page D13 shows this statement.)

In addition to employee earnings records, many companies find it useful to prepare a payroll register. This record accumulates the gross earnings, deductions, and net pay by employee for each pay period. It provides the documentation for preparing a paycheck for each employee. Illustration D-9 presents Academy Company's payroll register. It shows the data for Michael Jordan in the wages section. In this example, Academy Company's total weekly payroll is $17,210, as shown in the gross earnings column. Note that this record is a listing of each employee's payroll data for the pay period.
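The net-pay computation in Illustration D-7, together with the FICA wage-base cap just discussed, can be sketched end to end. The 8% FICA rate on the first $97,500 and the 2% state rate are the chapter's simplifying assumptions; the function and parameter names are mine:

```python
FICA_RATE = 0.08          # chapter's simplified rate
FICA_WAGE_BASE = 97500.0  # annual cap on FICA-taxable earnings
STATE_TAX_RATE = 0.02     # assumed state income tax rate

def fica_withholding(gross, cumulative_prior_wages=0.0):
    """8% of gross, but only on earnings up to the annual wage base."""
    taxable = max(0.0, min(gross, FICA_WAGE_BASE - cumulative_prior_wages))
    return round(taxable * FICA_RATE, 2)

def net_pay(gross, federal_tax, voluntary, cumulative_prior_wages=0.0):
    """Gross earnings less mandatory and voluntary deductions."""
    deductions = (fica_withholding(gross, cumulative_prior_wages)
                  + federal_tax
                  + round(gross * STATE_TAX_RATE, 2)
                  + sum(voluntary))
    return round(gross - deductions, 2)

# Jordan, week ending January 14: $552 gross, $49 federal withholding,
# $10 United Way and $5 union dues -> $432.80 take-home pay
jordan_net = net_pay(552.00, 49.00, [10.00, 5.00])
```

Once `cumulative_prior_wages` reaches the $97,500 base, `fica_withholding` returns zero, which is how the department head's withholdings stop at $7,800 for the year.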
Illustration D-9  Payroll register

ACADEMY COMPANY
Payroll Register
For the Week Ending January 14, 2008

Employee            Hours  Regular    Overtime  Gross      FICA      Fed.      State    United  Union   Total     Net Pay    Check
                                                                     Inc.Tax   Inc.Tax  Way     Dues    Deduct.              No.
Office Salaries
  Arnold, Patricia  40     580.00               580.00     46.40     61.00     11.60    15.00           134.00    446.00     998
  Canton, Matthew   40     590.00               590.00     47.20     63.00     11.80    20.00           142.00    448.00     999
  ...
  Mueller, William  40     530.00               530.00     42.40     54.00     10.60    11.00           118.00    412.00     1000
  Subtotal                 5,200.00             5,200.00   416.00    1,090.00  104.00   120.00          1,730.00  3,470.00
Wages
  Bennett, Robin    42     480.00    36.00      516.00     41.28     43.00     10.32    18.00   5.00    117.60    398.40     1025
  Jordan, Michael   44     480.00    72.00      552.00     44.16     49.00     11.04    10.00   5.00    119.20    432.80     1028
  ...
  Milroy, Lee       43     480.00    54.00      534.00     42.72     46.00     10.68    10.00   5.00    114.40    419.60     1029
  Subtotal                 11,000.00 1,010.00   12,010.00  960.80    2,400.00  240.20   301.50  115.00  4,017.50  7,992.50
Total                      16,200.00 1,010.00   17,210.00  1,376.80  3,490.00  344.20   421.50  115.00  5,747.50  11,462.50

In the accounts-debited columns, the register charges the gross earnings to Office Salaries Expense ($5,200.00) and Wages Expense ($12,010.00).

In some companies, a payroll register is a journal or book of original entry; postings are made from the payroll register directly to ledger accounts. In other companies, the payroll register is a memorandum record that provides the data for a general journal entry and subsequent posting to the ledger accounts. At Academy Company, the latter procedure is followed.

Recognizing Payroll Expenses and Liabilities
From the payroll register in Illustration D-9, Academy Company makes a journal entry to record the payroll. For the week ending January 14 the entry is:
Jan. 14   Office Salaries Expense                      5,200.00
          Wages Expense                               12,010.00
               FICA Taxes Payable                                  1,376.80
               Federal Income Taxes Payable                        3,490.00
               State Income Taxes Payable                            344.20
               United Way Payable                                    421.50
               Union Dues Payable                                    115.00
               Salaries and Wages Payable                         11,462.50
          (To record payroll for the week ending January 14)

Cash flows: no effect.

The company credits specific liability accounts for the mandatory and voluntary deductions made during the pay period. In the example, Academy debits Office Salaries Expense for the gross earnings of salaried office workers, and it debits Wages Expense for the gross earnings of employees who are paid at an hourly rate. Other companies may debit other accounts such as Store Salaries or Sales Salaries. The amount credited to Salaries and Wages Payable is the sum of the individual checks the employees will receive.

Recording Payment of the Payroll
A company makes payments by check (or electronic funds transfer) either from its regular bank account or a payroll bank account. Each paycheck is usually accompanied by a detachable statement of earnings document. This shows the employee's gross earnings, payroll deductions, and net pay, both for the period and for the year-to-date. Academy Company uses its regular bank account for payroll checks. Illustration D-10 shows the paycheck and statement of earnings for Michael Jordan.

Illustration D-10  Paycheck and statement of earnings

[Paycheck: Check No. 1028 of Academy Company, 19 Center St., Hampton, MI 48291, drawn on City Bank & Trust, P.O. Box 3000, Hampton, MI 48291, payable to Michael Jordan.]

HELPFUL HINT: Do any of the income tax liabilities result in payroll tax expense for the employer? Answer: No. The employer is acting only as a collection agent for the government.
[Statement of earnings (detachable portion), Michael Jordan, SSN 329-36-9547, 2 exemptions, pay period ending 1/14/08: regular 40 hrs, $480.00; overtime 4 hrs, $72.00; gross $552.00. Deductions: federal W/H tax $49.00, FICA $44.16, state tax $11.04, other deductions $10.00 and $5.00; net pay $432.80. Year to date: federal W/H tax $92.00, FICA $85.44, state tax $21.36, other deductions $20.00 and $10.00; net pay $839.20.]

Following payment of the payroll, the company enters the check numbers in the payroll register. Academy Company records payment of the payroll as follows.

Jan. 14   Salaries and Wages Payable                  11,462.50
               Cash                                               11,462.50
          (To record payment of payroll)

When a company uses currency in payment, it prepares one check for the payroll's total amount of net pay. The company cashes this check, and inserts the coins and currency in individual pay envelopes for disbursement to individual employees.

Before You Go On...
REVIEW IT
1. Identify two internal control procedures that apply to each payroll function.
2. What are the primary sources of gross earnings?
3. What payroll deductions are (a) mandatory and (b) voluntary?
4. What account titles do companies use in recording a payroll, assuming only mandatory payroll deductions are involved?

DO IT
Your cousin Stan is establishing a house-cleaning business and will have a number of employees working for him. He is aware that documentation procedures are an important part of internal control. But he is unsure about the difference between an employee earnings record and a payroll register. He asks you to explain the principal differences, because he wants to be sure that he sets up the proper payroll procedures.

Action Plan
Determine the earnings and deductions data that must be recorded and reported for each employee.
Design a record that will accumulate earnings and deductions data and will serve as a basis for journal entries to be prepared and posted to the general ledger accounts.
Explain the difference between the employee earnings record and the payroll register.

Solution
An employee earnings record is kept for each employee. It shows gross earnings, payroll deductions, and net pay for each pay period, as well as cumulative payroll data for that employee. In contrast, a payroll register is a listing of all employees' gross earnings, payroll deductions, and net pay for each pay period. It is the documentation for preparing paychecks and for recording the payroll. Of course, Stan will need to keep both documents.

Related exercise material: BED-1, BED-3, and ED-1.

EMPLOYER PAYROLL TAXES

STUDY OBJECTIVE 3: Describe and record employer payroll taxes.

Payroll tax expense for businesses results from three taxes that governmental agencies levy on employers. These taxes are: (1) FICA, (2) federal unemployment tax, and (3) state unemployment tax. These taxes plus such items as paid vacations and pensions (discussed in the appendix to this chapter) are collectively referred to as fringe benefits. As indicated earlier, the cost of fringe benefits in many companies is substantial. The pie chart in the margin shows the pieces of the benefits pie.

FICA Taxes
Each employee must pay FICA taxes. In addition, employers must match each employee's FICA contribution. The matching contribution results in payroll tax expense to the employer. The employer's tax is subject to the same rate and maximum earnings as the employee's. The company uses the same account, FICA Taxes Payable, to record both the employee's and the employer's FICA contributions. For the January 14 payroll, Academy Company's FICA tax contribution is $1,376.80 ($17,210.00 × 8%).
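Because the employer's FICA contribution mirrors the employee withholding exactly, both sides of the January 14 payroll can be checked with one calculation. A sketch, with variable names of my own and the chapter's assumed 8% rate:

```python
FICA_RATE = 0.08

gross_payroll = 17_210.00  # total gross pay, week ending January 14

employee_fica = round(gross_payroll * FICA_RATE, 2)  # withheld from employees
employer_fica = employee_fica                        # employer matches dollar for dollar

# Both amounts are credited to the same account, FICA Taxes Payable
fica_taxes_payable = round(employee_fica + employer_fica, 2)
```

With the employer match added, FICA Taxes Payable carries $2,753.60 for the period: $1,376.80 withheld from employees plus the $1,376.80 employer contribution.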
[Benefits pie chart: 37% vacation and other benefits such as parental and sick leaves and child care; 24% medical benefits; 23% legally required benefits such as Social Security; 13% retirement income such as pensions; 3% disability and life insurance.]

Federal Unemployment Taxes
The Federal Unemployment Tax Act (FUTA) is another feature of the federal Social Security program. Federal unemployment taxes provide benefits for a limited period of time to employees who lose their jobs through no fault of their own. The FUTA tax rate is 6.2% of taxable wages. The taxable wage base is the first $7,000 of wages paid to each employee in a calendar year. Employers who pay the state unemployment tax on a timely basis will receive an offset credit of up to 5.4%. Therefore, the net federal tax rate is generally 0.8% (6.2% minus 5.4%). This rate would equate to a maximum of $56 of federal tax per employee per year (0.8% × $7,000). State tax rates are based on state law. The employer bears the entire federal unemployment tax. There is no deduction or withholding from employees. Companies use the account Federal Unemployment Taxes Payable to recognize this liability. The federal unemployment tax for Academy Company for the January 14 payroll is $137.68 ($17,210.00 × 0.8%).

HELPFUL HINT: Both the employer and employee pay FICA taxes. Federal unemployment taxes and (in most states) the state unemployment taxes are borne entirely by the employer.

State Unemployment Taxes
All states have unemployment compensation programs under state unemployment tax acts (SUTA). Like federal unemployment taxes, state unemployment taxes provide benefits to employees who lose their jobs. These taxes are levied on employers.² The basic rate is usually 5.4% on the first $7,000 of wages paid to an employee during the year. The state adjusts the basic rate according to the employer's experience rating:
Companies with a history of stable employment may pay less than 5.4%.
Companies with a history of unstable employment may pay more than the basic rate.
Regardless of the rate paid, the company's credit on the federal unemployment tax is still 5.4%. Companies use the account State Unemployment Taxes Payable for this liability. The state unemployment tax for Academy Company for the January 14 payroll is $929.34 ($17,210.00 × 5.4%). Illustration D-11 summarizes the types of employer payroll taxes.

Illustration D-11  Employer payroll taxes: FICA taxes, federal unemployment taxes, and state unemployment taxes, each computed as a percentage of wages.

Recording Employer Payroll Taxes
Companies usually record employer payroll taxes at the same time they record the payroll. The entire amount of gross pay ($17,210.00) shown in the payroll register in Illustration D-9 is subject to each of the three taxes mentioned above. Accordingly, Academy records the payroll tax expense associated with the January 14 payroll with the following entry.

²In a few states, the employee is also required to make a contribution. In this textbook, including the homework, we will assume that the tax is only on the employer.

Jan. 14   Payroll Tax Expense                          2,443.82
               FICA Taxes Payable                                  1,376.80
               Federal Unemployment Taxes Payable                    137.68
               State Unemployment Taxes Payable                      929.34
          (To record employer's payroll taxes on January 14 payroll)

Cash flows: no effect.

Note that Academy uses separate liability accounts instead of a single credit to Payroll Taxes Payable. Why? Because these liabilities are payable to different taxing authorities at different dates. Companies classify the liability accounts in the balance sheet as current liabilities since they will be paid within the next year. They classify Payroll Tax Expense on the income statement as an operating expense.

FILING AND REMITTING PAYROLL TAXES
Preparation of payroll tax returns is the responsibility of the payroll department.
The treasurer's department makes the tax payment. Much of the information for the returns is obtained from employee earnings records.
For purposes of reporting and remitting to the IRS, the company combines the FICA taxes and federal income taxes that it withheld. Companies must report the taxes quarterly, no later than one month following the close of each quarter. The remitting requirements depend on the amount of taxes withheld and the length of the pay period. Companies remit funds through deposits in either a Federal Reserve bank or an authorized commercial bank.
Companies generally file and remit federal unemployment taxes annually on or before January 31 of the subsequent year. Earlier payments are required when the tax exceeds a specified amount. Companies usually must file and pay state unemployment taxes by the end of the month following each quarter. When payroll taxes are paid, companies debit payroll liability accounts, and credit Cash.
Employers also must provide each employee with a Wage and Tax Statement (Form W-2) by January 31 following the end of a calendar year. This statement shows gross earnings, FICA taxes withheld, and income taxes withheld for the year. The required W-2 form for Michael Jordan, using assumed annual data, is shown in Illustration D-12. The employer must send a copy of each employee's

Illustration D-12  W-2 form

Form W-2  Wage and Tax Statement  (OMB No. 1545-0008)  Calendar Year 2008
Employer: Academy Company, 19 Center St., Hampton, MI 48291; employer's identification number 36-2167852
Employee: Michael Jordan, 2345 Mifflin Ave., Hampton, MI 48292; social security number 329-36-9547
Wages, tips, other compensation: $26,300.00; federal income tax withheld: $2,248.00; social security wages: $26,300.00; social security tax withheld: $2,104.00; state wages: $26,300.00; state income tax (Michigan): $526.00.

Wage and Tax Statement (Form W-2) to the Social Security Administration. This agency subsequently furnishes the Internal Revenue Service with the income data required.

HELPFUL HINT: Employers generally transmit their W-2s to the government electronically. The taxing agencies store the information in their computer systems for subsequent comparison against earnings and taxes withheld reported on employees' income tax returns.

Before You Go On...
REVIEW IT
1. What payroll taxes do governments levy on employers?
2. What accounts are involved in accruing employer payroll taxes?

DO IT
In January, the payroll supervisor determines that gross earnings for Halo Company are $70,000. All earnings are subject to 8% FICA taxes, 5.4% state unemployment taxes, and 0.8% federal unemployment taxes. Halo asks you to record the employer's payroll taxes.

Action Plan
Compute the employer's payroll taxes on the period's gross earnings.
Identify the expense account(s) to be debited.
Identify the liability account(s) to be credited.

Solution
The entry to record the employer's payroll taxes is:

          Payroll Tax Expense                             9,940
               FICA Taxes Payable ($70,000 × 8%)                      5,600
               Federal Unemployment Taxes Payable ($70,000 × 0.8%)      560
               State Unemployment Taxes Payable ($70,000 × 5.4%)      3,780
          (To record employer's payroll taxes on January payroll)

Related exercise material: BED-2, BED-3, BED-4, ED-1, ED-2, ED-3, ED-4, and ED-5.
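The Halo Company DO IT above follows the same three-rate pattern used throughout the chapter. A sketch of the arithmetic — variable names are mine, and all of Halo's $70,000 gross is assumed (as the problem states) to be subject to every tax:

```python
gross = 70_000.00

fica = round(gross * 0.08, 2)   # employer FICA match
futa = round(gross * 0.008, 2)  # federal unemployment, net 0.8% rate
suta = round(gross * 0.054, 2)  # state unemployment, 5.4% rate

# Debit Payroll Tax Expense; credit the three separate liability accounts
payroll_tax_expense = round(fica + futa + suta, 2)
```

The result matches the solution entry: $5,600 + $560 + $3,780 = $9,940 of payroll tax expense.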
Demonstration Problem
Indiana Jones Company had the following selected transactions.
Feb.  1  Signs a $50,000, 6-month, 9%-interest-bearing note payable to CitiBank and receives $50,000 in cash.
     10  Cash register sales total $43,200, which includes an 8% sales tax.
     28  The payroll for the month consists of Sales Salaries $32,000 and Office Salaries $18,000. All wages are subject to 8% FICA taxes. A total of $8,900 federal income taxes are withheld. The salaries are paid on March 1.
     28  The following adjustment data are developed.
         1. Interest expense of $375 has been incurred on the note.
         2. Employer payroll taxes include 8% FICA taxes, a 5.4% state unemployment tax, and a 0.8% federal unemployment tax.

Instructions
(a) Journalize the February transactions.
(b) Journalize the adjusting entries at February 28.

Action Plan: To determine sales, divide the cash register total by 100% plus the sales tax percentage. Base payroll taxes on gross earnings.

Solution
(a)
Feb.  1   Cash                                          50,000
               Notes Payable                                       50,000
          (Issued 6-month, 9%-interest-bearing note to CitiBank)

Feb. 10   Cash                                          43,200
               Sales ($43,200 ÷ 1.08)                              40,000
               Sales Taxes Payable ($40,000 × 8%)                   3,200
          (To record sales and sales taxes payable)

Feb. 28   Sales Salaries Expense                        32,000
          Office Salaries Expense                       18,000
               FICA Taxes Payable (8% × $50,000)                    4,000
               Federal Income Taxes Payable                         8,900
               Salaries Payable                                    37,100
          (To record February salaries)

(b)
Feb. 28   Interest Expense                                 375
               Interest Payable                                       375
          (To record accrued interest for February)

Feb. 28   Payroll Tax Expense                            7,100
               FICA Taxes Payable                                   4,000
               Federal Unemployment Taxes Payable (0.8% × $50,000)    400
               State Unemployment Taxes Payable (5.4% × $50,000)    2,700
          (To record employer's payroll taxes on February payroll)

SUMMARY OF STUDY OBJECTIVES
1 Discuss the objectives of internal control for payroll.
The objectives of internal control for payroll are (1) to safeguard company assets against unauthorized payments of payrolls, and (2) to ensure the accuracy and reliability of the accounting records pertaining to payrolls.
2 Compute and record the payroll for a pay period. The computation of the payroll involves gross earnings, payroll deductions, and net pay. In recording the payroll, Salaries (or Wages) Expense is debited for gross earnings, individual tax and other liability accounts are credited for payroll deductions, and Salaries (Wages) Payable is credited for net pay. When the payroll is paid, Salaries and Wages Payable is debited, and Cash is credited.
3 Describe and record employer payroll taxes. Employer payroll taxes consist of FICA, federal unemployment taxes, and state unemployment taxes. The taxes are usually accrued at the time the payroll is recorded by debiting Payroll Tax Expense and crediting separate liability accounts for each type of tax.

GLOSSARY
Bonus  Compensation to management personnel and other employees, based on factors such as increased sales or the amount of net income. (p. D4)
Employee earnings record  A cumulative record of each employee's gross earnings, deductions, and net pay during the year. (p. D8)
Employee's Withholding Allowance Certificate (Form W-4)  An Internal Revenue Service form on which the employee indicates the number of allowances claimed for withholding federal income taxes. (p. D6)
Federal unemployment taxes  Taxes imposed on the employer that provide benefits for a limited time period to employees who lose their jobs through no fault of their own. (p. D11)
Fees  Payments made for the services of professionals. (p. D1)
FICA taxes  Taxes designed to provide workers with supplemental retirement, employment disability, and medical benefits. (p. D5)
Gross earnings  Total compensation earned by an employee. (p. D4)
Net pay  Gross earnings less payroll deductions. (p. D7)
Payroll deductions  Deductions from gross earnings to determine the amount of a paycheck. (p. D5)
Payroll register  A payroll record that accumulates the gross earnings, deductions, and net pay by employee for each pay period. (p. D8)
Salaries  Specified amount per month or per year paid to managerial, administrative, and sales personnel. (p. D1)
State unemployment taxes  Taxes imposed on the employer that provide benefits to employees who lose their jobs. (p. D12)
Statement of earnings  A document attached to a paycheck that indicates the employee's gross earnings, payroll deductions, and net pay. (p. D10)
Wage and Tax Statement (Form W-2)  A form showing gross earnings, FICA taxes withheld, and income taxes withheld which is prepared annually by an employer for each employee. (p. D13)
Wages  Amounts paid to employees based on a rate per hour or on a piece-work basis. (p. D1)

SELF-STUDY QUESTIONS
Answers are at the end of the appendix.
1. (SO 1) The department that should pay the payroll is the:
   a. timekeeping department.
   b. human resources department.
   c. payroll department.
   d. treasurer's department.
2. (SO 2) J. Barr earns $14 per hour for a 40-hour week and $21 per hour for any overtime work. If Barr works 45 hours in a week, gross earnings are:
   a. $560.
   b. $630.
   c. $650.
   d. $665.
3. (SO 3) Employer payroll taxes do not include:
   a. federal unemployment taxes.
   b. state unemployment taxes.
   c. federal income taxes.
   d. FICA taxes.

Go to the book's website for Additional Self-Study questions.

QUESTIONS
1. You are a newly hired accountant with Schindlebeck Company. On your first day, the controller asks you to identify the main internal control objectives related to payroll accounting. How would you respond?
2. What are the four functions associated with payroll activities?
3. What is the difference between gross pay and net pay?
Which amount should a company record as wages or salaries expense?
4. Which payroll tax is levied on both employers and employees?
5. Are the federal and state income taxes withheld from employee paychecks a payroll tax expense for the employer? Explain your answer.
6. What do the following acronyms stand for: FICA, FUTA, and SUTA?
7. What information is shown on a W-4 statement? On a W-2 statement?
8. Distinguish between the two types of payroll deductions and give examples of each.
9. What are the primary uses of the employee earnings record?
10. (a) Identify the three types of employer payroll taxes. (b) How are tax liability accounts and Payroll Tax Expense classified in the financial statements?

BRIEF EXERCISES
BED-1  Identify payroll functions. (SO 1)
Hernandez Company has the following payroll procedures.
(a) Supervisor approves overtime work.
(b) The human resources department prepares hiring authorization forms for new hires.
(c) A second payroll department employee verifies payroll calculations.
(d) The treasurer's department pays employees.
Identify the payroll function to which each procedure pertains.

BED-2  Compute gross earnings and net pay. (SO 2)
Sandy Teter's regular hourly wage rate is $16, and she receives an hourly rate of $24 for work in excess of 40 hours. During a January pay period, Sandy works 45 hours. Sandy's federal income tax withholding is $95, and she has no voluntary deductions. Compute Sandy Teter's gross earnings and net pay for the pay period.

BED-3  Record a payroll and the payment of wages. (SO 2)
Data for Sandy Teter are presented in BED-2. Prepare the journal entries to record (a) Sandy's pay for the period and (b) the payment of Sandy's wages. Use January 15 for the end of the pay period and the payment date.

BED-4  Record employer payroll taxes. (SO 3)
In January, gross earnings in Yoon Company totaled $90,000. All earnings are subject to 8% FICA taxes, 5.4% state unemployment taxes, and 0.8% federal unemployment taxes. Prepare the entry to record January payroll tax expense.
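Several of these exercises (BED-2, ED-1, ED-3 below) use the same regular-plus-overtime pattern as the chapter's Michael Jordan example: pay at the regular rate up to 40 hours, and at a premium rate beyond. A sketch — the function name is mine, and Jordan's $12 regular / $18 overtime rates are inferred from his earnings record in the chapter:

```python
def gross_earnings(hours, rate, ot_rate, base=40):
    """Regular pay up to `base` hours; overtime pay at `ot_rate` beyond."""
    regular = min(hours, base) * rate
    overtime = max(0, hours - base) * ot_rate
    return round(regular + overtime, 2)

# Michael Jordan, week ending 1/14: 44 hours at $12, overtime at $18 (1.5x)
jordan_gross = gross_earnings(44, 12.00, 18.00)  # 480.00 regular + 72.00 overtime

# BED-2: Sandy Teter, 45 hours at $16 regular and $24 overtime
teter_gross = gross_earnings(45, 16.00, 24.00)
```

`jordan_gross` reproduces the $552.00 gross in the payroll register; the same function handles any of the hourly employees in the exercises.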
EXERCISES
ED-1  Compute net pay and record pay for one employee. (SO 2)
Betty Williams' regular hourly wage rate is $14, and she receives a wage of 1½ times the regular hourly rate for work in excess of 40 hours. During a March weekly pay period Betty worked 42 hours. Her gross earnings prior to the current week were $6,000. Betty is married and claims three withholding allowances. Her only voluntary deduction is for group hospitalization insurance at $15 per week.
Instructions
(a) Compute the following amounts for Betty's wages for the current week.
    (1) Gross earnings.
    (2) FICA taxes. (Assume an 8% rate on maximum of $97,500.)
    (3) Federal income taxes withheld. (Use the withholding table in the text, page D7.)
    (4) State income taxes withheld. (Assume a 2.0% rate.)
    (5) Net pay.
(b) Record Betty's pay, assuming she is an office computer operator.

ED-2  Compute maximum FICA deductions. (SO 2)
Employee earnings records for Brantley Company reveal the following gross earnings for four employees through the pay period of December 15.
    C. Mays    $83,500        D. Delgado  $95,700
    L. Jeter   $95,200        T. Rolen    $97,500
For the pay period ending December 31, each employee's gross earnings is $3,000. Employees are required to pay a FICA tax rate of 8% on gross earnings up to $97,500.
Instructions
Compute the FICA withholdings that should be made for each employee for the December 31 pay period. (Show computations.)

ED-3  Prepare payroll register and record payroll and payroll tax expense. (SO 2, 3)
Piniella Company has the following data for the weekly payroll ending January 31.

                      Hours                 Hourly   Federal Income    Health
    Employee    M   T   W   T   F   S       Rate     Tax Withholding   Insurance
    M. Hindi    8   8   9   8  10   3       $11      $34               $10
    E. Benson   8   8   8   8   8   2       $13      $37               $15
    K. Estes    9  10   8   8   9   0       $14      $58               $15

Employees are paid 1½ times the regular hourly rate for all hours worked in excess of 40 hours per week.
FICA taxes are 8% on the first $97,500 of gross earnings. Piniella Company is subject to 5.4% state unemployment taxes on the first $9,800 and 0.8% federal unemployment taxes on the first $7,000 of gross earnings.
Instructions
(a) Prepare the payroll register for the weekly payroll.
(b) Prepare the journal entries to record the payroll and Piniella's payroll tax expense.

ED-4  Compute missing payroll amounts and record payroll. (SO 2)
Selected data from a February payroll register for Landmark Company are presented below. Some amounts are intentionally omitted.

    Gross earnings:                     State income taxes       $  (3)
      Regular            $8,900         Union dues                  100
      Overtime              (1)         Total deductions            (4)
      Total                 (2)         Net pay                  $7,215
    Deductions:                         Accounts debited:
      FICA taxes         $  760           Warehouse wages           (5)
      Federal income taxes 1,140          Store wages            $4,000

FICA taxes are 8%. State income taxes are 3% of gross earnings.
Instructions
(a) Fill in the missing amounts.
(b) Journalize the February payroll and the payment of the payroll.

ED-5  Determine employer's payroll taxes; record payroll tax expense. (SO 3)
According to a payroll register summary of Cruz Company, the amount of employees' gross pay in December was $850,000, of which $70,000 was not subject to FICA tax and $760,000 was not subject to state and federal unemployment taxes.
Instructions
(a) Determine the employer's payroll tax expense for the month, using the following rates: FICA 8%, state unemployment 5.4%, federal unemployment 0.8%.
(b) Prepare the journal entry to record December payroll tax expense.

EXERCISES: SET B
Visit the book's website, and choose the Student Companion site, to access Exercise Set B.

PROBLEMS: SET A
PD-1A  Identify internal control weaknesses and make recommendations for improvement. (SO 1)
The payroll procedures used by three different companies are described below.
1. In Brewer Company each employee is required to mark on a clock card the hours worked.
At the end of each pay period, the employee must have this clock card approved by the department manager. The approved card is then given to the payroll department by the employee. Subsequently, the treasurer's department pays the employee by check.
2. In Hilyard Computer Company clock cards and time clocks are used. At the end of each pay period, the department manager initials the cards, indicates the rates of pay, and sends them to payroll. A payroll register is prepared from the cards by the payroll department. Cash equal to the total net pay in each department is given to the department manager, who pays the employees in cash.
3. In Hyun-chan Company employees are required to record hours worked by punching clock cards in a time clock. At the end of each pay period, the clock cards are collected by the department manager. The manager prepares a payroll register in duplicate and forwards the original to payroll. In payroll, the summaries are checked for mathematical accuracy, and a payroll supervisor pays each employee by check.
Instructions
(a) Indicate the weakness(es) in internal control in each company.
(b) For each weakness, describe the control procedure(s) that will provide effective internal control. Use the following format for your answer:
    (a) Weaknesses    (b) Recommended Procedures
(d) Journalize the deposit in a Federal Reserve bank on February 28, 2008, of the FICA and federal income taxes payable to the government. PD-3A The following payroll liability accounts are included in the ledger of Eikleberry Company on January 1, 2008. FICA Taxes Payable Federal Income Taxes Payable State Income Taxes Payable Federal Unemployment Taxes Payable State Unemployment Taxes Payable Union Dues Payable U.S. Savings Bonds Payable In January, the following transactions occurred. Jan. 10 Sent check for $250.00 to union treasurer for union dues. 12 Deposited check for $1,916.80 in Federal Reserve bank for FICA taxes and federal income taxes withheld. 15 Purchased U.S. Savings Bonds for employees by writing check for $350.00. 17 Paid state income taxes withheld from employees. 20 Paid federal and state unemployment taxes. 31 Completed monthly payroll register, which shows office salaries $17,600, store wages $27,400, FICA taxes withheld $3,600, federal income taxes payable $1,770, state income taxes payable $360, union dues payable $400, United Fund contributions payable $1,800, and net pay $37,070. 31 Prepared payroll checks for the net pay and distributed checks to employees. At January 31, the company also makes the following accrual for employer payroll taxes: FICA taxes 8%, state unemployment taxes 5.4%, and federal unemployment taxes 0.8%. Instructions (a) Journalize the January transactions. (b) Journalize the adjustments pertaining to employee compensation at January 31. $ 662.20 1,254.60 102.15 312.00 1,954.40 250.00 350.00 (a) Net pay $1,786.32; Store wages expense $1,614.00 (b) Payroll tax expense $317.79 Journalize payroll transactions and adjusting entries. (SO 2, 3) (b) Payroll tax expense $6,390.00 D20 Appendix D Payroll Accounting PD-4A For the year ended December 31, 2008, R. Visnak Company reports the following summary payroll data. 
Gross earnings: Administrative salaries Electricians wages Total Deductions: FICA taxes Federal income taxes withheld State income taxes withheld (2.6%) United Way contributions payable *Hospital insurance premiums Total $180,000 320,000 $500,000 $ 35,200 153,000 13,000 25,000 15,800 $242,000 Prepare entries for payroll and payroll taxes; prepare W-2 data. (SO 2, 3) R. Visnak Companys payroll taxes are: FICA 8%, state unemployment 2.5% (due to a stable employment record), and 0.8% federal unemployment. Gross earnings subject to FICA taxes total $440,000, and unemployment taxes total $110,000. (a) Wages Payable $258,000 (b) Payroll tax expense $38,830 R. Lopez K. Kirk Gross Earnings $60,000 27,000 Federal Income Tax Withheld $27,500 11,000 PROBLEMS: SET B Identify internal control weaknesses and make recommendations for improvement. (SO 1) PD-1B Selected payroll procedures of Wallace Company are described below. 1. Department managers interview applicants and on the basis of the interview either hire or reject the applicants. When an applicant is hired, the applicant fills out a W-4 form (Employees Withholding Allowance Certificate). One copy of the form is sent to the human resources department, and one copy is sent to the payroll department as notice that the individual has been hired. On the copy of the W-4 sent to payroll, the managers manually indicate the hourly pay rate for the new hire. 2. The payroll checks are manually signed by the chief accountant and given to the department managers for distribution to employees in their department.The managers are responsible for seeing that any absent employees receive their checks. 3. There are two clerks in the payroll department. The payroll is divided alphabetically; one clerk has employees A to L and the other has employees M to Z. Each clerk computes the gross earnings, deductions, and net pay for employees in the section and posts the data to the employee earnings records. 
Instructions
(a) Indicate the weaknesses in internal control.
(b) For each weakness, describe the control procedures that will provide effective internal control. Use the following format for your answer:

    (a) Weaknesses        (b) Recommended Procedures

PD-2B Lee Hardware has four employees who are paid on an hourly basis plus time-and-a-half for all hours worked in excess of 40 a week. Payroll data for the week ended March 15, 2008, are presented below.

Prepare payroll register and payroll entries. (SO 2, 3)

    Employee       Hours Worked   Hourly Rate   Federal Income Tax Withholdings   United Way
    Joe Coomer          40           $15.00                $ ?                      $5.00
    Mary Walker         42            13.00                  ?                       5.00
    Andy Dye            44            13.00                 60                       8.00
    Kim Shen            48            13.00                 67                       5.00

Coomer and Walker

Instructions
(b) Journalize the payroll on March 15, 2008, and the accrual of employer payroll taxes.
(c) Journalize the payment of the payroll on March 16, 2008.
(d) Journalize the deposit in a Federal Reserve bank on March 31, 2008, of the FICA and federal income taxes payable to the government.

(a) Net pay $1,910.37; Store wages expense $1,757
(b) Payroll tax expense $345.48

PD-3B The following payroll liability accounts are included in the ledger of Nordlund Company on January 1, 2008.

Journalize payroll transactions and adjusting entries. (SO 2, 3)

    FICA Taxes Payable                    $  760.00
    Federal Income Taxes Payable           1,204.60
    State Income Taxes Payable               108.95
    Federal Unemployment Taxes Payable       288.95
    State Unemployment Taxes Payable       1,954.40
    Union Dues Payable                       870.00
    U.S. Savings Bonds Payable               360.00

In January, the following transactions occurred.

Jan. 10  Sent check for $870.00 to union treasurer for union dues.
     12  Deposited check for $1,964.60 in Federal Reserve bank for FICA taxes and federal income taxes withheld.
     15  Purchased U.S. Savings Bonds for employees by writing check for $360.00.
     17  Paid state income taxes withheld from employees.
     20  Paid federal and state unemployment taxes.
     31  Completed monthly payroll register, which shows office salaries $21,600, store wages $28,400, FICA taxes withheld $4,000, federal income taxes payable $1,958, state income taxes payable $414, union dues payable $400, United Fund contributions payable $1,888, and net pay $41,340.
     31  Prepared payroll checks for the net pay and distributed checks to employees.

At January 31, the company also makes the following accrued adjustment for employer payroll taxes: FICA taxes 8%, federal unemployment taxes 0.8%, and state unemployment taxes 5.4%.

Instructions
(a) Journalize the January transactions.
(b) Journalize the adjustments pertaining to employee compensation at January 31.

(b) Payroll tax expense $7,100

PD-4B For the year ended December 31, 2008, Niehaus Electrical Repair Company reports the following summary payroll data.

Prepare entries for payroll and payroll taxes; prepare W-2 data. (SO 2, 3)

    Gross earnings:
        Administrative salaries                 $180,000
        Electricians' wages                      370,000
        Total                                   $550,000
    Deductions:
        FICA taxes                              $ 38,000
        Federal income taxes withheld            168,000
        State income taxes withheld (2.6%)        14,300
        United Way contributions payable          27,500
        *Hospital insurance premiums              17,200
        Total                                   $265,000

Niehaus Company's payroll taxes are: FICA 8%, state unemployment 2.5% (due to a stable employment record), and 0.8% federal unemployment. Gross earnings subject to FICA taxes total $475,000, and unemployment taxes total $125,000.

(a) Wages payable $285,000
(b) Payroll tax expense $42,125

                   Gross Earnings    Federal Income Tax Withheld
    Anna Hashmi       $59,000                 $28,500
    Sharon Bishop      26,000                  10,200

PROBLEMS: SET C

Visit the book's website at www.wiley.com/college/weygandt, and choose the Student Companion site, to access Problem Set C.

BROADENING YOUR PERSPECTIVE

FINANCIAL REPORTING AND ANALYSIS

Exploring the Web
BYPD-1 The Internal Revenue Service provides considerable information over the Internet.
The following demonstrates how useful one of its sites is in answering payroll tax questions faced by employers.

Address:, or go to

Steps
1. Go to the site shown above.
2. Choose View Online, Tax Publications.
3. Choose Publication 15, Circular E, Employer's Tax Guide.

Instructions
Answer each of the following questions.
(a) How does the government define employees?
(b) What are the special rules for Social Security and Medicare regarding children who are employed by their parents?
(c) How can an employee obtain a Social Security card if he or she doesn't have one?
(d) Must employees report to their employer tips received from customers? If so, what is the process?
(e) Where should the employer deposit Social Security taxes withheld or contributed?

CRITICAL THINKING

Decision Making Across the Organization
BYPD-2 Summerville Processing Company provides word-processing services for business clients and students in a university community. The work for business clients is fairly steady throughout the year. The work for students peaks significantly in December and May as a result of term papers, research project reports, and dissertations.

Two years ago, the company attempted to meet the peak demand by hiring part-time help. However, this led to numerous errors and considerable customer dissatisfaction. A year ago, the company hired four experienced employees on a permanent basis instead of using part-time help. This proved to be much better in terms of productivity and customer satisfaction. But, it has caused an increase in annual payroll costs and a significant decline in annual net income.

Recently, Valarie Flynn, a sales representative of Davidson Services Inc., has made a proposal to the company. Under her plan, Davidson Services will provide up to four experienced workers at a daily rate of $80 per person for an 8-hour workday.
Davidson workers are not available on an hourly basis. Summerville Processing would have to pay only the daily rate for the workers used.

The owner of Summerville Processing, Nancy Bell, asks you, as the company's accountant, to prepare a report on the expenses that are pertinent to the decision. If the Davidson plan is adopted, Nancy will terminate the employment of two permanent employees and will keep two permanent employees. At the moment, each employee earns an annual income of $22,000. Summerville Processing pays 8% FICA taxes, 0.8% federal unemployment taxes, and 5.4% state unemployment taxes. The unemployment taxes apply to only the first $7,000 of gross earnings. In addition, Summerville Processing pays $40 per month for each employee for medical and dental insurance. Nancy indicates that if the Davidson Services plan is accepted, her needs for workers will be as follows.

    Months                Number    Working Days per Month
    January-March            2               20
    April-May                3               25
    June-October             2               18
    November-December        3               23

Instructions
With the class divided into groups, answer the following.
(a) Prepare a report showing the comparative payroll expense of continuing to employ permanent workers compared to adopting the Davidson Services Inc. plan.
(b) What other factors should Nancy consider before finalizing her decision?

Communication Activity
BYPD-3 Ivan Blanco, president of the Blue Sky Company, has recently hired a number of additional employees. He recognizes that additional payroll taxes will be due as a result of this hiring, and that the company will serve as the collection agent for other taxes.

Instructions
In a memorandum to Ivan Blanco, explain each of the taxes, and identify the taxes that result in payroll tax expense to Blue Sky Company.

Ethics Case
BYPD-4 Johnny Fuller owns and manages Johnny's Restaurant, a 24-hour restaurant near the city's medical complex. Johnny employs 9 full-time employees and 16 part-time employees.
He pays all of the full-time employees by check, the amounts of which are determined by Johnny's public accountant, Mary Lake. Johnny pays all of his part-time employees in cash. He computes their wages and withdraws the cash directly from his cash register. Mary has repeatedly urged Johnny to pay all employees by check. But as Johnny has told his competitor and friend, Steve Hill, who owns the Greasy Diner, "First of all, my part-time employees prefer the cash over a check, and secondly I don't withhold or pay any taxes or workmen's compensation insurance on those wages because they go totally unrecorded and unnoticed."

Instructions
(a) Who are the stakeholders in this situation?
(b) What are the legal and ethical considerations regarding Johnny's handling of his payroll?
(c) Mary Lake is aware of Johnny's payment of the part-time payroll in cash. What are her ethical responsibilities in this case?
(d) What internal control principle is violated in this payroll process?

Answers to Self-Study Questions
1. d  2. d  3. c

Appendix E
Subsidiary Ledgers and Special Journals

STUDY OBJECTIVES
After studying this appendix, you should be able to:
1. Describe the nature and purpose of a subsidiary ledger.
2. Explain how companies use special journals in journalizing.
3. Indicate how companies post a multi-column journal.

SECTION 1  Expanding the Ledger: Subsidiary Ledgers

NATURE AND PURPOSE OF SUBSIDIARY LEDGERS

STUDY OBJECTIVE 1  Describe the nature and purpose of a subsidiary ledger.

Imagine a business that has several thousand charge (credit) customers and shows the transactions with these customers in only one general ledger account, Accounts Receivable. It would be nearly impossible to determine the balance owed by any single customer at a specific point in time. Instead, companies use subsidiary ledgers to keep track of individual account balances. Two common subsidiary ledgers are:
1. The accounts receivable (or customers') subsidiary ledger, which collects transaction data of individual customers.
2. The accounts payable (or creditors') subsidiary ledger, which collects transaction data of individual creditors.
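The relationship just described, many individual subsidiary accounts rolled up into one general ledger control account, can be sketched in a few lines of Python. This is only an illustrative model; the customer names and amounts are made up, not taken from the text.

```python
from collections import defaultdict

# Accounts receivable subsidiary ledger: one balance per customer.
subsidiary = defaultdict(float)

def post_sale(customer, amount):
    """Daily posting: debit the individual customer's account."""
    subsidiary[customer] += amount

def post_collection(customer, amount):
    """Daily posting: credit the individual customer's account."""
    subsidiary[customer] -= amount

post_sale("Customer A", 6000)
post_sale("Customer B", 3000)
post_collection("Customer A", 4000)

# The Accounts Receivable control account in the general ledger must
# equal the composite of the individual subsidiary balances.
control_account = sum(subsidiary.values())
print(control_account)   # 5000.0
```

The invariant at the end is exactly the rule stated above: at the end of an accounting period, each control account balance must equal the composite balance of the accounts in the related subsidiary ledger.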
In each of these subsidiary ledgers, companies usually arrange individual accounts in alphabetical order.

A general ledger account summarizes the detailed data from a subsidiary ledger. For example, the detailed data from the accounts receivable subsidiary ledger are summarized in Accounts Receivable in the general ledger. The general ledger account that summarizes subsidiary ledger data is called a control account. Illustration E-1 presents an overview of the relationship of subsidiary ledgers to the general ledger. Note that Cash and Common Stock in this illustration are not control accounts because there are no subsidiary ledger accounts related to these accounts.

At the end of an accounting period, each general ledger control account balance must equal the composite balance of the individual accounts in the related subsidiary ledger. For example, the balance in Accounts Payable in Illustration E-1 must equal the total of the subsidiary balances of Creditors X + Y + Z.

Illustration E-1  Relationship of general ledger and subsidiary ledgers

    General ledger control accounts: Accounts Receivable, Accounts Payable
    Other general ledger accounts: Cash, Common Stock
    Subsidiary ledgers: Customers A, B, C (accounts receivable); Creditors X, Y, Z (accounts payable)

Subsidiary Ledger Example

Illustration E-2 provides an example of a control account and subsidiary ledger for Pujols Enterprises. (Due to space considerations, the explanation column in these accounts is not shown in this and subsequent illustrations.) Illustration E-2 is based on the transactions listed in Illustration E-3.

Illustration E-3  Sales and collection transactions

    Credit Sales                          Collections on Account
    Jan. 10  Aaron Co.      $ 6,000       Jan. 19  Aaron Co.      $ 4,000
         12  Branden Inc.     3,000            21  Branden Inc.     3,000
         20  Caron Co.        3,000            29  Caron Co.        1,000
                            $12,000                               $ 8,000

Illustration E-2  Relationship between general and subsidiary ledgers

    Accounts Receivable Subsidiary Ledger
    Aaron Co.:     Jan. 10 Dr. 6,000; Jan. 19 Cr. 4,000; balance 2,000
    Branden Inc.:  Jan. 12 Dr. 3,000; Jan. 21 Cr. 3,000; balance -0-
    Caron Co.:     Jan. 20 Dr. 3,000; Jan. 29 Cr. 1,000; balance 2,000

    General Ledger
    Accounts Receivable  No. 112:  Jan. 31 Dr. 12,000; Jan. 31 Cr. 8,000; balance 4,000

    The subsidiary ledger is separate from the general ledger. Accounts Receivable is a control account.

Pujols can reconcile the total debits ($12,000) and credits ($8,000) in Accounts Receivable in the general ledger to the detailed debits and credits in the subsidiary accounts. Also, the balance of $4,000 in the control account agrees with the total of the balances in the individual accounts (Aaron Co. $2,000 + Branden Inc. $0 + Caron Co. $2,000) in the subsidiary ledger.

As Illustration E-2 shows, companies make monthly postings to the control accounts in the general ledger. This practice allows them to prepare monthly financial statements. Companies post to the individual accounts in the subsidiary ledger daily. Daily posting ensures that account information is current. This enables the company to monitor credit limits, bill customers, and answer inquiries from customers about their account balances.

Advantages of Subsidiary Ledgers

Subsidiary ledgers have several advantages:
1. They show in a single account transactions affecting one customer or one creditor, thus providing up-to-date information on specific account balances.
2. They free the general ledger of excessive details. As a result, a trial balance of the general ledger does not contain vast numbers of individual account balances.
3. They help locate errors in individual accounts by reducing the number of accounts in one ledger and by using control accounts.
4. They make possible a division of labor in posting. One employee can post to the general ledger while someone else posts to the subsidiary ledgers.

Before You Go On...

REVIEW IT
1. What is a subsidiary ledger, and what purpose does it serve?
2. What is a control account, and what purpose does it serve?
3. Name two general ledger accounts that may act as control accounts for a subsidiary ledger. Can you think of a third control account?

DO IT
Presented below is information related to Sims Company for its first month of operations. Determine the balances that appear in the accounts payable subsidiary ledger. What Accounts Payable balance appears in the general ledger at the end of January?

    Credit Purchases                      Cash Paid
    Jan.  5  Devon Co.     $11,000        Jan.  9  Devon Co.     $7,000
         11  Shelby Co.      7,000             14  Shelby Co.     2,000
         22  Taylor Co.     14,000             27  Taylor Co.     9,000

Action Plan
- Subtract cash paid from credit purchases to determine the balances in the accounts payable subsidiary ledger.
- Sum the individual balances to determine the Accounts Payable balance.

Solution
Subsidiary ledger balances: Devon Co. $4,000 ($11,000 - $7,000); Shelby Co. $5,000 ($7,000 - $2,000); Taylor Co. $5,000 ($14,000 - $9,000). General ledger Accounts Payable balance: $14,000 ($4,000 + $5,000 + $5,000).

Related exercise material: BEE-4, BEE-5, EE-1, EE-2, EE-4, and EE-5.

SECTION 2  Expanding the Journal: Special Journals

STUDY OBJECTIVE 2  Explain how companies use special journals in journalizing.

So far you have learned to journalize transactions in a two-column general journal and post each entry to the general ledger. This procedure is satisfactory in only the very smallest companies. To expedite journalizing and posting, most companies use special journals in addition to the general journal. Companies use special journals to record similar types of transactions.
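Before turning to special journals, the control-account arithmetic from the Sims Company DO IT above can be checked with a short script. The figures come straight from the exercise; only the variable names are ours.

```python
# Sims Company, first month of operations (amounts from the DO IT above).
credit_purchases = {"Devon Co.": 11_000, "Shelby Co.": 7_000, "Taylor Co.": 14_000}
cash_paid        = {"Devon Co.": 7_000,  "Shelby Co.": 2_000, "Taylor Co.": 9_000}

# Subsidiary ledger balance per creditor: credit purchases less cash paid.
balances = {name: credit_purchases[name] - cash_paid[name] for name in credit_purchases}

# Accounts Payable control balance: the sum of the subsidiary balances.
accounts_payable_control = sum(balances.values())

print(balances)                  # {'Devon Co.': 4000, 'Shelby Co.': 5000, 'Taylor Co.': 5000}
print(accounts_payable_control)  # 14000
```

Both results agree with the printed solution: $4,000, $5,000, and $5,000 in the subsidiary ledger, and $14,000 in the general ledger control account.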
Examples are all sales of merchandise on account, or all cash receipts. The types of transactions that occur frequently in a company determine what special journals the company uses. Most merchandising enterprises record daily transactions using the journals shown in Illustration E-4.

Illustration E-4  Use of special journals and the general journal

    Sales Journal: used for all sales of merchandise on account.
    Cash Receipts Journal: used for all cash received (including cash sales).
    Purchases Journal: used for all purchases of merchandise on account.
    Cash Payments Journal: used for all cash paid (including cash purchases).
    General Journal: used for transactions that cannot be entered in a special journal, including correcting, adjusting, and closing entries.

If a transaction cannot be recorded in a special journal, the company records it in the general journal. For example, if a company had special journals for only the four types of transactions listed above, it would record purchase returns and allowances in the general journal. Similarly, correcting, adjusting, and closing entries are recorded in the general journal. In some situations, companies might use special journals other than those listed above. For example, when sales returns and allowances are frequent, a company might use a special journal to record these transactions.

Special journals permit greater division of labor because several people can record entries in different journals at the same time. For example, one employee may journalize all cash receipts, and another may journalize all credit sales. Also, the use of special journals reduces the time needed to complete the posting process. With special journals, companies may post some accounts monthly, instead of daily, as we will illustrate later in the chapter. On the following pages, we discuss the four special journals shown in Illustration E-4.
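The routing rule of Illustration E-4 amounts to a small lookup: frequent transaction types get a special journal, and everything else falls through to the general journal. The sketch below uses our own labels for the transaction types; the classifications themselves come from the text.

```python
# Which journal records which transaction type (per Illustration E-4).
SPECIAL_JOURNALS = {
    "credit sale of merchandise": "sales journal",
    "cash receipt": "cash receipts journal",
    "credit purchase of merchandise": "purchases journal",
    "cash payment": "cash payments journal",
}

def journal_for(transaction_type):
    # Anything without a special journal (purchase returns, correcting,
    # adjusting, and closing entries) goes to the general journal.
    return SPECIAL_JOURNALS.get(transaction_type, "general journal")

print(journal_for("credit sale of merchandise"))  # sales journal
print(journal_for("adjusting entry"))             # general journal
```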
SALES JOURNAL

In the sales journal, companies record sales of merchandise on account. Cash sales of merchandise go in the cash receipts journal. Credit sales of assets other than merchandise go in the general journal.

Journalizing Credit Sales

To demonstrate use of a sales journal, we will use data for Karns Wholesale Supply, which uses a perpetual inventory system. Under this system, each entry in the sales journal results in one entry at selling price and another entry at cost. The entry at selling price is a debit to Accounts Receivable (a control account) and a credit of equal amount to Sales. The entry at cost is a debit to Cost of Goods Sold and a credit of equal amount to Merchandise Inventory (a control account). Using a sales journal with two amount columns, the company can show on only one line a sales transaction at both selling price and cost. Illustration E-5 shows this two-column sales journal of Karns Wholesale Supply, using assumed credit sales transactions (for sales invoices 101-107).

HELPFUL HINT  Postings are also made daily to individual ledger accounts in the inventory subsidiary ledger to maintain a perpetual inventory.

Illustration E-5  Journalizing the sales journal, perpetual inventory system

    Invoice    Accts. Receivable Dr.    Cost of Goods Sold Dr.
      No.          Sales Cr.            Merchandise Inventory Cr.
      101           10,600                     6,360
      102           11,350                     7,370
      103            7,800                     5,070
      104            9,300                     6,510
      105           15,400                    10,780
      106           21,210                    15,900
      107           14,570                    10,200
                    90,230                    62,190

Note several points: Unlike the general journal, an explanation is not required for each entry in a special journal. Also, use of prenumbered invoices ensures that all invoices are journalized. Finally, the reference (Ref.) column is not used in journalizing. It is used in posting the sales journal, as explained next.
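The two-amount-column idea above, one line per sale carrying both selling price and cost, can be modeled directly with the Illustration E-5 figures. Each line implies two equal-amount entries: Accounts Receivable Dr. / Sales Cr. at selling price, and Cost of Goods Sold Dr. / Merchandise Inventory Cr. at cost.

```python
# Karns Wholesale Supply sales journal lines: (invoice no., selling price, cost).
invoices = [
    (101, 10_600, 6_360),
    (102, 11_350, 7_370),
    (103,  7_800, 5_070),
    (104,  9_300, 6_510),
    (105, 15_400, 10_780),
    (106, 21_210, 15_900),
    (107, 14_570, 10_200),
]

# Monthly column totals, posted to the general ledger at month-end:
ar_dr = sales_cr = sum(price for _, price, _ in invoices)     # selling-price column
cogs_dr = inventory_cr = sum(cost for _, _, cost in invoices) # cost column

print(ar_dr)    # 90230
print(cogs_dr)  # 62190
```

The two totals reproduce the $90,230 and $62,190 column totals posted at the end of the month.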
Posting the Sales Journal

Companies make daily postings from the sales journal to the individual accounts receivable in the subsidiary ledger. Posting to the general ledger is done monthly. Illustration E-6 shows both the daily and monthly postings. A check mark is inserted in the reference posting column to indicate that the daily posting to the customer's account has been made. If the subsidiary ledger accounts were numbered, the account number would be entered in place of the check mark. At the end of the month, Karns posts the column totals of the sales journal to the general ledger.

Illustration E-6  Posting the sales journal

    Accounts Receivable Subsidiary Ledger (individual amounts posted daily, reference S1)
    Abbot Sisters:  May  3 Dr. 10,600; May 21 Dr. 15,400; balance 26,000
    Babson Co.:     May  7 Dr. 11,350; May 27 Dr. 14,570; balance 25,920
    Carson Bros.:   May 14 Dr.  7,800; balance 7,800
    Deli Co.:       May 19 Dr.  9,300; May 24 Dr. 21,210; balance 30,510

    General Ledger (column totals posted at the end of the accounting period)
    Accounts Receivable   No. 112:  May 31 Dr. 90,230; balance 90,230
    Merchandise Inventory No. 120:  May 31 Cr. 62,190; balance (62,190)*
    Sales                 No. 401:  May 31 Cr. 90,230; balance 90,230
    Cost of Goods Sold    No. 505:  May 31 Dr. 62,190; balance 62,190

    The subsidiary ledger is separate from the general ledger. Accounts Receivable is a control account.

    *The normal balance for Merchandise Inventory is a debit. But, because of the sequence in which we have posted the special journals, with the sales journals first, the credits to Merchandise Inventory are posted before the debits. This posting sequence explains the credit balance in Merchandise Inventory, which exists only until the other journals are posted.

Here, the column totals are as follows: From the selling-price column, a debit of $90,230 to Accounts Receivable (account No. 112), and a credit of $90,230 to Sales (account No. 401). From the cost column, a debit of $62,190 to Cost of Goods Sold (account No. 505), and a credit of $62,190 to Merchandise Inventory (account No. 120). Karns inserts the account numbers below the column totals to indicate that the postings have been made. In both the general ledger and subsidiary ledger accounts, the reference S1 indicates that the posting came from page 1 of the sales journal.

Proving the Ledgers

The next step is to prove the ledgers. To do so, Karns must determine two things: (1) The total of the general ledger debit balances must equal the total of the general ledger credit balances. (2) The sum of the subsidiary ledger balances must equal the balance in the control account. Illustration E-7 shows the proof of the postings from the sales journal to the general and subsidiary ledger.

Illustration E-7  Proving the equality of the postings from the sales journal

    Postings to General Ledger              Postings to the Accounts Receivable
                                            Subsidiary Ledger
    Debits
      Accounts Receivable    $ 90,230       Abbot Sisters    $26,000
      Cost of Goods Sold       62,190       Babson Co.        25,920
                             $152,420       Carson Bros.       7,800
    Credits                                 Deli Co.          30,510
      Merchandise Inventory  $ 62,190                         $90,230
      Sales                    90,230
                             $152,420

Advantages of the Sales Journal

Use of a special journal to record sales on account has several advantages.
First, the one-line entry for each sales transaction saves time. In the sales journal, it is not necessary to write out the four account titles for each transaction. Second, only totals, rather than individual entries, are posted to the general ledger. This saves posting time and reduces the possibilities of posting errors. Finally, a division of labor results, because one individual can take responsibility for the sales journal.

CASH RECEIPTS JOURNAL

In the cash receipts journal, companies record all receipts of cash. The most common types of cash receipts are cash sales of merchandise and collections of accounts receivable. Many other possibilities exist, such as receipt of money from bank loans and cash proceeds from disposal of equipment. A one- or two-column cash receipts journal would not have space enough for all possible cash receipt transactions. Therefore, companies use a multiple-column cash receipts journal.

Generally, a cash receipts journal includes the following columns: debit columns for Cash and Sales Discounts, and credit columns for Accounts Receivable, Sales, and Other Accounts. Companies use the Other Accounts category when the cash receipt does not involve a cash sale or a collection of accounts receivable. Under a perpetual inventory system, each sales entry also is accompanied by an entry that debits Cost of Goods Sold and credits Merchandise Inventory for the cost of the merchandise sold. Illustration E-8 (page E8) shows a six-column cash receipts journal.

Illustration E-8  Journalizing and posting the cash receipts journal

    Date    Account          Cash     Sales      Accounts    Sales    Other     COGS Dr.
    2008    Credited         Dr.      Discounts  Receivable  Cr.      Accounts  Mdse. Inv.
                                      Dr.        Cr.                  Cr.       Cr.
    May  1  Common Stock      5,000                                    5,000
         7                    1,900                           1,900              1,240
        10  Abbot Sisters    10,388      212       10,600
        12                    2,600                           2,600              1,690
        17  Babson Co.       11,123      227       11,350
        22  Notes Payable     6,000                                    6,000
        23  Carson Bros.      7,644      156        7,800
        28  Deli Co.          9,114      186        9,300
                             53,769      781       39,050     4,500   11,000     2,930
                              (101)     (414)       (112)     (401)     (x)   (505)/(120)

    The company posts individual amounts to the subsidiary ledger daily; at the end of the accounting period, it posts the column totals to the general ledger (reference CR1).

    Accounts Receivable Subsidiary Ledger
    Abbot Sisters:  May  3 S1 Dr. 10,600; May 10 CR1 Cr. 10,600; May 21 S1 Dr. 15,400; balance 15,400
    Babson Co.:     May  7 S1 Dr. 11,350; May 17 CR1 Cr. 11,350; May 27 S1 Dr. 14,570; balance 14,570
    Carson Bros.:   May 14 S1 Dr.  7,800; May 23 CR1 Cr.  7,800; balance -0-
    Deli Co.:       May 19 S1 Dr.  9,300; May 24 S1 Dr. 21,210; May 28 CR1 Cr. 9,300; balance 21,210

    General Ledger
    Cash                  No. 101:  May 31 CR1 Dr. 53,769; balance 53,769
    Accounts Receivable   No. 112:  May 31 S1 Dr. 90,230; May 31 CR1 Cr. 39,050; balance 51,180
    Merchandise Inventory No. 120:  May 31 S1 Cr. 62,190; May 31 CR1 Cr. 2,930; balance (65,120)
    Notes Payable         No. 200:  May 22 CR1 Cr. 6,000; balance 6,000
    Common Stock          No. 311:  May  1 CR1 Cr. 5,000; balance 5,000
    Sales                 No. 401:  May 31 S1 Cr. 90,230; May 31 CR1 Cr. 4,500; balance 94,730
    Sales Discounts       No. 414:  May 31 CR1 Dr. 781; balance 781
    Cost of Goods Sold    No. 505:  May 31 S1 Dr. 62,190; May 31 CR1 Dr. 2,930; balance 65,120

    The subsidiary ledger is separate from the general ledger. Accounts Receivable is a control account.

Companies may use additional credit columns if these columns significantly reduce postings to a specific account.
For example, a loan company, such as Household International, receives thousands of cash collections from customers. Using separate credit columns for Loans Receivable and Interest Revenue, rather than the Other Accounts credit column, would reduce postings.

Journalizing Cash Receipts Transactions

To illustrate the journalizing of cash receipts transactions, we will continue with the May transactions of Karns Wholesale Supply. Collections from customers relate to the entries recorded in the sales journal in Illustration E-5. The entries in the cash receipts journal are based on the following cash receipts.

May  1  Stockholders invested $5,000 in the business.
     7  Cash sales of merchandise total $1,900 (cost, $1,240).
    10  Received a check for $10,388 from Abbot Sisters in payment of invoice No. 101 for $10,600 less a 2% discount.
    12  Cash sales of merchandise total $2,600 (cost, $1,690).
    17  Received a check for $11,123 from Babson Co. in payment of invoice No. 102 for $11,350 less a 2% discount.
    22  Received cash by signing a note for $6,000.
    23  Received a check for $7,644 from Carson Bros. in full for invoice No. 103 for $7,800 less a 2% discount.
    28  Received a check for $9,114 from Deli Co. in full for invoice No. 104 for $9,300 less a 2% discount.

Further information about the columns in the cash receipts journal is listed below.

Debit Columns:
1. Cash. Karns enters in this column the amount of cash actually received in each transaction. The column total indicates the total cash receipts for the month.
2. Sales Discounts. Karns includes a Sales Discounts column in its cash receipts journal. By doing so, it does not need to enter sales discount items in the general journal. As a result, the cash receipts journal shows on one line the collection of an account receivable within the discount period.

Credit Columns:
3. Accounts Receivable. Karns uses the Accounts Receivable column to record cash collections on account.
The amount entered here is the amount to be credited to the individual customer's account.
4. Sales. The Sales column records all cash sales of merchandise. Cash sales of other assets (plant assets, for example) are not reported in this column.
5. Other Accounts. Karns uses the Other Accounts column whenever the credit is other than to Accounts Receivable or Sales. For example, in the first entry, Karns enters $5,000 as a credit to Common Stock. This column is often referred to as the sundry accounts column.

Debit and Credit Column:
6. Cost of Goods Sold and Merchandise Inventory. This column records debits to Cost of Goods Sold and credits to Merchandise Inventory.

In a multi-column journal, generally only one line is needed for each entry. Debit and credit amounts for each line must be equal. When Karns journalizes the collection from Abbot Sisters on May 10, for example, three amounts are indicated. Note also that the Account Credited column identifies both general ledger and subsidiary ledger account titles. General ledger accounts are illustrated in the May 1 and May 22 entries. A subsidiary account is illustrated in the May 10 entry for the collection from Abbot Sisters.

HELPFUL HINT  When is an account title entered in the Account Credited column of the cash receipts journal? Answer: A subsidiary ledger account is entered when the entry involves a collection of accounts receivable. A general ledger account is entered when the account is not shown in a special column (and an amount must be entered in the Other Accounts column). Otherwise, no account is shown in the Account Credited column.

When Karns has finished journalizing a multi-column journal, it totals the amount columns and compares the totals to prove the equality of debits and credits. Illustration E-9 shows the proof of the equality of Karns's cash receipts journal.
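That footing and cross-footing check can be expressed directly in code. The column totals below are the ones from Karns's cash receipts journal for May.

```python
# Foot each money column, then cross-foot: total debits must equal total credits.
debit_columns = {
    "Cash": 53_769,
    "Sales Discounts": 781,
    "Cost of Goods Sold": 2_930,
}
credit_columns = {
    "Accounts Receivable": 39_050,
    "Sales": 4_500,
    "Other Accounts": 11_000,
    "Merchandise Inventory": 2_930,
}

total_debits = sum(debit_columns.values())
total_credits = sum(credit_columns.values())
print(total_debits, total_credits)   # 57480 57480
assert total_debits == total_credits
```

An unequal result here would signal a journalizing or footing error before anything is posted to the ledgers.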
Illustration E-9 Proving the equality of the cash receipts journal

  Debits                            Credits
  Cash                 $53,769      Accounts Receivable     $39,050
  Sales Discounts          781      Sales                     4,500
  Cost of Goods Sold     2,930      Other Accounts           11,000
                       -------      Merchandise Inventory     2,930
                       $57,480                              -------
                                                            $57,480

Totaling the columns of a journal and proving the equality of the totals is called footing and cross-footing a journal.

Posting the Cash Receipts Journal

STUDY OBJECTIVE 3: Indicate how companies post a multi-column journal.

Posting a multi-column journal involves the following steps.

1. At the end of the month, the company posts all column totals, except for the Other Accounts total, to the account title(s) specified in the column heading (such as Cash or Accounts Receivable). The company then enters account numbers below the column totals to show that they have been posted. For example, Karns has posted cash to account No. 101, accounts receivable to account No. 112, merchandise inventory to account No. 120, sales to account No. 401, sales discounts to account No. 414, and cost of goods sold to account No. 505.

2. The company separately posts the individual amounts comprising the Other Accounts total to the general ledger accounts specified in the Account Credited column. See, for example, the credit posting to Common Stock. The total amount of this column has not been posted.

The symbol CR, used in both the subsidiary and general ledgers, identifies postings from the cash receipts journal.

Proving the Ledgers

After posting of the cash receipts journal is completed, Karns proves the ledgers. As shown in Illustration E-10, the general ledger totals agree. Also, the sum of the subsidiary ledger balances equals the control account balance.

Illustration E-10 Proving the ledgers after posting the sales and the cash receipts journals

  Accounts Receivable Subsidiary Ledger     General Ledger
  Abbot Sisters          $15,400            Debits
  Babson Co.              14,570            Cash                    $ 53,769
  Deli Co.                21,210            Accounts Receivable       51,180
                         -------            Sales Discounts              781
                         $51,180            Cost of Goods Sold        65,120
                                                                    --------
                                                                    $170,850
                                            Credits
                                            Notes Payable           $  6,000
                                            Common Stock               5,000
                                            Sales                     94,730
                                            Merchandise Inventory     65,120
                                                                    --------
                                                                    $170,850

PURCHASES JOURNAL

In the purchases journal, companies record all purchases of merchandise on account. Each entry in this journal results in a debit to Merchandise Inventory and a credit to Accounts Payable. Illustration E-11 (page E12) shows the purchases journal for Karns Wholesale Supply.

When using a one-column purchases journal (as in Illustration E-11), a company cannot journalize other types of purchases on account or cash purchases in it. For example, using the purchases journal shown in Illustration E-11, Karns would have to record credit purchases of equipment or supplies in the general journal. Likewise, all cash purchases would be entered in the cash payments journal. As illustrated later, companies that make numerous credit purchases for items other than merchandise often expand the purchases journal to a multi-column format. (See Illustration E-14 on page E13.)

Journalizing Credit Purchases of Merchandise

The journalizing procedure is similar to that for a sales journal. Companies make entries in the purchases journal from purchase invoices. In contrast to the sales journal, the purchases journal may not have an invoice number column, because invoices received from different suppliers will not be in numerical sequence. To ensure that they record all purchase invoices, some companies consecutively number each invoice upon receipt and then use an internal document number column in the purchases journal. The entries for Karns Wholesale Supply are based on the assumed credit purchases listed in Illustration E-12 (page E12).

Posting the Purchases Journal

The procedures for posting the purchases journal are similar to those for the sales journal.
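Both proofs above are purely mechanical, so they can be sketched in a few lines of code. The figures come from Illustrations E-9 and E-10; the dictionary layout itself is just an illustrative assumption, not anything the text prescribes.

```python
# Cross-footing a journal and proving a control account against its
# subsidiary ledger, using the Karns figures from Illustrations E-9 and E-10.

# Illustration E-9: column totals of the cash receipts journal.
debit_columns = {"Cash": 53_769, "Sales Discounts": 781, "Cost of Goods Sold": 2_930}
credit_columns = {
    "Accounts Receivable": 39_050,
    "Sales": 4_500,
    "Other Accounts": 11_000,
    "Merchandise Inventory": 2_930,
}

# Footing totals each column; cross-footing compares total debits to total credits.
assert sum(debit_columns.values()) == sum(credit_columns.values()) == 57_480

# Illustration E-10: the Accounts Receivable control account must equal
# the sum of the customer balances in the subsidiary ledger.
subsidiary = {"Abbot Sisters": 15_400, "Babson Co.": 14_570, "Deli Co.": 21_210}
control_balance = 51_180
assert sum(subsidiary.values()) == control_balance
print("journal cross-foots and the control account agrees")
```

If either assertion fails, the bookkeeper knows a column was mis-added or a posting was missed before any statements are prepared.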
In this case, Karns makes daily postings to the accounts payable ledger; it makes monthly postings to Merchandise Inventory and Accounts Payable in the general ledger. In both ledgers, Karns uses P1 in the reference column to show that the postings are from page 1 of the purchases journal. Proof of the equality of the postings from the purchases journal to both ledgers is shown in Illustration E-13 (page E13).

HELPFUL HINT Postings to subsidiary ledger accounts are done daily because it is often necessary to know a current balance for the subsidiary accounts.

Illustration E-11 Journalizing and posting the purchases journal

  PURCHASES JOURNAL                                                  P1
  Date      Account Credited            Terms        Ref.   Merchandise Inventory Dr.
  2008                                                      Accounts Payable Cr.
  May  6    Jasper Manufacturing Inc.   2/10, n/30             11,000
       10   Eaton and Howe Inc.         3/10, n/30              7,200
       14   Fabor and Son               1/10, n/30              6,900
       19   Jasper Manufacturing Inc.   2/10, n/30             17,500
       26   Fabor and Son               1/10, n/30              8,700
       29   Eaton and Howe Inc.         3/10, n/30             12,600
                                                               ------
                                                               63,900
                                                            (120)/(201)

The company posts individual amounts to the subsidiary ledger daily. At the end of the accounting period, the company posts totals to the general ledger. After posting, the accounts payable subsidiary ledger shows: Eaton and Howe Inc. (May 10 P1 7,200; May 29 P1 12,600; balance 19,800); Fabor and Son (May 14 P1 6,900; May 26 P1 8,700; balance 15,600); and Jasper Manufacturing Inc. (May 6 P1 11,000; May 19 P1 17,500; balance 28,500). In the general ledger, Merchandise Inventory (No. 120) shows May 31 postings of S1 62,190 Cr., CR1 2,930 Cr., and P1 63,900 Dr. (balances 62,190, 65,120, and 1,220), and Accounts Payable (No. 201) shows the May 31 P1 credit of 63,900 (balance 63,900). The subsidiary ledger is separate from the general ledger. Accounts Payable is a control account.

Illustration E-12 Credit purchases transactions

  Date   Supplier                      Amount
  5/6    Jasper Manufacturing Inc.    $11,000
  5/10   Eaton and Howe Inc.            7,200
  5/14   Fabor and Son                  6,900
  5/19   Jasper Manufacturing Inc.     17,500
  5/26   Fabor and Son                  8,700
  5/29   Eaton and Howe Inc.           12,600

Illustration E-13 Proving the equality of the purchases journal

  Postings to General Ledger                Credit Postings to Accounts Payable Ledger
  Merchandise Inventory (debit)  $63,900    Eaton and Howe Inc.          $19,800
  Accounts Payable (credit)      $63,900    Fabor and Son                 15,600
                                            Jasper Manufacturing Inc.     28,500
                                                                         -------
                                                                         $63,900

Expanding the Purchases Journal

As noted earlier, some companies expand the purchases journal to include all types of purchases on account. Instead of one column for merchandise inventory and accounts payable, they use a multiple-column format. This format usually includes a credit column for Accounts Payable and debit columns for purchases of Merchandise Inventory, Office Supplies, Store Supplies, and Other Accounts. Illustration E-14 shows a multi-column purchases journal for Hanover Co. The posting procedures are similar to those shown earlier for posting the cash receipts journal.

HELPFUL HINT A single-column purchases journal needs only to be footed to prove the equality of debits and credits.

Illustration E-14 Multi-column purchases journal (Hanover Co., June 2008: columns for Accounts Payable Cr., Merchandise Inventory Dr., Office Supplies Dr., Store Supplies Dr., and Other Accounts Dr.; entries include Signe Audio, Wight Co., Orange Tree Co., Equipment, and Sue's Business Forms.)
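The posting rule for a single-column purchases journal (individual amounts daily to the subsidiary ledger, the column total monthly to the control account) can be sketched as follows. The purchase data are taken from Illustration E-12; the list-and-dictionary structure is an illustrative assumption only.

```python
from collections import defaultdict

# Credit purchases from Illustration E-12.
purchases = [
    ("Jasper Manufacturing Inc.", 11_000),
    ("Eaton and Howe Inc.", 7_200),
    ("Fabor and Son", 6_900),
    ("Jasper Manufacturing Inc.", 17_500),
    ("Fabor and Son", 8_700),
    ("Eaton and Howe Inc.", 12_600),
]

# Daily: post each invoice to the creditor's subsidiary account.
subsidiary = defaultdict(int)
for creditor, amount in purchases:
    subsidiary[creditor] += amount

# Month-end: post only the column total to the two general ledger accounts.
column_total = sum(amount for _, amount in purchases)
merchandise_inventory_dr = column_total   # account No. 120
accounts_payable_cr = column_total        # account No. 201

assert column_total == 63_900                        # agrees with Illustration E-13
assert sum(subsidiary.values()) == accounts_payable_cr
assert subsidiary["Jasper Manufacturing Inc."] == 28_500
```

The final assertion mirrors Illustration E-13: the creditor balances built up by daily postings must sum to the single month-end credit posted to the control account.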
CASH PAYMENTS JOURNAL

In a cash payments (cash disbursements) journal, companies record all disbursements of cash. Entries are made from prenumbered checks. Because companies make cash payments for various purposes, the cash payments journal has multiple columns. Illustration E-15 (page E14) shows a four-column journal.

Journalizing Cash Payments Transactions

The procedures for journalizing transactions in this journal are similar to those for the cash receipts journal. Karns records each transaction on one line, and for each line there must be equal debit and credit amounts. The entries in the cash payments journal in Illustration E-15 are based on the following transactions for Karns Wholesale Supply.

May  1  Issued check No. 101 for $1,200 for the annual premium on a fire insurance policy.
     3  Issued check No. 102 for $100 in payment of freight when terms were FOB shipping point.
     8  Issued check No. 103 for $4,400 for the purchase of merchandise.
    10  Sent check No. 104 for $10,780 to Jasper Manufacturing Inc. in payment of May 6 invoice for $11,000 less a 2% discount.
    19  Mailed check No. 105 for $6,984 to Eaton and Howe Inc. in payment of May 10 invoice for $7,200 less a 3% discount.
    23  Sent check No. 106 for $6,831 to Fabor and Son in payment of May 14 invoice for $6,900 less a 1% discount.
    28  Sent check No. 107 for $17,150 to Jasper Manufacturing Inc. in payment of May 19 invoice for $17,500 less a 2% discount.
    30  Issued check No. 108 for $500 to stockholders as a dividend.

Illustration E-15 Journalizing and posting the cash payments journal

  CASH PAYMENTS JOURNAL                                                          CP1
  Date    Ck.   Account Debited      Ref.   Other      Accounts   Merchandise   Cash
  2008    No.                               Accounts   Payable    Inventory     Cr.
                                            Dr.        Dr.        Cr.
  May  1  101   Prepaid Insurance    130     1,200                               1,200
       3  102   Mdse. Inventory      120       100                                 100
       8  103   Mdse. Inventory      120     4,400                               4,400
      10  104   Jasper Manuf. Inc.                      11,000        220       10,780
      19  105   Eaton & Howe Inc.                        7,200        216        6,984
      23  106   Fabor and Son                            6,900         69        6,831
      28  107   Jasper Manuf. Inc.                      17,500        350       17,150
      30  108   Dividends            332       500                                 500
                                             -----      ------        ---       ------
                                             6,200      42,600        855       47,945
                                              (x)        (201)       (120)       (101)

The company posts individual amounts to the subsidiary ledger daily. At the end of the accounting period, the company posts totals to the general ledger. After these postings, the accounts payable subsidiary ledger shows Eaton and Howe Inc. with a balance of 12,600, Fabor and Son with 8,700, and Jasper Manufacturing Inc. with a zero balance. In the general ledger, Cash (No. 101) shows CR1 53,769 Dr. and CP1 47,945 Cr. (balance 5,824); Merchandise Inventory (No. 120) shows CP1 100 Dr., CP1 4,400 Dr., S1 62,190 Cr., CR1 2,930 Cr., P1 63,900 Dr., and CP1 855 Cr. (balances 100, 4,500, 57,690, 60,620, 3,280, and 2,425); Prepaid Insurance (No. 130) shows CP1 1,200 Dr.; Accounts Payable (No. 201) shows P1 63,900 Cr. and CP1 42,600 Dr. (balance 21,300); and Dividends (No. 332) shows CP1 500 Dr. The subsidiary ledger is separate from the general ledger. Accounts Payable is a control account.

Note that whenever Karns enters an amount in the Other Accounts column, it must identify a specific general ledger account in the Account Debited column. The entries for checks No. 101, 102, 103, and 108 illustrate this situation.
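Each of the four discounted checks above pays the invoice amount less the stated percentage. A small helper function (hypothetical, not part of the text) reproduces the arithmetic:

```python
def cash_paid(invoice: int, discount_pct: int) -> int:
    """Cash required to settle an invoice within its discount period."""
    return invoice - invoice * discount_pct // 100

# Karns's discounted payments from the May transactions above.
assert cash_paid(11_000, 2) == 10_780   # check No. 104, terms 2/10, n/30
assert cash_paid(7_200, 3) == 6_984     # check No. 105, terms 3/10, n/30
assert cash_paid(6_900, 1) == 6_831     # check No. 106, terms 1/10, n/30
assert cash_paid(17_500, 2) == 17_150   # check No. 107, terms 2/10, n/30
```

The difference between the invoice amount and the cash paid is the credit to Merchandise Inventory in Illustration E-15 (220, 216, 69, and 350 respectively).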
Similarly, Karns must identify a subsidiary account in the Account Debited column whenever it enters an amount in the Accounts Payable column. See, for example, the entry for check No. 104. After Karns journalizes the cash payments journal, it totals the columns. The totals are then balanced to prove the equality of debits and credits.

Posting the Cash Payments Journal

The procedures for posting the cash payments journal are similar to those for the cash receipts journal. Karns posts the amounts recorded in the Accounts Payable column individually to the subsidiary ledger and in total to the control account. It posts Merchandise Inventory and Cash only in total at the end of the month. Transactions in the Other Accounts column are posted individually to the appropriate account(s) affected. The company does not post totals for the Other Accounts column.

Illustration E-15 shows the posting of the cash payments journal. Note that Karns uses the symbol CP as the posting reference. After postings are completed, the company proves the equality of the debit and credit balances in the general ledger. In addition, the control account balances should agree with the subsidiary ledger total balance. Illustration E-16 shows the agreement of these balances.

Illustration E-16 Proving the ledgers after postings from the sales, cash receipts, purchases, and cash payments journals

  General Ledger                            Accounts Payable Subsidiary Ledger
  Debits                                    Eaton and Howe Inc.    $12,600
  Cash                       $  5,824       Fabor and Son            8,700
  Accounts Receivable          51,180                              -------
  Merchandise Inventory         2,425                              $21,300
  Prepaid Insurance             1,200
  Dividends                       500
  Sales Discounts                 781
  Cost of Goods Sold           65,120
                             --------
                             $127,030
  Credits
  Notes Payable              $  6,000
  Accounts Payable             21,300
  Common Stock                  5,000
  Sales                        94,730
                             --------
                             $127,030

EFFECTS OF SPECIAL JOURNALS ON THE GENERAL JOURNAL

Special journals for sales, purchases, and cash substantially reduce the number of entries that companies make in the general journal.
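Proving the ledgers after all four special journals is the same pair of checks again, now over every general ledger balance. A sketch using the Illustration E-16 figures (the account names and amounts come from the illustration; the dictionary structure is an assumption):

```python
# Illustration E-16: prove the general ledger and the Accounts Payable
# control account after the sales, cash receipts, purchases, and cash
# payments journals have all been posted.
debits = {
    "Cash": 5_824, "Accounts Receivable": 51_180,
    "Merchandise Inventory": 2_425, "Prepaid Insurance": 1_200,
    "Dividends": 500, "Sales Discounts": 781, "Cost of Goods Sold": 65_120,
}
credits = {
    "Notes Payable": 6_000, "Accounts Payable": 21_300,
    "Common Stock": 5_000, "Sales": 94_730,
}
payable_subsidiary = {"Eaton and Howe Inc.": 12_600, "Fabor and Son": 8_700}

# General ledger debit balances equal credit balances...
assert sum(debits.values()) == sum(credits.values()) == 127_030
# ...and the control account agrees with its subsidiary ledger.
assert sum(payable_subsidiary.values()) == credits["Accounts Payable"]
```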
Only transactions that cannot be entered in a special journal are recorded in the general journal. For example, a company may use the general journal to record such transactions as granting of credit to a customer for a sales return or allowance, granting of credit from a supplier for purchases returned, acceptance of a note receivable from a customer, and purchase of equipment by issuing a note payable. Also, correcting, adjusting, and closing entries are made in the general journal.

The general journal has columns for date, account title and explanation, reference, and debit and credit amounts. When control and subsidiary accounts are not involved, the procedures for journalizing and posting of transactions are the same as those described in earlier chapters. When control and subsidiary accounts are involved, companies make two changes from the earlier procedures:

1. In journalizing, they identify both the control and the subsidiary accounts.
2. In posting, there must be a dual posting: once to the control account and once to the subsidiary account.

To illustrate, assume that on May 31, Karns Wholesale Supply returns $500 of merchandise for credit to Fabor and Son.

Illustration E-17 Journalizing and posting the general journal

  GENERAL JOURNAL                                                        G1
  Date     Account Title and Explanation              Ref.    Debit   Credit
  2008
  May 31   Accounts Payable-Fabor and Son             201/✓     500
               Merchandise Inventory                  120                500
               (Received credit for returned goods)

After the dual posting, the Fabor and Son subsidiary account shows a balance of 8,200 (May 14 P1 6,900 Cr.; May 23 CP1 6,900 Dr.; May 26 P1 8,700 Cr.; May 31 G1 500 Dr.). In the general ledger, Merchandise Inventory (No. 120) shows the May 31 G1 credit of 500, and Accounts Payable (No. 201) shows P1 63,900 Cr., CP1 42,600 Dr., and G1 500 Dr., leaving a balance of 20,800.
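The dual-posting rule can be sketched mechanically: the one $500 debit is applied both to the control account and to the creditor's subsidiary account, so the two ledgers stay in agreement. The starting balances below are those shown after the earlier May postings; the variable layout is an illustrative assumption.

```python
# Dual posting of the May 31 general-journal entry (Illustration E-17).
general_ledger_ap = 21_300                                    # control account, No. 201
subsidiary = {"Eaton and Howe Inc.": 12_600, "Fabor and Son": 8_700}

amount = 500
general_ledger_ap -= amount               # debit Accounts Payable (control)
subsidiary["Fabor and Son"] -= amount     # debit the creditor's subsidiary account
# (the matching credit goes to Merchandise Inventory, No. 120)

assert general_ledger_ap == 20_800
assert sum(subsidiary.values()) == general_ledger_ap   # ledgers still agree
```

Posting the debit only once, to either ledger, would leave the control account and the subsidiary ledger out of agreement, which is exactly the error the dual-posting rule prevents.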
Illustration E-17 shows the entry in the general journal and the posting of the entry. Note that if Karns receives cash instead of credit on this return, then it would record the transaction in the cash receipts journal.

Note that the general journal indicates two accounts (Accounts Payable, and Fabor and Son) for the debit, and two postings (201/✓) in the reference column. One debit is posted to the control account and another debit to the creditor's account in the subsidiary ledger.

Before You Go On...

REVIEW IT
1. What types of special journals do companies frequently use to record transactions? Why do they use special journals?
2. Explain how companies post transactions recorded in the sales journal and the cash receipts journal.
3. Indicate the types of transactions that companies record in the general journal when they use special journals.

Demonstration Problem

Cassandra Wilson Company uses a six-column cash receipts journal with the following columns:

  Cash (Dr.)                  Sales (Cr.)
  Sales Discounts (Dr.)       Other Accounts (Cr.)
  Accounts Receivable (Cr.)   Cost of Goods Sold (Dr.) and Merchandise Inventory (Cr.)

Cash receipts transactions for the month of July 2008 are as follows.

July  3  Cash sales total $5,800 (cost, $3,480).
      5  Received a check for $6,370 from Jeltz Company in payment of an invoice dated June 26 for $6,500, terms 2/10, n/30.
      9  Stockholders made an additional investment of $5,000 cash in the business.
     10  Cash sales total $12,519 (cost, $7,511).
     12  Received a check for $7,275 from R. Eliot & Co. in payment of a $7,500 invoice dated July 3, terms 3/10, n/30.
     15  Received a customer advance of $700 cash for future sales.
     20  Cash sales total $15,472 (cost, $9,283).
     22  Received a check for $5,880 from Beck Company in payment of a $6,000 invoice dated July 13, terms 2/10, n/30.
     29  Cash sales total $17,660 (cost, $10,596).
     31  Received cash of $200 on interest earned for July.

action plan: Record all cash receipts in the cash receipts journal.
The account credited indicates items posted individually to the subsidiary ledger or general ledger. Record cash sales in the cash receipts journal, not in the sales journal. The total debits must equal the total credits.

Instructions
(a) Journalize the transactions in the cash receipts journal.
(b) Contrast the posting of the Accounts Receivable and Other Accounts columns.

Solution

(a) CASSANDRA WILSON COMPANY
    Cash Receipts Journal                                                     CR1

  Date   Account Credited    Ref.    Cash     Sales       Accounts     Sales    Other      COGS Dr.
  2008                               Dr.      Discounts   Receivable   Cr.      Accounts   Mdse. Inv. Cr.
                                              Dr.         Cr.                   Cr.
  7/3                                 5,800                             5,800                3,480
  7/5    Jeltz Company                6,370      130        6,500
  7/9    Common Stock                 5,000                                      5,000
  7/10                               12,519                            12,519                7,511
  7/12   R. Eliot & Co.               7,275      225        7,500
  7/15   Unearned Revenue               700                                        700
  7/20                               15,472                            15,472                9,283
  7/22   Beck Company                 5,880      120        6,000
  7/29                               17,660                            17,660               10,596
  7/31   Interest Revenue               200                                        200
                                     ------      ---       ------     ------    -----      ------
                                     76,876      475       20,000     51,451    5,900      30,870

(b) The Accounts Receivable column is posted as a credit to Accounts Receivable. The individual amounts are credited to the customers' accounts identified in the Account Credited column, which are maintained in the accounts receivable subsidiary ledger. The amounts in the Other Accounts column are posted individually. They are credited to the account titles identified in the Account Credited column.

SUMMARY OF STUDY OBJECTIVES

1 Describe the nature and purpose of a subsidiary ledger. A subsidiary ledger is a group of accounts with a common characteristic. It facilitates the recording process by freeing the general ledger from details of individual balances.

2 Explain how companies use special journals in journalizing. Companies use special journals to group similar types of transactions. In a special journal, generally only one line is used to record a complete transaction.

3 Indicate how companies post a multi-column journal.
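As with the Karns journal earlier, the solution can be cross-footed mechanically. Because the Cost of Goods Sold column carries an equal debit and credit (Dr. Cost of Goods Sold, Cr. Merchandise Inventory), it appears on both sides of the check; the figures are the column totals from the solution, and the layout is an illustrative assumption.

```python
# Cross-footing the Cassandra Wilson cash receipts journal (solution totals).
debit_totals = {"Cash": 76_876, "Sales Discounts": 475, "Cost of Goods Sold": 30_870}
credit_totals = {
    "Accounts Receivable": 20_000, "Sales": 51_451,
    "Other Accounts": 5_900, "Merchandise Inventory": 30_870,
}

assert sum(debit_totals.values()) == sum(credit_totals.values()) == 108_221
```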
In posting a multi-column journal: (a) Companies post all column totals except for the Other Accounts column once at the end of the month to the account title specified in the column heading. (b) Companies do not post the total of the Other Accounts column. Instead, the individual amounts comprising the total are posted separately to the general ledger accounts specified in the Account Credited (Debited) column. (c) The individual amounts in a column posted in total to a control account are posted daily to the subsidiary ledger accounts specified in the Account Credited (Debited) column.

GLOSSARY

Accounts payable (creditors) subsidiary ledger A subsidiary ledger that collects transaction data of individual creditors. (p. E1).
Accounts receivable (customers) subsidiary ledger A subsidiary ledger that collects transaction data of individual customers. (p. E1).
Cash payments (disbursements) journal A special journal that records all cash paid. (p. E13).
Cash receipts journal A special journal that records all cash received. (p. E7).
Control account An account in the general ledger that summarizes subsidiary ledger data. (p. E1).
Purchases journal A special journal that records all purchases of merchandise on account. (p. E11).
Sales journal A special journal that records all sales of merchandise on account. (p. E5).
Special journal A journal that records similar types of transactions, such as all credit sales. (p. E4).
Subsidiary ledger A group of accounts with a common characteristic. (p. E1).

SELF-STUDY QUESTIONS

Answers are at the end of the chapter.

1. (SO 1) Which of the following is incorrect concerning subsidiary ledgers?
   a. The purchases ledger is a common subsidiary ledger for creditor accounts.
   b. The accounts receivable ledger is a subsidiary ledger.
   c. A subsidiary ledger is a group of accounts with a common characteristic.
   d. An advantage of the subsidiary ledger is that it permits a division of labor in posting.

2.
(SO 2) A sales journal will be used for:

        Credit    Cash     Sales
        Sales     Sales    Discounts
   a.   no        yes      yes
   b.   yes       no       yes
   c.   yes       no       no
   d.   yes       yes      no

3. (SO 2, 3) Which of the following statements is correct?
   a. The sales discount column is included in the cash receipts journal.
   b. The purchases journal records all purchases of merchandise whether for cash or on account.
   c. The cash receipts journal records sales on account.
   d. Merchandise returned by the buyer is recorded by the seller in the purchases journal.

4. (SO 3) Which of the following is incorrect concerning the posting of the cash receipts journal?
   a. The total of the Other Accounts column is not posted.
   b. All column totals except the total for the Other Accounts column are posted once at the end of the month to the account title(s) specified in the column heading.
   c. The totals of all columns are posted daily to the accounts specified in the column heading.
   d. The individual amounts in a column posted in total to a control account are posted daily to the subsidiary ledger account specified in the Account Credited column.

5. (SO 3) Postings from the purchases journal to the subsidiary ledger are generally made:
   a. yearly.
   b. monthly.
   c. weekly.
   d. daily.

6. (SO 2) Which statement is incorrect regarding the general journal?
   a. Only transactions that cannot be entered in a special journal are recorded in the general journal.
   b. Dual postings are always required in the general journal.
   c. The general journal may be used to record acceptance of a note receivable in payment of an account receivable.
   d. Correcting, adjusting, and closing entries are made in the general journal.

7. (SO 2) When companies use special journals:
   a. they record all purchase transactions in the purchases journal.
   b. they record all cash received, except from cash sales, in the cash receipts journal.
   c. they record all cash disbursements in the cash payments journal.
   d. a general journal is not necessary.

8.
(SO 2) If a customer returns goods for credit, the selling company normally makes an entry in the:
   a. cash payments journal.
   b. sales journal.
   c. general journal.
   d. cash receipts journal.

Go to the book's website for Additional Self-Study Questions.

QUESTIONS

1. What are the advantages of using subsidiary ledgers?
2. (a) When do companies normally post to (1) the subsidiary accounts and (2) the general ledger control accounts? (b) Describe the relationship between a control account and a subsidiary ledger.
3. Identify and explain the four special journals discussed in the chapter. List an advantage of using each of these journals rather than using only a general journal.
4. Thogmartin Company uses special journals. It recorded in a sales journal a sale made on account to R. Peters for $435. A few days later, R. Peters returns $70 worth of merchandise for credit. Where should Thogmartin Company record the sales return? Why?
5. A $500 purchase of merchandise on account from Lore Company was properly recorded in the purchases journal. When posted, however, the amount recorded in the subsidiary ledger was $50. How might this error be discovered?
6. Why would special journals used in different businesses not be identical in format? What type of business would maintain a cash receipts journal but not include a column for accounts receivable?
7. The cash and the accounts receivable columns in the cash receipts journal were mistakenly overadded by $4,000 at the end of the month. (a) Will the customers' ledger agree with the Accounts Receivable control account? (b) Assuming no other errors, will the trial balance totals be equal?
8. One column total of a special journal is posted at month-end to only two general ledger accounts. One of these two accounts is Accounts Receivable. What is the name of this special journal? What is the other general ledger account to which that same month-end total is posted?
9. In what journal would the following transactions be recorded? (Assume that a two-column sales journal and a single-column purchases journal are used.)
   (a) Recording of depreciation expense for the year.
   (b) Credit given to a customer for merchandise purchased on credit and returned.
   (c) Sales of merchandise for cash.
   (d) Sales of merchandise on account.
   (e) Collection of cash on account from a customer.
   (f) Purchase of office supplies on account.
10. In what journal would the following transactions be recorded? (Assume that a two-column sales journal and a single-column purchases journal are used.)
   (a) Cash received from signing a note payable.
   (b) Investment of cash by stockholders.
   (c) Closing of the expense accounts at the end of the year.
   (d) Purchase of merchandise on account.
   (e) Credit received for merchandise purchased and returned to supplier.
   (f) Payment of cash on account due a supplier.
11. What transactions might be included in a multiple-column purchases journal that would not be included in a single-column purchases journal?
12. Give an example of a transaction in the general journal that causes an entry to be posted twice (i.e., to two accounts), one in the general ledger, the other in the subsidiary ledger. Does this affect the debit/credit equality of the general ledger?
13. Give some examples of appropriate general journal transactions for an organization using special journals.

BRIEF EXERCISES

BEE-1 Identify subsidiary ledger balances. (SO 1) Presented below is information related to Kienholz Company for its first month of operations. Identify the balances that appear in the accounts receivable subsidiary ledger and the accounts receivable balance that appears in the general ledger at the end of January.

   Credit Sales                        Cash Collections
   Jan.  7  Agler Co.   $10,000        Jan. 17  Agler Co.   $7,000
        15  Barto Co.     6,000             24  Barto Co.    4,000
        23  Maris Co.     9,000             29  Maris Co.    9,000

BEE-2 Identify subsidiary ledger accounts. (SO 1) Identify in what ledger (general or subsidiary) each of the following accounts is shown.
   1. Rent Expense
   2. Accounts Receivable-Char
   3. Notes Payable
   4. Accounts Payable-Thebeau

BEE-3 Identify special journals. (SO 2) Identify the journal in which each of the following transactions is recorded.
   1. Cash sales
   2. Payment of dividends
   3. Cash purchase of land
   4. Credit sales
   5. Purchase of merchandise on account
   6. Receipt of cash for services performed

BEE-4 Identify entries to cash receipts journal. (SO 2) Indicate whether each of the following debits and credits is included in the cash receipts journal. (Use Yes or No to answer this question.)
   1. Debit to Sales
   2. Credit to Merchandise Inventory
   3. Credit to Accounts Receivable
   4. Debit to Accounts Payable

BEE-5 Identify transactions for special journals. (SO 2) Galindo Co. uses special journals and a general journal. Identify the journal in which each of the following transactions is recorded.
   (a) Purchased equipment on account.
   (b) Purchased merchandise on account.
   (c) Paid utility expense in cash.
   (d) Sold merchandise on account.

BEE-6 Identify transactions for special journals. (SO 2) Identify the special journal(s) in which the following column headings appear.
   1. Sales Discounts Dr.
   2. Accounts Receivable Cr.
   3. Cash Dr.
   4. Sales Cr.
   5. Merchandise Inventory Dr.

BEE-7 Indicate postings to cash receipts journal. (SO 3) Kidwell Computer Components Inc. uses a multi-column cash receipts journal. Indicate which column(s) is/are posted only in total, only daily, or both in total and daily.
   1. Accounts Receivable
   2. Sales Discounts
   3. Cash
   4. Other Accounts

EXERCISES

EE-1 Donahue Company uses both special journals and a general journal as described in this chapter.
On June 30, after all monthly postings had been completed, the Accounts Receivable control account in the general ledger had a debit balance of $320,000; the Accounts Payable control account had a credit balance of $77,000. The July transactions recorded in the special journals are summarized below. No entries affecting accounts receivable and accounts payable were recorded in the general journal for July.

   Sales journal           Total sales                        $161,400
   Purchases journal       Total purchases                    $56,400
   Cash receipts journal   Accounts receivable column total   $131,000
   Cash payments journal   Accounts payable column total      $47,500

Determine control account balances, and explain posting of special journals. (SO 1, 3)

(c) To what account(s) is the column total of $161,400 in the sales journal posted? (d) To what account(s) is the accounts receivable column total of $131,000 in the cash receipts journal posted?

EE-2 Explain postings to subsidiary ledger. (SO 1) Presented below is the subsidiary accounts receivable account of Jeremy Dody.

   Date          Ref.   Debit    Credit   Balance
   2008
   Sept.  2      S31    61,000            61,000
          9      G4              14,000   47,000
         27      CR8             47,000     -0-

Instructions
Write a memo to Andrea Barden, chief financial officer, that explains each transaction.

EE-3 Post various journals to control and subsidiary accounts. (SO 1, 3)

Instructions
(a) Set up control and subsidiary accounts and enter the beginning balances. Do not construct the journals.
(b) Post the various journals. Post the items as individual items or as totals, whichever would be the appropriate procedure. (No sales discounts given.)
(c) Prepare a list of customers and prove the agreement of the controlling account with the subsidiary ledger at September 30, 2008.

EE-4 Determine control and subsidiary ledger balances for accounts receivable. (SO 1) Yu Suzuki Company has a balance in its Accounts Receivable control account of $11,000 on January 1, 2008.
The subsidiary ledger contains three accounts: Smith Company, balance $4,000; Green Company, balance $2,500; and Koyan Company. During January, the following receivable-related transactions occurred.

                     Credit Sales   Collections   Returns
   Smith Company        $9,000        $8,000       $ -0-
   Green Company         7,000         2,500        3,000
   Koyan Company         8,500         9,000         -0-

Instructions
(a) What is the January 1 balance in the Koyan Company subsidiary account?
(b) What is the January 31 balance in the control account?
(c) Compute the balances in the subsidiary accounts at the end of the month.
(d) Which January transaction would not be recorded in a special journal?

EE-5 Determine control and subsidiary ledger balances for accounts payable. (SO 1) Nobo Uematsu Company has a balance in its Accounts Payable control account of $8,250 on January 1, 2008. The subsidiary ledger contains three accounts: Jones Company, balance $3,000; Brown Company, balance $1,875; and Aatski Company. During January, the following payable-related transactions occurred.

                     Purchases   Payments   Returns
   Jones Company       $6,750     $6,000     $ -0-
   Brown Company        5,250      1,875      2,250
   Aatski Company       6,375      6,750       -0-

Instructions
(a) What is the January 1 balance in the Aatski Company subsidiary account?
(b) What is the January 31 balance in the control account?
(c) Compute the balances in the subsidiary accounts at the end of the month.
(d) Which January transaction would not be recorded in a special journal?

EE-6 Record transactions in sales and purchases journal. (SO 1, 2) Montalvo Company uses special journals and a general journal. The following transactions occurred during September 2008.

Sept.  2  Sold merchandise on account to T. Hossfeld, invoice no. 101, $720, terms n/30. The cost of the merchandise sold was $420.
      10  Purchased merchandise on account from L. Rincon $600, terms 2/10, n/30.
      12  Purchased office equipment on account from R. Press $6,500.
      21  Sold merchandise on account to P. Lowther, invoice no. 102 for $800, terms 2/10, n/30.
The cost of the merchandise sold was $480.

      25  Purchased merchandise on account from W. Barone $860, terms n/30.
      27  Sold merchandise to S. Miller for $700 cash. The cost of the merchandise sold was $400.

Instructions
(a) Prepare a sales journal (see Illustration E-6) and a single-column purchase journal (see Illustration E-11). (Use page 1 for each journal.)
(b) Record the transaction(s) for September that should be journalized in the sales journal and the purchases journal.

EE-7 Record transactions in cash receipts and cash payments journal. (SO 1, 2) Pherigo Co. uses special journals and a general journal. The following transactions occurred during May 2008.

May  1  I. Pherigo invested $50,000 cash in the business in exchange for common stock.
     2  Sold merchandise to B. Sherrick for $6,300 cash. The cost of the merchandise sold was $4,200.
     3  Purchased merchandise for $7,200 from J. DeLeon using check no. 101.
    14  Paid salary to H. Potter $700 by issuing check no. 102.
    16  Sold merchandise on account to K. Kimbell for $900, terms n/30. The cost of the merchandise sold was $630.
    22  A check of $9,000 is received from M. Moody in full for invoice 101; no discount given.

Instructions
(a) Prepare a multiple-column cash receipts journal (see Illustration E-8) and a multiple-column cash payments journal (see Illustration E-15). (Use page 1 for each journal.)
(b) Record the transaction(s) for May that should be journalized in the cash receipts journal and cash payments journal.

EE-8 Wick Company uses the columnar cash journals illustrated in the textbook. In April, the following selected cash transactions occurred.

1. Made a refund to a customer for the return of damaged goods.
2. Received collection from customer within the 3% discount period.
3. Purchased merchandise for cash.
4. Paid a creditor within the 3% discount period.
5. Received collection from customer after the 3% discount period had expired.
6. Paid freight on merchandise purchased.
 7. Paid cash for office equipment.
 8. Received cash refund from supplier for merchandise returned.
 9. Paid cash dividend to stockholders.
10. Made cash sales.
Explain journalizing in cash journals. (SO 2)
Instructions
Indicate (a) the journal, and (b) the columns in the journal that should be used in recording each transaction.

Journalize transactions in general journal and post. (SO 1, 3)
EE-9 Velasquez Company has the following selected transactions during March.
Mar.  2  Purchased equipment costing $9,400 from Chang Company on account.
      5  Received credit of $410 from Lyden Company for merchandise damaged in shipment to Velasquez.
      7  Issued credit of $400 to Higley Company for merchandise the customer returned. The returned merchandise had a cost of $260.
Velasquez Company uses a one-column purchases journal, a sales journal, the columnar cash journals used in the text, and a general journal.
Instructions
(a) Journalize the transactions in the general journal.
(b) In a brief memo to the president of Velasquez Company, explain the postings to the control and subsidiary accounts from each type of journal.

Indicate journalizing in special journals. (SO 2)
EE-10 Below are some typical transactions incurred by Kwun Company.
 1. Payment of creditors on account.
 2. Return of merchandise sold for credit.
 3. Collection on account from customers.
 4. Sale of land for cash.
 5. Sale of merchandise on account.
 6. Sale of merchandise for cash.
 7. Received credit for merchandise purchased on credit.
 8. Sales discount taken on goods sold.
 9. Payment of employee wages.
10. Payment of cash dividend to stockholders.
11. Depreciation on building.
12. Purchase of office supplies for cash.
13. Purchase of merchandise on account.
Instructions
For each transaction, indicate whether it would normally be recorded in a cash receipts journal, cash payments journal, sales journal, single-column purchases journal, or general journal.
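The classification asked for in EE-8 and EE-10 follows one mechanical rule: any receipt of cash goes to the cash receipts journal, any payment of cash to the cash payments journal, credit sales of merchandise to the sales journal, credit purchases of merchandise to the purchases journal, and everything else (returns, allowances, depreciation, notes) to the general journal. A minimal sketch of that rule; the helper name and flag names are illustrative assumptions, not from the text:

```python
# Hypothetical helper mirroring the journal-selection rule used in EE-8 and EE-10.
def choose_journal(cash_received=False, cash_paid=False,
                   credit_sale=False, credit_purchase=False):
    """Return the journal a transaction would normally be recorded in."""
    if cash_received:
        return "cash receipts journal"
    if cash_paid:
        return "cash payments journal"
    if credit_sale:
        return "sales journal"
    if credit_purchase:
        return "purchases journal"
    return "general journal"  # e.g. returns, allowances, depreciation

# Samples in the spirit of EE-10:
print(choose_journal(cash_received=True))    # collection on account -> cash receipts journal
print(choose_journal(credit_sale=True))      # sale of merchandise on account -> sales journal
print(choose_journal())                      # depreciation on building -> general journal
```

Note that the cash tests come first: a cash sale is a cash receipt, not a sales-journal entry, which is exactly the distinction EE-8 probes.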
E24 Appendix E Subsidiary Ledgers and Special Journals

Explain posting to control account and subsidiary ledger. (SO 1, 3)
EE-11 The general ledger of Sanchez Company contained the following Accounts Payable control account (in T-account form). Also shown is the related subsidiary ledger.

GENERAL LEDGER
                          Accounts Payable
Feb. 15 General journal  1,400 | Feb.  1 Balance           26,025
     28 ?                    ? |       5 General journal      265
                               |      11 General journal      550
                               |      28 Purchases          13,400
                               | Feb. 28 Balance             9,500

ACCOUNTS PAYABLE LEDGER
Perez        Feb. 28 Bal.  4,600
Tebbetts     Feb. 28 Bal.      ?
Zerbe        Feb. 28 Bal.  2,300

Instructions
(a) Indicate the missing posting reference and amount in the control account, and the missing ending balance in the subsidiary ledger.
(b) Indicate the amounts in the control account that were dual-posted (i.e., posted to the control account and the subsidiary accounts).

Prepare purchases and general journals. (SO 1, 2)
EE-12 Selected accounts from the ledgers of Lockhart Company at July 31 showed the following.

GENERAL LEDGER
Store Equipment                                          No. 153
Date     Explanation   Ref.   Debit    Credit   Balance
July  1                 G1    3,900              3,900

Accounts Payable                                         No. 201
Date     Explanation   Ref.   Debit    Credit   Balance
July  1                 G1             3,900     3,900
     15                 G1               400     4,300
     18                 G1      100              4,200
     25                 G1      200              4,000
     31                 P1             8,300    12,300

Merchandise Inventory                                    No. 120
Date     Explanation   Ref.   Debit    Credit   Balance
July 15                 G1      400                400
     18                 G1               100       300
     25                 G1               200       100
     31                 P1    8,300              8,400

ACCOUNTS PAYABLE LEDGER
Albin Equipment Co.
Date     Explanation   Ref.   Debit    Credit   Balance
July  1                 G1             3,900     3,900

Brian Co.
Date     Explanation   Ref.   Debit    Credit   Balance
July  3                 P1             2,400     2,400
     20                 P1               700     3,100

Chacon Corp.
Date     Explanation   Ref.   Debit    Credit   Balance
July 17                 P1             1,400     1,400
     18                 G1      100              1,300
     29                 P1             1,600     2,900

Drago Co.
Date     Explanation   Ref.   Debit    Credit   Balance
July 14                 P1             1,100     1,100
     25                 G1      200                900

Erik Co.
Date     Explanation   Ref.   Debit    Credit   Balance
July 12                 P1               500       500
     21                 P1               600     1,100

Heinen Inc.
Date     Explanation   Ref.   Debit    Credit   Balance
July 15                 G1               400       400
Instructions
From the data prepare:
(a) the single-column purchases journal for July.
(b) the general journal entries for July.

Determine correct posting amount to control account. (SO 3)
EE-13 Kansas Products uses both special journals and a general journal as described in this chapter. Kansas also posts customers' accounts in the accounts receivable subsidiary ledger. The postings for the most recent month are included in the subsidiary T-accounts below.

      Bargo                Leary                Carol                Paul
Bal. 340 |  250       Bal. 150 |  150       Bal.   0 |  145       Bal. 120 |  150
     200 |                 240 |                 145 |                 190 |  120

Instructions
Determine the correct amount of the end-of-month posting from the sales journal to the Accounts Receivable control account.

Compute balances in various accounts. (SO 3)
EE-14 Selected account balances for Matisyahu Company at January 1, 2008, are presented below.

Accounts Payable       $14,000
Accounts Receivable     22,000
Cash                    17,000
Inventory               13,500

Matisyahu's sales journal for January shows a total of $100,000 in the selling price column, and its one-column purchases journal for January shows a total of $72,000. The column totals in Matisyahu's cash receipts journal are: Cash Dr. $61,000; Sales Discounts Dr. $1,100; Accounts Receivable Cr. $45,000; Sales Cr. $6,000; and Other Accounts Cr. $11,100. The column totals in Matisyahu's cash payments journal for January are: Cash Cr. $55,000; Inventory Cr. $1,000; Accounts Payable Dr. $46,000; and Other Accounts Dr. $10,000. Matisyahu's total cost of goods sold for January is $63,600.
Accounts Payable, Accounts Receivable, Cash, Inventory, and Sales are not involved in the Other Accounts column in either the cash receipts or cash payments journal, and are not involved in any general journal entries.
Instructions
Compute the January 31 balance for Matisyahu in the following accounts.
(a) Accounts Payable.
(b) Accounts Receivable.
(c) Cash.
(d) Inventory.
(e) Sales.
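Exercises EE-4, EE-5, and EE-14 all rest on the same roll-forward arithmetic: an ending balance equals the beginning balance plus the increases posted from the journals minus the decreases. A minimal sketch of the mechanics using the EE-14 column totals; the helper and variable names are illustrative assumptions, and the results should be verified against the exercise itself:

```python
# Sketch of the balance roll-forward behind EE-4, EE-5, and EE-14.
def ending_balance(beginning, increases=0, decreases=0):
    """Roll a ledger balance forward: beginning + increases - decreases."""
    return beginning + increases - decreases

# EE-14 (Matisyahu Company) January column totals from the text:
cash = ending_balance(17_000, increases=61_000, decreases=55_000)
accounts_receivable = ending_balance(22_000, increases=100_000, decreases=45_000)
accounts_payable = ending_balance(14_000, increases=72_000, decreases=46_000)
# Sales: sales journal total plus the cash receipts journal's Sales column.
sales = ending_balance(0, increases=100_000 + 6_000)
# Inventory: purchases in, less purchase discounts and cost of goods sold out.
inventory = ending_balance(13_500, increases=72_000, decreases=1_000 + 63_600)

print(cash, accounts_receivable, accounts_payable, inventory, sales)
```

The same one-liner also answers EE-5's control-account question, since the control account simply aggregates the subsidiary accounts' beginning balances, purchases, payments, and returns.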
EXERCISES: SET B
Visit the book's website at www.wiley.com/college/weygandt, and choose the Student Companion site, to access Exercise Set B.

PROBLEMS: SET A

Journalize transactions in cash receipts journal; post to control account and subsidiary ledger. (SO 1, 2, 3)
PE-1A Grider Company's chart of accounts includes the following selected accounts.
101 Cash                      401 Sales
112 Accounts Receivable       414 Sales Discounts
120 Merchandise Inventory     505 Cost of Goods Sold
311 Common Stock
Stockholders invested $7,200 additional cash in the business, in exchange for common stock.
(a) Balancing totals $21,205
(c) Accounts Receivable $1,430
Instructions
(b) Post the April transactions to these accounts.
(c) Prove the agreement of the control account and subsidiary account balances.

Journalize transactions in cash payments journal; post to control account and subsidiary ledger. (SO 1, 2, 3)
PE-2A Ming Company's chart of accounts includes the following selected accounts.
101 Cash                      201 Accounts Payable
120 Merchandise Inventory     332 Dividends
130 Prepaid Insurance         505 Cost of Goods Sold
157 Equipment
On October 1 the accounts payable ledger of Ming Company showed the following balances: Bovary Company $2,700, Nyman Co. $2,500, Pyron Co. $1,800, and Sims Company $3,700. The October transactions involving the payment of cash were as follows.
(a) Balancing totals $12,350
Oct.  1  Purchased merchandise, check no. 63, $300.
      3  Purchased equipment, check no. 64, $800.
      5  Paid Bovary Company balance due of $2,700, less 2% discount, check no. 65, $2,646.
     10  Purchased merchandise, check no. 66, $2,250.
     15  Paid Pyron Co. balance due of $1,800, check no. 67.
     16  Paid cash dividend of $400, check no. 68.
     19  Paid Nyman Co. in full for invoice no. 610, $1,600 less 2% cash discount, check no. 69, $1,568.
     29  Paid Sims Company in full for invoice no. 264, $2,500, check no. 70.
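Several of the checks above are written for the invoice amount less a purchase discount taken within the discount period; for example, check no. 65 pays $2,700 less 2%, or $2,646. The computation, sketched below; the helper name is an assumption for illustration:

```python
# Sketch of the discount arithmetic behind checks nos. 65, 67, and 69 above.
def cash_paid(invoice, discount_rate=0.0):
    """Amount of the check when an invoice is paid, net of any cash discount."""
    return round(invoice * (1 - discount_rate), 2)

print(cash_paid(2_700, 0.02))  # check no. 65 -> 2646.0
print(cash_paid(1_800))        # check no. 67, no discount -> 1800.0
print(cash_paid(1_600, 0.02))  # check no. 69 -> 1568.0
```

In the cash payments journal, the discount portion (here $54 and $32) is credited to Merchandise Inventory under the perpetual system, which is why the journal's Merchandise Inventory Cr. column exists at all.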
(c) Accounts Payable $2,100
Instructions
(b) Post the October transactions to these accounts.
(c) Prove the agreement of the control account and the subsidiary account balances.

Journalize transactions in multi-column purchases journal; post to the general and subsidiary ledgers. (SO 1, 2, 3)
PE-3A The chart of accounts of Lopez Company includes the following selected accounts.
112 Accounts Receivable       401 Sales
120 Merchandise Inventory     412 Sales Returns and Allowances
126 Supplies                  505 Cost of Goods Sold
157 Equipment                 610 Advertising Expense
201 Accounts Payable
In July the following selected transactions were completed. All purchases and sales were on account. The cost of all merchandise sold was 70% of the sales price.
July  1  Purchased merchandise from Fritz Company $8,000.
      2  Received freight bill from Wayward Shipping on Fritz purchase $400.
      3  Made sales to Pinick Company $1,300, and to Wayne Bros. $1,500.
      5  Purchased merchandise from Moon Company $3,200.
      8  Received credit on merchandise returned to Moon Company $300.
     13  Purchased store supplies from Cress Supply $720.
     15  Purchased merchandise from Fritz Company $3,600 and from Anton Company $3,300.
     16  Made sales to Sager Company $3,450 and to Wayne Bros. $1,570.
     18  Received bill for advertising from Lynda Advertisements $600.
     21  Made sales to Pinick Company $310 and to Haddad Company $2,800.
     22  Granted allowance to Pinick Company for merchandise damaged in shipment $40.
     24  Purchased merchandise from Moon Company $3,000.
     26  Purchased equipment from Cress Supply $900.
     28  Received freight bill from Wayward Shipping on Moon purchase of July 24, $380.
     30  Made sales to Sager Company $5,600.
Journalize transactions in special journals. (SO 1, 2, 3)
PE-4A Selected accounts from the chart of accounts of Boyden Company are shown below.
101 Cash                      401 Sales
112 Accounts Receivable       412 Sales Returns and Allowances
120 Merchandise Inventory     414 Sales Discounts
126 Supplies                  505 Cost of Goods Sold
157 Equipment                 726 Salaries Expense
201 Accounts Payable
(a) Purchases journal: Accounts Payable $24,100; Sales column total $16,530
(c) Accounts Receivable $16,490; Accounts Payable $23,800
The cost of all merchandise sold was 60% of the sales price. During January, Boyden completed the following transactions.
Jan.  3  Purchased merchandise on account from Wortham Co. $10,000.
      4  Purchased supplies for cash $80.
      4  Sold merchandise on account to Milam $5,250, invoice no. 371, terms 1/10, n/30.
      5  Returned $300 worth of damaged goods purchased on account from Wortham Co. on January 3.
      6  Made cash sales for the week totaling $3,150.
      8  Purchased merchandise on account from Noyes Co. $4,500.
      9  Sold merchandise on account to Connor Corp. $6,400, invoice no. 372, terms 1/10, n/30.
     11  Purchased merchandise on account from Betz Co. $3,700.
     13  Paid in full Wortham Co. on account less a 2% discount.
     13  Made cash sales for the week totaling $6,260.
     15  Received payment from Connor Corp. for invoice no. 372.
     15  Paid semi-monthly salaries of $14,300 to employees.
     17  Received payment from Milam for invoice no. 371.
     17  Sold merchandise on account to Bullock Co. $1,200, invoice no. 373, terms 1/10, n/30.
     19  Purchased equipment on account from Murphy Corp. $5,500.
     20  Cash sales for the week totaled $3,200.
     20  Paid in full Noyes Co. on account less a 2% discount.
     23  Purchased merchandise on account from Wortham Co. $7,800.
     24  Purchased merchandise on account from Forgetta Corp. $5,100.
     27  Made cash sales for the week totaling $4,230.
     30  Received payment from Bullock Co. for invoice no. 373.
     31  Paid semi-monthly salaries of $13,200 to employees.
     31  Sold merchandise on account to Milam $9,330, invoice no. 374, terms 1/10, n/30.
Boyden Company uses the following journals.
1. Sales journal.
2. Single-column purchases journal.
3. Cash receipts journal with columns for Cash Dr., Sales Discounts Dr., Accounts Receivable Cr., Sales Cr., Other Accounts Cr., and Cost of Goods Sold Dr./Merchandise Inventory Cr.
4. Cash payments journal with columns for Other Accounts Dr., Accounts Payable Dr., Merchandise Inventory Cr., and Cash Cr.
5. General journal.
(a) Sales journal $22,180; Purchases journal $31,100; Cash receipts journal balancing total $29,690; Cash payments journal balancing total $41,780
Instructions
Using the selected accounts provided:
(a) Record the January transactions in the appropriate journal noted.
(b) Foot and crossfoot all special journals.
(c) Show how postings would be made by placing ledger account numbers and checkmarks as needed in the journals. (Actual posting to ledger accounts is not required.)

Journalize in sales and cash receipts journals; post; prepare a trial balance; prove control to subsidiary; prepare adjusting entries; prepare an adjusted trial balance. (SO 1, 2, 3)
PE-5A Presented below are the purchases and cash payments journals for Reyes Co. for its first month of operations.

PURCHASES JOURNAL                                                 P1
Date     Account Credited   Ref.   Merchandise Inventory Dr.
                                   Accounts Payable Cr.
July  4  G. Clemens                          6,800
      5  A. Ernst                            8,100
     11  J. Happy                            5,920
     13  C. Tabor                           15,300
     20  M. Sneezy                           7,900
                                            44,020

CASH PAYMENTS JOURNAL                                                          CP1
Date     Account          Ref.   Other         Accounts      Merchandise    Cash
         Debited                 Accounts Dr.  Payable Dr.   Inventory Cr.  Cr.
July  4  Store Supplies            600                                         600
     10  A. Ernst                                 8,100            81        8,019
     11  Prepaid Rent             6,000                                      6,000
     15  G. Clemens                               6,800                      6,800
     19  Dividends                2,500                                      2,500
     21  C. Tabor                                15,300           153       15,147
                                  9,100          30,200           234       39,066

In addition, the following transactions have not been journalized for July. The cost of all merchandise sold was 65% of the sales price.
July  1  D. Reyes invested $80,000 in cash in exchange for common stock.
      6  Sold merchandise on account to Ewing Co.
$6,200, terms 1/10, n/30.
      7  Made cash sales totaling $6,000.
      8  Sold merchandise on account to S. Beauty $3,600, terms 1/10, n/30.
     10  Sold merchandise on account to W. Pitts $4,900, terms 1/10, n/30.
     13  Received payment in full from S. Beauty.
     16  Received payment in full from W. Pitts.
     20  Received payment in full from Ewing Co.
     21  Sold merchandise on account to H. Prince $5,000, terms 1/10, n/30.
     29  Returned damaged goods to G. Clemens and received cash refund of $420.
Instructions
(a) Open the following accounts in the general ledger.
101 Cash                      311 Common Stock
112 Accounts Receivable       332 Dividends
120 Merchandise Inventory     401 Sales
127 Store Supplies            414 Sales Discounts
131 Prepaid Rent              505 Cost of Goods Sold
201 Accounts Payable          631 Supplies Expense
                              729 Rent Expense
(b) Journalize the transactions that have not been journalized in the sales journal, the cash receipts journal (see Illustration E-8), and the general journal.
(c) Post to the accounts receivable and accounts payable subsidiary ledgers. Follow the sequence of transactions as shown in the problem.
(d) Post the individual entries and totals to the general ledger.
(e) Prepare a trial balance at July 31, 2008.
(f) Determine whether the subsidiary ledgers agree with the control accounts in the general ledger.
(g) The following adjustments at the end of July are necessary.
    (1) A count of supplies indicates that $140 is still on hand.
    (2) Recognize rent expense for July, $500.
    Prepare the necessary entries in the general journal. Post the entries to the general ledger.
(h) Prepare an adjusted trial balance at July 31, 2008.
(b) Sales journal total $19,700; Cash receipts journal balancing totals $101,120
(e) Totals $119,520
(f) Accounts Receivable $5,000; Accounts Payable $13,820
(h) Totals $119,520

Journalize in special journals; post; prepare a trial balance. (SO 1, 2, 3)
PE-6A The post-closing trial balance for Cortez Co. is as follows.

CORTEZ CO.
Post-Closing Trial Balance
December 31, 2008
                                            Debit      Credit
Cash                                      $ 41,500
Accounts Receivable                         15,000
Notes Receivable                            45,000
Merchandise Inventory                       23,000
Equipment                                    6,450
Accumulated Depreciation-Equipment                    $  1,500
Accounts Payable                                        43,000
Common Stock                                            86,450
                                          $130,950    $130,950

The subsidiary ledgers contain the following information: (1) accounts receivable: J. Anders $2,500, F. Cone $7,500, T. Dudley $5,000; (2) accounts payable: J. Feeney $10,000, D. Goodman $18,000, and K. Inwood $15,000. The cost of all merchandise sold was 60% of the sales price.
The transactions for January 2009 are as follows.
Jan.  3  Sell merchandise to M. Rensing $5,000, terms 2/10, n/30.
      5  Purchase merchandise from E. Vietti $2,000, terms 2/10, n/30.
      7  Receive a check from T. Dudley $3,500.
     11  Pay freight on merchandise purchased $300.
     12  Pay rent of $1,000 for January.
     13  Receive payment in full from M. Rensing. Post all entries to the subsidiary ledgers.
     14  Issued credit of $300 to J. Anders for returned merchandise.
     15  Send K. Inwood a check for $14,850 in full payment of account, discount $150.
     17  Purchase merchandise from G. Marley $1,600, terms 2/10, n/30.
     18  Pay sales salaries of $2,800 and office salaries $2,000.
     20  Give D. Goodman a 60-day note for $18,000 in full payment of account payable.
     23  Total cash sales amount to $9,100. Post all entries to the subsidiary ledgers.
     24  Sell merchandise on account to F. Cone $7,400, terms 1/10, n/30.
     27  Send E. Vietti a check for $950.
     29  Receive payment on a note of $40,000 from B. Lemke. Post all entries to the subsidiary ledgers.
     30  Return merchandise of $300 to G. Marley for credit.
Instructions
(a) Open general and subsidiary ledger accounts for the following.
101 Cash                                  311 Common Stock
112 Accounts Receivable                   401 Sales
115 Notes Receivable                      412 Sales Returns and Allowances
120 Merchandise Inventory                 414 Sales Discounts
157 Equipment                             505 Cost of Goods Sold
158 Accumulated Depreciation-Equipment    726 Sales Salaries Expense
200 Notes Payable                         727 Office Salaries Expense
201 Accounts Payable                      729 Rent Expense
(b) Sales journal $12,400; Purchases journal $3,600; Cash receipts journal (balancing) $57,600; Cash payments journal (balancing) $22,050
(d) Totals $139,800
(e) Accounts Receivable $18,600; Accounts Payable $12,350
(b) Record the January transactions in a sales journal, a single-column purchases journal, a cash receipts journal (see Illustration E-8), a cash payments journal (see Illustration E-15), and a general journal.
(c) Post the appropriate amounts to the general ledger.
(d) Prepare a trial balance at January 31, 2009.
(e) Determine whether the subsidiary ledgers agree with controlling accounts in the general ledger.

PROBLEMS: SET B

Journalize transactions in cash receipts journal; post to control account and subsidiary ledger. (SO 1, 2, 3)
PE-1B Darby Company's chart of accounts includes the following selected accounts.
101 Cash                      401 Sales
112 Accounts Receivable       414 Sales Discounts
120 Merchandise Inventory     505 Cost of Goods Sold
311 Common Stock
On June 1 the accounts receivable ledger of Darby Company showed the following balances: Deering & Son $2,500, Farley Co. $1,900, Grinnell Bros. $1,600, and Lenninger Co. $1,300. The June transactions involving the receipt of cash were as follows.
(a) Balancing totals $28,255
June  1  Stockholders invested $10,000 additional cash in the business, in exchange for common stock.
      3  Received check in full from Lenninger Co. less 2% cash discount.
      6  Received check in full from Farley Co. less 2% cash discount.
      7  Made cash sales of merchandise totaling $6,135. The cost of the merchandise sold was $4,090.
      9  Received check in full from Deering & Son less 2% cash discount.
     11  Received cash refund from a supplier for damaged merchandise $320.
     15  Made cash sales of merchandise totaling $4,500. The cost of the merchandise sold was $3,000.
     20  Received check in full from Grinnell Bros. $1,600.
(c) Accounts Receivable $0
Instructions
(b) Post the June transactions to these accounts.
(c) Prove the agreement of the control account and subsidiary account balances.

Journalize transactions in cash payments journal; post to the general and subsidiary ledgers. (SO 1, 2, 3)
PE-2B Gonya Company's chart of accounts includes the following selected accounts.
101 Cash                      157 Equipment
120 Merchandise Inventory     201 Accounts Payable
130 Prepaid Insurance         332 Dividends
On November 1 the accounts payable ledger of Gonya Company showed the following balances: A. Hess & Co. $4,500, C. Kimberlin $2,350, G. Ruttan $1,000, and Wex Bros. $1,500. The November transactions involving the payment of cash were as follows.
Nov.  1  Purchased merchandise, check no. 11, $1,140.
      3  Purchased store equipment, check no. 12, $1,700.
      5  Paid Wex Bros. balance due of $1,500, less 1% discount, check no. 13, $1,485.
     11  Purchased merchandise, check no. 14, $2,000.
     15  Paid G. Ruttan balance due of $1,000, less 3% discount, check no. 15, $970.
     16  Paid cash dividend of $500, check no. 16.
     19  Paid C. Kimberlin in full for invoice no. 1245, $1,150 less 2% discount, check no. 17, $1,127.
     25  Paid premium due on one-year insurance policy, check no. 18, $3,000.
     30  Paid A. Hess & Co. in full for invoice no. 832, $3,500, check no. 19.
Instructions
(b) Post the November transactions to these accounts.
(c) Prove the agreement of the control account and the subsidiary account balances.

PE-3B The chart of accounts of Emley Company includes the following selected accounts.
112 Accounts Receivable       401 Sales
120 Merchandise Inventory     412 Sales Returns and Allowances
126 Supplies                  505 Cost of Goods Sold
157 Equipment                 610 Advertising Expense
201 Accounts Payable
(a) Balancing totals $15,490
(c) Accounts Payable $2,200
Journalize transactions in multi-column purchases journal; post to the general and subsidiary ledgers. (SO 1, 2, 3)
In May the following selected transactions were completed. All purchases and sales were on account except as indicated. The cost of all merchandise sold was 65% of the sales price.
May   2  Purchased merchandise from Younger Company $7,500.
      3  Received freight bill from Ruden Freight on Younger purchase $360.
      5  Made sales to Ellie Company $1,980, DeShazer Bros. $2,700, and Liu Company $1,500.
      8  Purchased merchandise from Utley Company $8,000 and Zeider Company $8,700.
     10  Received credit on merchandise returned to Zeider Company $500.
     15  Purchased supplies from Rodriquez Supply $900.
     16  Purchased merchandise from Younger Company $4,500, and Utley Company $7,200.
     17  Returned supplies to Rodriquez Supply, receiving credit $100. (Hint: Credit Supplies.)
     18  Received freight bills on May 16 purchases from Ruden Freight $500.
     20  Returned merchandise to Younger Company receiving credit $300.
     23  Made sales to DeShazer Bros. $2,400 and to Liu Company $3,600.
     25  Received bill for advertising from Amster Advertising $900.
     26  Granted allowance to Liu Company for merchandise damaged in shipment $200.
     28  Purchased equipment from Rodriquez Supply $500.
(a) Purchases journal: Accounts Payable, Cr. $39,060; Sales column total $12,180
(c) Accounts Receivable $11,980; Accounts Payable $38,160

Journalize transactions in special journals. (SO 1, 2, 3)
PE-4B Selected accounts from the chart of accounts of Litke Company are shown below.
101 Cash                      201 Accounts Payable
112 Accounts Receivable       401 Sales
120 Merchandise Inventory     414 Sales Discounts
126 Supplies                  505 Cost of Goods Sold
140 Land                      610 Advertising Expense
145 Buildings
The cost of all merchandise sold was 70% of the sales price. During October, Litke Company completed the following transactions.
Oct.  2  Purchased merchandise on account from Camacho Company $16,500.
      4  Sold merchandise on account to Enos Co. $7,700, invoice no. 204, terms 2/10, n/30.
      5  Purchased supplies for cash $80.
      7  Made cash sales for the week totaling $9,160.
      9  Paid in full the amount owed Camacho Company less a 2% discount.
     10  Purchased merchandise on account from Finn Corp. $3,500.
     12  Received payment from Enos Co. for invoice no. 204.
     13  Returned $210 worth of damaged goods purchased on account from Finn Corp. on October 10.
     14  Made cash sales for the week totaling $8,180.
     16  Sold a parcel of land for $27,000 cash, the land's original cost.
     17  Sold merchandise on account to G. Richter & Co. $5,350, invoice no. 205, terms 2/10, n/30.
     18  Purchased merchandise for cash $2,125.
     21  Made cash sales for the week totaling $8,200.
     23  Paid in full the amount owed Finn Corp. for the goods kept (no discount).
     25  Purchased supplies on account from Robinson Co. $260.
     25  Sold merchandise on account to Hunt Corp. $5,220, invoice no. 206, terms 2/10, n/30.
     25  Received payment from G. Richter & Co. for invoice no. 205.
     26  Purchased for cash a small parcel of land and a building on the land to use as a storage facility. The total cost of $35,000 was allocated $21,000 to the land and $14,000 to the building.
     27  Purchased merchandise on account from Kudro Co. $8,500.
     28  Made cash sales for the week totaling $7,540.
     30  Purchased merchandise on account from Camacho Company $14,000.
     30  Paid advertising bill for the month from the Gazette, $400.
     30  Sold merchandise on account to G. Richter & Co. $4,600, invoice no. 207, terms 2/10, n/30.
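The special-journal problems in this set ask you to foot (total each column) and crossfoot (verify that the debit column totals equal the credit column totals) every journal before posting. A minimal sketch of that check, using the PE-5A cash payments journal figures shown earlier; the dictionary layout is an assumption for illustration:

```python
# Crossfooting check: across a journal's columns, total debits must equal
# total credits. Figures are from the PE-5A cash payments journal (CP1).
debit_columns = {"Other Accounts Dr.": 9_100, "Accounts Payable Dr.": 30_200}
credit_columns = {"Merchandise Inventory Cr.": 234, "Cash Cr.": 39_066}

total_debits = sum(debit_columns.values())
total_credits = sum(credit_columns.values())
print(total_debits, total_credits)  # both 39300
assert total_debits == total_credits, "journal does not crossfoot"
```

A journal that fails this check contains a footing or recording error and must not be posted to the ledger, which is the whole point of the instruction.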
Litke Company uses the following journals.
1. Sales journal.
2. Single-column purchases journal.
3. Cash receipts journal with columns for Cash Dr., Sales Discounts Dr., Accounts Receivable Cr., Sales Cr., Other Accounts Cr., and Cost of Goods Sold Dr./Merchandise Inventory Cr.
4. Cash payments journal with columns for Other Accounts Dr., Accounts Payable Dr., Merchandise Inventory Cr., and Cash Cr.
5. General journal.
(b) Sales journal $22,870; Purchases journal $42,500; Cash receipts journal Cash, Dr. $72,869; Cash payments journal Cash, Cr. $57,065
Instructions
Using the selected accounts provided:
(a) Record the October transactions in the appropriate journals.
(b) Foot and crossfoot all special journals.
(c) Show how postings would be made by placing ledger account numbers and check marks as needed in the journals. (Actual posting to ledger accounts is not required.)

Journalize in purchases and cash payments journals; post; prepare a trial balance; prove control to subsidiary; prepare adjusting entries; prepare an adjusted trial balance. (SO 1, 2, 3)
PE-5B Presented below are the sales and cash receipts journals for Wyrick Co. for its first month of operations.

SALES JOURNAL                                                            S1
Date     Account Debited   Ref.   Accounts Receivable Dr.   Cost of Goods Sold Dr.
                                  Sales Cr.                 Merchandise Inventory Cr.
Feb.  3  S. Arndt                        5,500                      3,630
      9  C. Boyd                         6,500                      4,290
     12  F. Catt                         8,000                      5,280
     26  M. Didde                        7,000                      4,620
                                        27,000                     17,820

CASH RECEIPTS JOURNAL                                                                  CR1
Date     Account       Ref.   Cash     Sales      Accounts    Sales   Other     Cost of Goods Sold Dr.
         Credited              Dr.     Discounts  Receivable   Cr.    Accounts  Merchandise Inventory Cr.
                                       Dr.        Cr.                 Cr.
Feb.  1  Common Stock        30,000                                   30,000
      2                       6,500                           6,500              4,290
     13  S. Arndt             5,445       55         5,500
     18  Merchandise
         Inventory              150                                      150
     26  C. Boyd              6,500                  6,500
                             48,595       55        12,000    6,500   30,150     4,290

In addition, the following transactions have not been journalized for February 2008.
Feb.  2  Purchased merchandise on account from J. Vopat for $4,600, terms 2/10, n/30.
      7  Purchased merchandise on account from P. Kneiser for $30,000, terms 1/10, n/30.
      9  Paid cash of $1,250 for purchase of supplies.
     12  Paid $4,508 to J. Vopat in payment for $4,600 invoice, less 2% discount.
     15  Purchased equipment for $7,000 cash.
     16  Purchased merchandise on account from J. Nunez $2,400, terms 2/10, n/30.
     17  Paid $29,700 to P. Kneiser in payment of $30,000 invoice, less 1% discount.
     20  Paid cash dividend of $1,100.
     21  Purchased merchandise on account from G. Reedy for $7,800, terms 1/10, n/30.
     28  Paid $2,400 to J. Nunez in payment of $2,400 invoice.
Instructions
(a) Open the following accounts in the general ledger.
101 Cash                                  311 Common Stock
112 Accounts Receivable                   332 Dividends
120 Merchandise Inventory                 401 Sales
126 Supplies                              414 Sales Discounts
157 Equipment                             505 Cost of Goods Sold
158 Accumulated Depreciation-Equipment    631 Supplies Expense
201 Accounts Payable                      711 Depreciation Expense
(b) Purchases journal total $44,800; Cash payments journal Cash, Cr. $45,958
(e) Totals $71,300
(f) Accounts Receivable $15,000; Accounts Payable $7,800
(b) Journalize the transactions that have not been journalized in a one-column purchases journal and the cash payments journal (see Illustration E-15).
(c) Post to the accounts receivable and accounts payable subsidiary ledgers. Follow the sequence of transactions as shown in the problem.
(d) Post the individual entries and totals to the general ledger.
(e) Prepare a trial balance at February 29, 2008.
(f) Determine that the subsidiary ledgers agree with the control accounts in the general ledger.
(g) The following adjustments at the end of February are necessary.
    (1) A count of supplies indicates that $300 is still on hand.
    (2) Depreciation on equipment for February is $200.
    Prepare the adjusting entries and then post the adjusting entries to the general ledger.
(h) Prepare an adjusted trial balance at February 29, 2008.
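The trial balance steps in instructions (e) and (h) reduce to a single check: the sum of the debit balances must equal the sum of the credit balances. A sketch of that check using the Cortez Co. post-closing trial balance shown earlier in PE-6A; the dictionary grouping is an assumption for illustration:

```python
# Trial balance check: total debit balances must equal total credit balances.
# Figures are from the Cortez Co. post-closing trial balance (PE-6A).
debits = {"Cash": 41_500, "Accounts Receivable": 15_000,
          "Notes Receivable": 45_000, "Merchandise Inventory": 23_000,
          "Equipment": 6_450}
credits = {"Accumulated Depreciation-Equipment": 1_500,
           "Accounts Payable": 43_000, "Common Stock": 86_450}

total_debits, total_credits = sum(debits.values()), sum(credits.values())
print(total_debits, total_credits)  # 130950 130950
assert total_debits == total_credits, "trial balance is out of balance"
```

Note what the check does and does not prove: equal totals confirm the ledger is arithmetically in balance, but they cannot catch a posting to the wrong account or an omitted transaction.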
(h) Totals $71,500

PROBLEMS: SET C
Visit the book's website at www.wiley.com/college/weygandt, and choose the Student Companion site, to access Problem Set C.

COMPREHENSIVE PROBLEM: CHAPTERS 3 TO 6 AND APPENDIX E
Packard Company has the following opening account balances in its general and subsidiary ledgers on January 1 and uses the periodic inventory system. All accounts have normal debit and credit balances.

General Ledger
Account                                       January 1
Number   Account Title                   Opening Balance
101      Cash                                    $33,750
112      Accounts Receivable                      13,000
115      Notes Receivable                         39,000
120      Merchandise Inventory                    20,000
125      Office Supplies                           1,000
130      Prepaid Insurance                         2,000
157      Equipment                                 6,450
158      Accumulated Depreciation                  1,500
201      Accounts Payable                         35,000
311      Common Stock                             70,000
320      Retained Earnings                         8,700

Accounts Receivable Subsidiary Ledger      Accounts Payable Subsidiary Ledger
                January 1                                  January 1
Customer        Opening Balance            Creditor        Opening Balance
R. Draves              $1,500              S. Kosko               $ 9,000
B. Hachinski            7,500              R. Mikush               15,000
S. Ingles               4,000              D. Moreno               11,000

Jan.  3  Sell merchandise on account to B. Remy $3,100, invoice no. 510, and J. Fine $1,800, invoice no. 511.
      5  Purchase merchandise on account from S. Yost $3,000 and D. Laux $2,700.
      7  Receive checks for $4,000 from S. Ingles and $2,000 from B. Hachinski.
      8  Pay freight on merchandise purchased $180.
      9  Send checks to S. Kosko for $9,000 and D. Moreno for $11,000.
      9  Issue credit of $300 to J. Fine for merchandise returned.
     10  Summary cash sales total $15,500.
     11  Sell merchandise on account to R. Draves for $1,900, invoice no. 512, and to S. Ingles $900, invoice no. 513. Post all entries to the subsidiary ledgers.
     12  Pay rent of $1,000 for January.
     13  Receive payment in full from B. Remy and J. Fine.
     15  Pay cash dividend of $800.
     16  Purchase merchandise on account from D. Moreno for $15,000, from S. Kosko for $13,900, and from S. Yost for $1,500.
     17  Pay $400 cash for office supplies.
     18  Return $200 of merchandise to S. Kosko and receive credit.
     20  Summary cash sales total $17,500.
     21  Issue $15,000 note to R. Mikush in payment of balance due.
     21  Receive payment in full from S. Ingles. Post all entries to the subsidiary ledgers.
     22  Sell merchandise on account to B. Remy for $3,700, invoice no. 514, and to R. Draves for $800, invoice no. 515.
     23  Send checks to D. Moreno and S. Kosko in full payment.
     25  Sell merchandise on account to B. Hachinski for $3,500, invoice no. 516, and to J. Fine for $6,100, invoice no. 517.
     27  Purchase merchandise on account from D. Moreno for $12,500, from D. Laux for $1,200, and from S. Yost for $2,800.
     28  Pay $200 cash for office supplies.
     31  Summary cash sales total $22,920.
     31  Pay sales salaries of $4,300 and office salaries of $3,600.
Instructions
(a) Record the January transactions in the appropriate journal: sales, purchases, cash receipts, cash payments, and general.
(b) Post the journals to the general and subsidiary ledgers. Add and number new accounts in an orderly fashion as needed.
(c) Prepare a trial balance at January 31, 2008, using a worksheet. Complete the worksheet using the following additional information.
    (1) Office supplies at January 31 total $700.
    (2) Insurance coverage expires on October 31, 2008.
    (3) Annual depreciation on the equipment is $1,500.
    (4) Interest of $30 has accrued on the note payable.
    (5) Merchandise inventory at January 31 is $15,000.
(d) Prepare a multiple-step income statement and a retained earnings statement for January and a classified balance sheet at the end of January.
(e) Prepare and post the adjusting and closing entries.
(f) Prepare a post-closing trial balance, and determine whether the subsidiary ledgers agree with the control accounts in the general ledger.
(c) Trial balance totals $196,820; Adj.
T/B totals $196,975; (d) Net income $9,685; Total assets $126,315; (f) Post-closing T/B totals $127,940

BROADENING YOUR PERSPECTIVE

FINANCIAL REPORTING AND ANALYSIS

Financial Reporting Problem: Mini Practice Set

BYPE-1 (You will need the working papers that accompany this textbook in order to work this mini practice set.)

Bluma Co. uses a perpetual inventory system and both an accounts receivable and an accounts payable subsidiary ledger. Balances related to both the general ledger and the subsidiary ledger for Bluma are indicated in the working papers. Presented below are a series of transactions for Bluma Co. for the month of January. Credit sales terms are 2/10, n/30. The cost of all merchandise sold was 60% of the sales price.

Jan.  3  Sell merchandise on account to B. Richey $3,100, invoice no. 510, and to J. Forbes $1,800, invoice no. 511.
      5  Purchase merchandise from S. Vogel $5,000 and D. Lynch $2,200, terms n/30.
      7  Receive checks from S. LaDew $4,000 and B. Garcia $2,000 after discount period has lapsed.
      8  Pay freight on merchandise purchased $235.
      9  Send checks to S. Hoyt for $9,000 less 2% cash discount, and to D. Omara for $11,000 less 1% cash discount.
      9  Issue credit of $300 to J. Forbes for merchandise returned.
     10  Summary daily cash sales total $15,500.
     11  Sell merchandise on account to R. Dvorak $1,600, invoice no. 512, and to S. LaDew $900, invoice no. 513.
     12  Pay rent of $1,000 for January.
     13  Receive payment in full from B. Richey and J. Forbes less cash discounts.
     14  Pay an $800 cash dividend.
     15  Post all entries to the subsidiary ledgers.
     16  Purchase merchandise from D. Omara $18,000, terms 1/10, n/30; S. Hoyt $14,200, terms 2/10, n/30; and S. Vogel $1,500, terms n/30.
     17  Pay $400 cash for office supplies.
     18  Return $200 of merchandise to S. Hoyt and receive credit.
     20  Summary daily cash sales total $20,100.
     21  Issue $15,000 note, maturing in 90 days, to R. Moses in payment of balance due.
     21  Receive payment in full from S. LaDew less cash discount.
     22  Sell merchandise on account to B. Richey $2,700, invoice no. 514, and to R. Dvorak $1,300, invoice no. 515.
     22  Post all entries to the subsidiary ledgers.
     23  Send checks to D. Omara and S. Hoyt in full payment less cash discounts.
     25  Sell merchandise on account to B. Garcia $3,500, invoice no. 516, and to J. Forbes $6,100, invoice no. 517.
     27  Purchase merchandise from D. Omara $14,500, terms 1/10, n/30; D. Lynch $1,200, terms n/30; and S. Vogel $5,400, terms n/30.
     27  Post all entries to the subsidiary ledgers.
     28  Pay $200 cash for office supplies.
     31  Summary daily cash sales total $21,300.
     31  Pay sales salaries $4,300 and office salaries $3,800.

Instructions
(a) Record the January transactions in a sales journal, a single-column purchases journal, a cash receipts journal as shown on page E8, a cash payments journal as shown on page E14, and a two-column general journal.
(b) Post the journals to the general ledger.
(c) Prepare a trial balance at January 31, 2008, in the trial balance columns of the worksheet. Complete the worksheet using the following additional information.
    (1) Office supplies at January 31 total $900.
    (2) Insurance coverage expires on October 31, 2008.
    (3) Annual depreciation on the equipment is $1,500.
    (4) Interest of $50 has accrued on the note payable.
(d) Prepare a multiple-step income statement and a retained earnings statement for January and a classified balance sheet at the end of January.
(e) Prepare and post adjusting and closing entries.
(f) Prepare a post-closing trial balance, and determine whether the subsidiary ledgers agree with the control accounts in the general ledger.

Exploring the Web

BYPE-2 Great Plains Accounting is one of the leading accounting software packages. Information related to this package is found at its website. Address:, or go to www.wiley.com/college/weygandt

Steps
1. Go to the site shown above.
2. Choose General Ledger. Perform instruction (a).
3.
Choose Accounts Payable. Perform instruction (b).

Instructions
(a) What are three key features of the general ledger module highlighted by the company?
(b) What are three key features of the payables management module highlighted by the company?

CRITICAL THINKING

Decision Making Across the Organization

BYPE-3 Hughey & Payne is a wholesaler of small appliances and parts. Hughey & Payne is operated by two owners, Rich Hughey and Kristen Payne. ...numbered sales invoices. Credit terms are always net/30 days. All parts sales and repair work... Rich and Kristen each make a monthly drawing in cash for personal living expenses. The salaried repairman is paid twice monthly. Hughey & Payne currently has a manual accounting system.

Instructions
With the class divided into groups, answer the following.
(a) Identify the special journals that Hughey & Payne should have in its manual system. List the column headings appropriate for each of the special journals.
(b) What control and subsidiary accounts should be included in Hughey & Payne's manual system? Why?

Communication Activity

BYPE-4 Barb Doane, a classmate, has a part-time bookkeeping job. She is concerned about the inefficiencies in journalizing and posting transactions. Jim Houser is the owner of the company where Barb works. In response to numerous complaints from Barb and others, Jim hired two additional bookkeepers a month ago. However, the inefficiencies have continued at an even higher rate. The accounting information system for the company has only a general journal and a general ledger. Jim refuses to install an electronic accounting system.

Instructions
Now that Barb is an expert in manual accounting information systems, she decides to send a letter to Jim Houser explaining (1) why the additional personnel did not help and (2) what changes should be made to improve the efficiency of the accounting department. Write the letter that you think Barb should send.
Ethics Case

BYPE-5 Roniger Products Company operates three divisions, each with its own manufacturing plant and marketing/sales force. The corporate headquarters and central accounting office are in Roniger, and the plants are in Freeport, Rockport, and Bayport, all within 50 miles of Roniger. Corporate management treats each division as an independent profit center and encourages competition among them. They each have similar but different product lines. As a competitive incentive, bonuses are awarded each year to the employees of the fastest growing and most profitable division.

Jose Molina is the manager of Roniger's centralized computer accounting operation that enters the sales transactions and maintains the accounts receivable for all three divisions. Jose came up in the accounting ranks from the Bayport division, where his wife, several relatives, and many friends still work.

As sales documents are entered into the computer, the originating division is identified by code. Most sales documents (95%) are coded, but some (5%) are not coded or are coded incorrectly. As the manager, Jose has instructed the data-entry personnel to assign the Bayport code to all uncoded and incorrectly coded sales documents. This is done, he says, in order to expedite processing and to keep the computer files current, since they are updated daily. All receivables and cash collections for all three divisions are handled by Roniger as one subsidiary accounts receivable ledger.

Instructions
(a) Who are the stakeholders in this situation?
(b) What are the ethical issues in this case?
(c) How might the system be improved to prevent this situation?

Answers to Self-Study Questions
1. a  2. c  3. a  4. c  5. d  6. b  7. c  8. c

Appendix F: Other Significant Liabilities

STUDY OBJECTIVES
After studying this appendix, you should be able to:
1. Describe the accounting and disclosure requirements for contingent liabilities.
2. Contrast the accounting for operating and capital leases.
3.
Identify additional fringe benefits associated with employee compensation.

In addition to the current and long-term liabilities discussed in Chapter 11, several more types of liabilities may exist that could have a significant impact on a company's financial position and future cash flows. These other significant liabilities will be discussed in this appendix. They are: (a) contingent liabilities, (b) lease liabilities, and (c) additional liabilities for employee fringe benefits (paid absences and postretirement benefits).

CONTINGENT LIABILITIES

STUDY OBJECTIVE 1: Describe the accounting and disclosure requirements for contingent liabilities.

With notes payable, interest payable, accounts payable, and sales taxes payable, we know that an obligation to make a payment exists. But suppose that your company is involved in a dispute with the Internal Revenue Service (IRS) over the amount of its income tax liability. Should you report the disputed amount as a liability on the balance sheet? Or suppose your company is involved in a lawsuit which, if you lose, might result in bankruptcy. How should you report this major contingency? The answers to these questions are difficult, because these liabilities are dependent (contingent) upon some future event. In other words, a contingent liability is a potential liability that may become an actual liability in the future.

How should companies report contingent liabilities? They use the following guidelines:
1. If the contingency is probable (if it is likely to occur) and the amount can be reasonably estimated, the liability should be recorded in the accounts.
2. If the contingency is only reasonably possible (if it could happen), then it needs to be disclosed only in the notes that accompany the financial statements.
3. If the contingency is remote (if it is unlikely to occur), it need not be recorded or disclosed.
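The three guidelines above amount to a simple decision rule on two inputs: how likely the contingency is, and whether its amount can be reasonably estimated. The sketch below expresses that rule; the function and parameter names are illustrative only, not from the text (note that a probable contingency whose amount cannot be estimated is disclosed rather than recorded, as the Disclosure section later explains).

```python
def contingency_treatment(likelihood, reasonably_estimable):
    """Return the required reporting treatment for a contingent liability.

    likelihood: "probable", "reasonably possible", or "remote"
    reasonably_estimable: True if the amount can be reasonably estimated
    """
    if likelihood == "probable" and reasonably_estimable:
        return "record in the accounts"           # guideline 1
    if likelihood in ("probable", "reasonably possible"):
        return "disclose in the notes"            # guideline 2
    return "no recording or disclosure required"  # guideline 3: remote

# A probable, reasonably estimable loss (e.g. a product warranty) is recorded:
print(contingency_treatment("probable", True))  # record in the accounts
```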
Recording a Contingent Liability

Product warranties are an example of a contingent liability that companies should record in the accounts. Warranty contracts result in future costs that companies may incur in replacing defective units or repairing malfunctioning units. Generally, a manufacturer, such as Black & Decker, knows that it will incur some warranty costs. From prior experience with the product, the company usually can reasonably estimate the anticipated cost of servicing (honoring) the warranty.

The accounting for warranty costs is based on the matching principle. The estimated cost of honoring product warranty contracts should be recognized as an expense in the period in which the sale occurs. To illustrate, assume that in 2008 Denson Manufacturing Company sells 10,000 washers and dryers at an average price of $600 each. The selling price includes a one-year warranty on parts. Denson expects that 500 units (5%) will be defective and that warranty repair costs will average $80 per unit. In 2008, the company honors warranty contracts on 300 units, at a total cost of $24,000. At December 31, it is necessary to accrue the estimated warranty costs on the 2008 sales. Denson computes the estimated warranty liability as follows.

Illustration F-1: Computation of estimated product warranty liability
  Number of units sold                     10,000
  Estimated rate of defective units            5%
  Total estimated defective units             500
  Average warranty repair cost                $80
  Estimated product warranty liability    $40,000

The company makes the following adjusting entry. [Equation analysis: L +40,000; SE -40,000 (Exp); Cash Flows: no effect]

Dec. 31  Warranty Expense                     40,000
             Estimated Warranty Liability             40,000
         (To accrue estimated warranty costs)

Denson records those repair costs incurred in 2008 to honor warranty contracts on 2008 sales as shown below. [Equation analysis: A -24,000; L -24,000; Cash Flows: no effect]

Jan. 1-Dec.
31  Estimated Warranty Liability             24,000
        Repair Parts                                  24,000
    (To record honoring of 300 warranty contracts on 2008 sales)

The company reports warranty expense of $40,000 under selling expenses in the income statement. It classifies estimated warranty liability of $16,000 ($40,000 - $24,000) as a current liability on the balance sheet.

In the following year, Denson should debit to Estimated Warranty Liability all expenses incurred in honoring warranty contracts on 2008 sales. To illustrate, assume that the company replaces 20 defective units in January 2009, at an average cost of $80 in parts and labor. The summary entry for the month of January 2009 is: [Equation analysis: A -1,600; L -1,600; Cash Flows: no effect]

Jan. 31  Estimated Warranty Liability          1,600
             Repair Parts                              1,600
         (To record honoring of 20 warranty contracts on 2008 sales)

Disclosure of Contingent Liabilities

When it is probable that a company will incur a contingent liability but it cannot reasonably estimate the amount, or when the contingent liability is only reasonably possible, only disclosure of the contingency is required. Examples of contingencies that may require disclosure are pending or threatened lawsuits and assessment of additional income taxes pending an IRS audit of the tax return. The disclosure should identify the nature of the item and, if known, the amount of the contingency and the expected outcome of the future event. Disclosure is usually accomplished through a note to the financial statements, as illustrated by the following.

YAHOO! INC.
Notes to the Financial Statements

Contingencies. From time to time, third parties assert patent infringement claims against the company. Currently the company is engaged in several lawsuits regarding patent issues and has been notified of a number of other potential patent disputes.
In addition, from time to time the company is subject to other legal proceedings and claims in the ordinary course of business, including claims for infringement of trademarks, copyrights and other intellectual property rights.... The Company does not believe, based on current knowledge, that any of the foregoing legal proceedings or claims are likely to have a material adverse effect on the financial position, results of operations or cash flows. (Illustration F-2: Disclosure of contingent liability)

The required disclosure for contingencies is a good example of the use of the full-disclosure principle. The full-disclosure principle requires that companies disclose all circumstances and events that would make a difference to financial statement users. Some important financial information, such as contingencies, is not easily reported in the financial statements. Reporting information on contingencies in the notes to the financial statements will help investors be aware of events that can affect the financial health of a company.

LEASE LIABILITIES

STUDY OBJECTIVE 2: Contrast the accounting for operating and capital leases.

A lease is a contractual arrangement between a lessor (owner of a property) and a lessee (renter of the property). It grants the right to use specific property for a period of time in return for cash payments. Leasing is big business. U.S. companies leased an estimated $125 billion of capital equipment in a recent year. This represents approximately one-third of equipment financed that year. The two most common types of leases are operating leases and capital leases.

Operating Leases. For example, assume that a sales representative for Western Inc. leases a car from Hertz Car Rental at the Los Angeles airport and that Hertz charges a total of $275.
Western, the lessee, records the rental as follows: [Equation analysis: A -275; SE -275 (Exp); Cash Flows: -275]

Car Rental Expense                    275
    Cash                                      275
(To record payment of lease rental charge)

The lessee may incur other costs during the lease period. For example, in the case above, Western will generally incur costs for gas. Western would report these costs as an expense.

Capital Leases

In most lease contracts, the lessee makes a periodic payment and records that payment in the income statement as rent expense. In some cases, however, the lease contract transfers to the lessee substantially all the benefits and risks of ownership. Such a lease is in effect a purchase of the property. This type of lease is a capital lease. Its name comes from the fact that the company capitalizes the present value of the cash payments for the lease and records that amount as an asset. Illustration F-3 indicates the major difference between operating and capital leases.

Illustration F-3: Types of leases (cartoon: under an operating lease, the lessor has substantially all of the benefits and risks of ownership, "Have it back by 6:00 Sunday"; under a capital lease, the lessee has substantially all of the benefits and risks of ownership, "Only 3 more payments and this baby is ours!")

HELPFUL HINT: A capital lease situation is one that, although legally a rental case, is in substance an installment purchase by the lessee. Accounting standards require that substance over form be used in such a situation.

If any one of the following conditions exists, the lessee must record a lease as an asset, that is, as a capital lease:
1. The lease transfers ownership of the property to the lessee. Rationale: If during the lease term the lessee receives ownership of the asset, the lessee should report the leased asset as an asset on its books.
2. The lease contains a bargain purchase option.
Rationale: If during the term of the lease the lessee can purchase the asset at a price substantially below its fair market value, the lessee will exercise this option. Thus, the lessee should report the lease as a leased asset on its books.
3. The lease term is equal to 75% or more of the economic life of the leased property. Rationale: If the lease term is for much of the asset's useful life, the lessee should report the asset as a leased asset on its books.
4. The present value of the lease payments equals or exceeds 90% of the fair market value of the leased property. Rationale: If the present value of the lease payments is equal to or almost equal to the fair market value of the asset, the lessee has essentially purchased the asset. As a result, the lessee should report the leased asset as an asset on its books.

To illustrate, assume that Gonzalez Company decides to lease new equipment. The lease period is four years; the economic life of the leased equipment is estimated to be five years. The present value of the lease payments is $190,000, which is equal to the fair market value of the equipment. There is no transfer of ownership during the lease term, nor is there any bargain purchase option. In this example, Gonzalez has essentially purchased the equipment. Conditions 3 and 4 have been met. First, the lease term is 75% or more of the economic life of the asset. Second, the present value of cash payments is equal to the equipment's fair market value. Gonzalez records the transaction as follows. [Equation analysis: A +190,000; L +190,000; Cash Flows: no effect]

Leased Asset - Equipment              190,000
    Lease Liability                           190,000
(To record leased asset and lease liability)

The lessee reports a leased asset on the balance sheet under plant assets. It reports the lease liability on the balance sheet as a liability. The portion of the lease liability expected to be paid in the next year is a current liability.
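The classification test illustrated by the Gonzalez example is an any-one-of-four check. The sketch below expresses it; the function and parameter names are hypothetical, not from the text, and the thresholds are the 75% and 90% figures given above.

```python
def is_capital_lease(transfers_ownership, bargain_purchase_option,
                     lease_term_years, economic_life_years,
                     pv_of_payments, fair_market_value):
    """A lease is a capital lease if ANY one of the four conditions holds."""
    return (transfers_ownership                                  # condition 1
            or bargain_purchase_option                           # condition 2
            or lease_term_years >= 0.75 * economic_life_years    # condition 3
            or pv_of_payments >= 0.90 * fair_market_value)       # condition 4

# Gonzalez Company: 4-year term, 5-year life (4/5 = 80% >= 75%),
# PV of payments $190,000 equal to fair market value (100% >= 90%).
print(is_capital_lease(False, False, 4, 5, 190_000, 190_000))  # True
```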
The remainder is classified as a long-term liability.

Most lessees do not like to report leases on their balance sheets. Why? Because the lease liability increases the company's total liabilities. This, in turn, may make it more difficult for the company to obtain needed funds from lenders. As a result, companies attempt to keep leased assets and lease liabilities off the balance sheet by structuring leases so as not to meet any of the four conditions listed above. The practice of keeping liabilities off the balance sheet is referred to as off-balance-sheet financing.

ETHICS NOTE: Accounting standard setters are attempting to rewrite rules on lease accounting because of concerns that abuse of the current standards is reducing the usefulness of financial statements.

ADDITIONAL LIABILITIES FOR EMPLOYEE FRINGE BENEFITS

STUDY OBJECTIVE 3: Identify additional fringe benefits associated with employee compensation.

In addition to the three payroll tax fringe benefits discussed in Appendix D (FICA taxes and state and federal unemployment taxes), employers incur other substantial fringe benefit costs. Indeed, fringe benefits have been growing faster than pay. In a recent year, benefits equaled 38 percent of wages and salaries. While vacations and other forms of paid leave still take the biggest bite out of the benefits pie, as shown in Illustration F-4, medical costs are the fastest-growing item.

Illustration F-4: The fringe benefits pie
  37%  Vacation and other benefits such as parental and sick leaves, child care
  24%  Medical benefits
  23%  Legally required benefits such as Social Security
  13%  Retirement income such as pensions
   3%  Disability and life insurance

We discuss two of the most important fringe benefits (paid absences and postretirement benefits) in this section.

Paid Absences

Employees often are given rights to receive compensation for absences when certain conditions of employment are met.
The compensation may be for paid vacations, sick pay benefits, and paid holidays. When the payment for such absences is probable and the amount can be reasonably estimated, a liability should be accrued for paid future absences. When the amount cannot be reasonably estimated, companies should instead disclose the potential liability. Ordinarily, vacation pay is the only paid absence that is accrued. The other types of paid absences are only disclosed.[1]

To illustrate, assume that Academy Company employees are entitled to one day's vacation for each month worked. If 30 employees earn an average of $110 per day in a given month, the accrual for vacation benefits in one month is $3,300. The liability is recognized at the end of the month by the following adjusting entry. [Equation analysis: L +3,300; SE -3,300 (Exp); Cash Flows: no effect]

Jan. 31  Vacation Benefits Expense            3,300
             Vacation Benefits Payable                3,300
         (To accrue vacation benefits expense)

This accrual is required by the matching principle. Academy would report Vacation Benefits Expense as an operating expense in the income statement, and Vacation Benefits Payable as a current liability in the balance sheet.

Later, when Academy pays vacation benefits, it debits Vacation Benefits Payable and credits Cash. For example, if the above benefits for 10 employees are paid in July, the entry is: [Equation analysis: A -1,100; L -1,100; Cash Flows: -1,100]

July 31  Vacation Benefits Payable            1,100
             Cash                                     1,100
         (To record payment of vacation benefits)

The magnitude of unpaid absences has gained employers' attention. Consider the case of an assistant superintendent of schools who worked for 20 years and rarely took a vacation or sick day. A month or so before she retired, the school district discovered that she was due nearly $30,000 in accrued benefits. Yet the school district had never accrued the liability.

Postretirement Benefits

Postretirement benefits are benefits provided by employers to retired employees for (1) health care and life insurance and (2) pensions.
For many years the accounting for postretirement benefits was on a cash basis. Companies now account for both types of postretirement benefits on the accrual basis.

The cost of postretirement benefits is getting steep. For example, General Motors' pension and health-care costs for retirees in a recent year totaled $6.2 billion, or approximately $1,784 per vehicle produced.

The average American has debt of approximately $10,000 (not counting the mortgage on their home) and has little in the way of savings. What will happen at retirement for these people? The picture is not pretty: people are living longer, the future of Social Security is unclear, and companies are cutting back on postretirement benefits. This situation may lead to one of the great social and moral dilemmas this country faces in the next 40 years. The more you know about postretirement benefits, the better you will understand the issues involved in this dilemma.

[1] The typical U.S. company provides an average of 12 days of paid vacation for its employees, at an average cost of 5% of gross earnings.

POSTRETIREMENT HEALTH-CARE AND LIFE INSURANCE BENEFITS

Providing medical and related health-care benefits for retirees was at one time an inexpensive and highly effective way of generating employee goodwill. This practice has now turned into one of corporate America's most worrisome financial problems. Runaway medical costs, early retirement, and increased longevity are sending the liability for retiree health plans through the roof.

Many companies began offering retiree health-care coverage in the form of Medicare supplements in the 1960s. Almost all plans operated on a pay-as-you-go basis. The companies simply paid the bills as they came in, rather than setting aside funds to meet the cost of future benefits. These plans were accounted for on the cash basis.
But the FASB concluded that shareholders and creditors should know the amount of the employer's obligations. As a result, employers must now use the accrual basis in accounting for postretirement health-care and life insurance benefits.

PENSION PLANS

A pension plan is an agreement whereby an employer provides benefits (payments) to employees after they retire. Over 50 million workers currently participate in pension plans in the United States. The need for good accounting for pension plans becomes apparent when one appreciates the size of existing pension funds. Most pension plans are subject to the provisions of ERISA (Employee Retirement Income Security Act), a law enacted to curb abuses in the administration and funding of such plans.

Three parties are generally involved in a pension plan. The employer (company) sponsors the pension plan. The plan administrator receives the contributions from the employer, invests the pension assets, and makes the benefit payments to the pension recipients (retired employees). Illustration F-5 indicates the flow of cash among the three parties involved in a pension plan.

Illustration F-5: Parties in a pension plan (the employer makes contributions to the plan administrator, e.g. Kear Trust Co., which holds the fund assets, investments and earnings, and pays benefits to the pension recipients)

An employer-financed pension is part of the employees' compensation. ERISA establishes the minimum contribution that a company must make each year toward employee pensions.

The most popular type of pension plan used is the 401(k) plan. A 401(k) plan works as follows: As an employee, you can contribute up to a certain percentage of your pay into a 401(k) plan, and your employer will match a percentage of your contribution. These contributions are then generally invested in stocks and bonds through mutual funds. These funds will grow without being taxed and can be withdrawn beginning at age 59-1/2.
If you must access the funds earlier, you may be able to do so, but a penalty usually occurs along with a payment of tax on the proceeds. Any time you have the opportunity to be involved in a 401(k) plan, you should avail yourself of this benefit!

Companies record pension costs as an expense while the employees are working because that is when the company receives benefits from the employees' services. Generally the pension expense is reported as an operating expense in the company's income statement. Frequently, the amount contributed by the company to the pension plan is different from the amount of the pension expense. A liability is recognized when the pension expense to date is more than the company's contributions to date. An asset is recognized when the pension expense to date is less than the company's contributions to date. Further consideration of the accounting for pension plans is left for more advanced courses.

The two most common types of pension arrangements for providing benefits to employees after they retire are defined-contribution plans and defined-benefit plans.

Defined-Contribution Plan. In a defined-contribution plan, the plan defines the employer's contribution but not the benefit that the employee will receive at retirement. That is, the employer agrees to contribute a certain sum each period based on a formula. A 401(k) plan is typically a defined-contribution plan.

The accounting for a defined-contribution plan is straightforward: The employer simply makes a contribution each year based on the formula established in the plan. As a result, the employer's obligation is easily determined. It follows that the company reports the amount of the contribution required each period as pension expense. The employer reports a liability only if it has not made the contribution in full.

To illustrate, assume that Alba Office Interiors Corp.
has a defined-contribution plan in which it contributes $200,000 each year to the pension fund for its employees. The entry to record this transaction is: [Equation analysis: A -200,000; SE -200,000 (Exp); Cash Flows: -200,000]

Pension Expense                       200,000
    Cash                                      200,000
(To record pension expense and contribution to pension fund)

To the extent that Alba did not contribute the $200,000 defined contribution, it would record a liability. Pension payments to retired employees are made from the pension fund by the plan administrator.

Defined-Benefit Plan. In a defined-benefit plan, the benefits that the employee will receive at the time of retirement are defined by the terms of the plan. Benefits are typically calculated using a formula that considers an employee's compensation level when he or she nears retirement and the employee's years of service. Because the benefits in this plan are defined in terms of uncertain future variables, an appropriate funding pattern is established to ensure that enough funds are available at retirement to meet the benefits promised. This funding level depends on a number of factors such as employee turnover, length of service, mortality, compensation levels, and investment earnings. The proper accounting for these plans is complex and is considered in more advanced accounting courses.

POSTRETIREMENT BENEFITS AS LONG-TERM LIABILITIES

While part of the liability associated with (1) postretirement health-care and life insurance benefits and (2) pension plans is generally a current liability, the greater portion of these liabilities extends many years into the future. Therefore, many companies are required to report significant amounts as long-term liabilities for postretirement benefits.

Before You Go On...

REVIEW IT
1. What is a contingent liability?
2. How are contingent liabilities reported in financial statements?
3. What accounts are involved in accruing and paying vacation benefits?
4.
What basis should be used in accounting for postretirement benefits?

SUMMARY OF STUDY OBJECTIVES

1. Describe the accounting and disclosure requirements for contingent liabilities. If it is probable that the contingency will happen (if it is likely to occur) and the amount can be reasonably estimated, the liability should be recorded in the accounts. If the contingency is only reasonably possible (it could occur), then it should be disclosed only in the notes to the financial statements. If the possibility that the contingency will happen is remote (unlikely to occur), it need not be recorded or disclosed.

2. Contrast the accounting for operating and capital leases. For an operating lease, lease (or rental) payments are recorded as an expense by the lessee (renter). For a capital lease, the lessee records the asset and related obligation at the present value of the future lease payments.

3. Identify additional fringe benefits associated with employee compensation. Additional fringe benefits associated with wages are paid absences (paid vacations, sick pay benefits, and paid holidays), postretirement health care and life insurance, and pensions. The two most common types of pension arrangements are a defined-contribution plan and a defined-benefit plan.

GLOSSARY

Capital lease: A contractual arrangement that transfers substantially all the benefits and risks of ownership to the lessee so that the lease is in effect a purchase of the property. (p. F4)
Contingent liability: A potential liability that may become an actual liability in the future. (p. F1)
Defined-benefit plan: A pension plan in which the benefits that the employee will receive at retirement are defined by the terms of the plan. (p. F8)
Defined-contribution plan: A pension plan in which the employer's contribution to the plan is defined by the terms of the plan. (p. F8)
Lease: A contractual arrangement between a lessor (owner of a property) and a lessee (renter of the property). (p. F3)
Operating lease  A contractual arrangement giving the lessee temporary use of the property, with continued ownership of the property by the lessor. (p. F3).
Pension plan  An agreement whereby an employer provides benefits to employees after they retire. (p. F7).
Postretirement benefits  Payments by employers to retired employees for health care, life insurance, and pensions. (p. F6).

SELF-STUDY QUESTIONS

Answers are at the end of the appendix.

1. (SO 1) A contingency should be recorded in the accounts when:
a. It is probable the contingency will happen but the amount cannot be reasonably estimated.
b. It is reasonably possible the contingency will happen and the amount can be reasonably estimated.
c. It is reasonably possible the contingency will happen but the amount cannot be reasonably estimated.
d. It is probable the contingency will happen and the amount can be reasonably estimated.

2. (SO 1) At December 31, Anthony Company prepares an adjusting entry for a product warranty contract. Which of the following accounts are included in the entry?
a. Warranty Expense.
b. Estimated Warranty Liability.
c. Repair Parts/Wages Payable.
d. Both (a) and (b).

3. (SO 2) Lease A does not contain a bargain purchase option, but the lease term is equal to 90 percent of the estimated economic life of the leased property. Lease B does not transfer ownership of the property to the lessee by the end of the lease term, but the lease term is equal to 75 percent of the estimated economic life of the leased property. How should the lessee classify these leases?

         Lease A            Lease B
    a.   Operating lease    Capital lease
    b.   Operating lease    Operating lease
    c.   Capital lease      Capital lease
    d.   Capital lease      Operating lease

4. (SO 3) Which of the following is not an additional fringe benefit?
a. Salaries.
b. Paid absences.
c. Paid vacations.
d. Postretirement pensions.

F10  Appendix F  Other Significant Liabilities

QUESTIONS

1. What is a contingent liability?
Give an example of a contingent liability that is usually recorded in the accounts.
2. Under what circumstances is a contingent liability disclosed only in the notes to the financial statements? Under what circumstances is a contingent liability neither recorded in the accounts nor disclosed in the notes to the financial statements?
3. (a) What is a lease agreement? (b) What are the two most common types of leases? (c) Distinguish between the two types of leases.
4. Orbison Company rents a warehouse on a month-to-month basis for the storage of its excess inventory. The company periodically must rent space when its production greatly exceeds actual sales. What is the nature of this type of lease agreement, and what accounting treatment should be accorded it?
5. Costello Company entered into an agreement to lease 12 computers from Estes Electronics Inc. The present value of the lease payments is $186,300. Assuming that this is a capital lease, what entry would Costello Company make on the date of the lease agreement?
6. Identify three additional types of fringe benefits associated with employees' compensation.
7. Often during job interviews, the candidate asks the potential employer about the firm's paid-absences policy. What are paid absences? How are they accounted for?
8. What are the two types of postretirement benefits? During what years does the FASB advocate expensing the employer's costs of these postretirement benefits?
9. What basis of accounting for the employer's cost of postretirement health-care and life insurance benefits has been used by most companies, and what basis does the FASB advocate in the future? Explain the basic difference between these methods in recognizing postretirement benefit costs.
10. Identify the three parties in a pension plan. What role does each party have in the plan?
11. Brenna Ottare and Caitlin Wilkes are reviewing pension plans. They ask your help in distinguishing between a defined-contribution plan and a defined-benefit plan.
Explain the principal difference to Brenna and Caitlin.

Go to the book's website, www.wiley.com/college/weygandt, for Additional Self-Study Questions.

BRIEF EXERCISES

BEF-1  Prepare adjusting entry for warranty costs. (SO 1)
On December 1, Vina Company introduces a new product that includes a 1-year warranty on parts. In December 1,000 units are sold. Management believes that 5% of the units will be defective and that the average warranty costs will be $60 per unit. Prepare the adjusting entry at December 31 to accrue the estimated warranty cost.

BEF-2  Prepare entries for operating and capital leases. (SO 2)
Prepare the journal entries that the lessee should make to record the following transactions.
1. The lessee makes a lease payment of $80,000 to the lessor in an operating lease transaction.
2. Zander Company leases a new building from Joel Construction, Inc. The present value of the lease payments is $900,000. The lease qualifies as a capital lease.

BEF-3  Record estimated vacation benefits. (SO 3)
In Alomar Company, employees are entitled to 1 day's vacation for each month worked. In January, 50 employees worked the full month. Record the vacation pay liability for January assuming the average daily pay for each employee is $120.

EXERCISES

EF-1  Record estimated liability and expense for warranties. (SO 1)
Boone Company sells automatic can openers under a 75-day warranty for defective merchandise. Based on past experience, Boone Company estimates that 3% of the units sold will become defective during the warranty period. Management estimates that the average cost of replacing or repairing a defective unit is $15. The units sold and units defective that occurred during the last 2 months of 2006 are as follows.

    Month        Units Sold    Units Defective Prior to December 31
    November       30,000                    600
    December       32,000                    400

Instructions
(a) Determine the estimated warranty liability at December 31 for the units sold in November and December.
(b) Prepare the journal entries to record the estimated liability for warranties and the costs (assume actual costs of $15,000) incurred in honoring 1,000 warranty claims.
(c) Give the entry to record the honoring of 500 warranty contracts in January at an average cost of $15.

EF-2  Prepare the current liabilities section of the balance sheet. (SO 1)
Larkin Online Company has the following liability accounts after posting adjusting entries: Accounts Payable $63,000, Unearned Ticket Revenue $24,000, Estimated Warranty Liability $18,000, Interest Payable $8,000, Mortgage Payable $120,000, Notes Payable $80,000, and Sales Taxes Payable $10,000. Assume the company's operating cycle is less than 1 year, ticket revenue will be earned within 1 year, warranty costs are expected to be incurred within 1 year, and the notes mature in 3 years.
Instructions
(a) Prepare the current liabilities section of the balance sheet, assuming $40,000 of the mortgage is payable next year.
(b) Comment on Larkin Online Company's liquidity, assuming total current assets are $300,000.

EF-3  Prepare journal entries for operating lease and capital lease. (SO 2)
Presented below are two independent situations.
1. Speedy Car Rental leased a car to Rundgren Company for 1 year. Terms of the operating lease agreement call for monthly payments of $500.
2. On January 1, 2008, Miles Inc. entered into an agreement to lease 20 computers from Halo Electronics. The terms of the lease agreement require three annual rental payments of $40,000 (including 10% interest) beginning December 31, 2008. The present value of the three rental payments is $99,474. Miles considers this a capital lease.
Instructions
(a) Prepare the appropriate journal entry to be made by Rundgren Company for the first lease payment.
(b) Prepare the journal entry to record the lease agreement on the books of Miles Inc. on January 1, 2008.

EF-4  Bunill Company has two fringe benefit plans for its employees:
1.
It grants employees 2 days' vacation for each month worked. Ten employees worked the entire month of March at an average daily wage of $80 per employee.
2. It has a defined-contribution pension plan in which the company contributes 10% of gross earnings. Gross earnings in March were $30,000. The payment to the pension fund has not been made.
Instructions
Prepare the adjusting entries at March 31. (Prepare adjusting entries for fringe benefits. SO 3)

EXERCISES: SET B
Visit the book's website at www.wiley.com/college/weygandt, and choose the Student Companion site, to access Exercise Set B.

PROBLEMS: SET A

PF-1A  Prepare current liability entries, adjusting entries, and current liabilities section. (SO 1)
On January 1, 2008, the ledger of Shumway Software Company contains the following liability accounts.
    Accounts Payable            $42,500
    Sales Taxes Payable           5,800
    Unearned Service Revenue     15,000
During January the following selected transactions occurred.
Jan. 1   Borrowed $15,000 in cash from Amsterdam Bank on a 4-month, 8%, $15,000 note.
     5   Sold merchandise for cash totaling $10,400, which includes 4% sales taxes.
    12   Provided services for customers who had made advance payments of $9,000. (Credit Service Revenue.)
    14   Paid state treasurer's department for sales taxes collected in December 2007 ($5,800).
    20   Sold 700 units of a new product on credit at $52 per unit, plus 4% sales tax. This new product is subject to a 1-year warranty.
    25   Sold merchandise for cash totaling $12,480, which includes 4% sales taxes.
Instructions
[...] Prepare the current liabilities section of the balance sheet at January 31, 2008. Assume no change in accounts payable.

PF-2A  Analyze three different lease situations and prepare journal entries. (SO 2)
Presented below are three different lease transactions in which Ortiz Enterprises engaged in 2008. Assume that all lease transactions start on January 1, 2008. In no case does Ortiz receive title to the properties leased during or at the end of the lease term.

    Lessor                                       Schoen Inc.   Casey Co.   Lester Inc.
    Type of property                             Bulldozer     Truck       Furniture
    Bargain purchase option                      None          None        None
    Lease term                                   4 years       6 years     3 years
    Estimated economic life                      8 years       7 years     5 years
    Yearly rental                                $13,000       $15,000     $ 4,000
    Fair market value of leased asset            $80,000       $72,000     $27,500
    Present value of the lease rental payments   $48,000       $62,000     $12,000

Instructions
(a) Identify the leases above as operating or capital leases. Explain.
(b) How should the lease transaction with Casey Co. be recorded on January 1, 2008?
(c) How should the lease transactions for Lester Inc. be recorded in 2008?

PROBLEMS: SET B

PF-1B  Prepare current liability entries, adjusting entries, and current liabilities section. (SO 1)
On January 1, 2008, the ledger of Zaur Company contains the following liability accounts.
    Accounts Payable            $52,000
    Sales Taxes Payable           7,700
    Unearned Service Revenue     16,000
During January the following selected transactions occurred.
Jan. 5   Sold merchandise for cash totaling $17,280, which includes 8% sales taxes.
    12   Provided services for customers who had made advance payments of $10,000. (Credit Service Revenue.)
    14   Paid state revenue department for sales taxes collected in December 2007 ($7,700).
    20   Sold 600 units of a new product on credit at $50 per unit, plus 8% sales tax. This new product is subject to a 1-year warranty.
    21   Borrowed $18,000 from UCLA Bank on a 3-month, 9%, $18,000 note.
    25   Sold merchandise for cash totaling $12,420, which includes 8% sales taxes.
Instructions
[...] UCLA Bank note.)
(c) Prepare the current liabilities section of the balance sheet at January 31, 2008. Assume no change in accounts payable.

PF-2B  Analyze three different lease situations and prepare journal entries. (SO 2)
Presented below are three different lease transactions that occurred for Milo Inc. in 2008. Assume that all lease contracts start on January 1, 2008. In no case does Milo receive title to the properties leased during or at the end of the lease term.

    Lessor                                       Gibson Delivery   Eller Co.            Louis Auto
    Type of property                             Computer          Delivery equipment   Automobile
    Yearly rental                                $ 8,000           $ 4,200              $ 3,700
    Lease term                                   6 years           4 years              2 years
    Estimated economic life                      7 years           7 years              5 years
    Fair market value of leased asset            $44,000           $19,000              $11,000
    Present value of the lease rental payments   $41,000           $13,000              $ 6,400
    Bargain purchase option                      None              None                 None

Instructions
(a) Which of the leases above are operating leases and which are capital leases? Explain.
(b) How should the lease transaction with Eller Co. be recorded in 2008?
(c) How should the lease transaction for Gibson Delivery be recorded on January 1, 2008?

PROBLEMS: SET C
Visit the book's website at www.wiley.com/college/weygandt, and choose the Student Companion site, to access Problem Set C.

BROADENING YOUR PERSPECTIVE

FINANCIAL REPORTING AND ANALYSIS

Financial Reporting Problems

BYPF-1  Refer to the financial statements of PepsiCo and the Notes to Consolidated Financial Statements in Appendix A to answer the following questions about contingent liabilities, lease liabilities, and pension costs.
(a) Where does PepsiCo report its contingent liabilities?
(b) What is management's opinion as to the ultimate effect of the various claims and legal proceedings pending against the company?
(c) Where did PepsiCo report the details of its lease obligations? What amount of rent expense from operating leases did PepsiCo incur in 2005? What was PepsiCo's total future minimum annual rental commitment under noncancelable operating leases as of December 31, 2005?
(d) What type of employee pension plan does PepsiCo have?
(e) What is the amount of postretirement benefit expense (other than pensions) for 2005?

BYPF-2  Presented below is the lease portion of the notes to the financial statements of CF Industries, Inc.
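The dollar amounts in the lease problems above rest on two pieces of arithmetic: the present value of an ordinary annuity of rental payments, and the 75%-of-economic-life capitalization test stated in self-study question 3. As a quick cross-check, here is a minimal sketch of that arithmetic; the function names are my own, not the text's, and the 75% cutoff is taken from the self-study question. The $99,474 figure given in EF-3 is the present value of three $40,000 year-end payments discounted at 10%.

```python
# Hedged sketch (not from the text) of the present-value arithmetic
# behind the lease figures; function names are illustrative only.

def pv_ordinary_annuity(payment: float, rate: float, periods: int) -> float:
    """Present value of `periods` equal end-of-period payments at `rate`."""
    return payment * (1 - (1 + rate) ** -periods) / rate

def meets_economic_life_test(lease_term_years: float,
                             economic_life_years: float) -> bool:
    """Capitalization test from self-study question 3: the lease term
    covers at least 75% of the asset's estimated economic life."""
    return lease_term_years >= 0.75 * economic_life_years

# EF-3: three annual $40,000 payments at 10% interest.
print(round(pv_ordinary_annuity(40_000, 0.10, 3)))   # 99474

# PF-2A: Casey Co. truck, 6-year lease on a 7-year life.
print(meets_economic_life_test(6, 7))                # True (capital lease)

# PF-2A: Lester Inc. furniture, 3-year lease on a 5-year life.
print(meets_economic_life_test(3, 5))                # False (operating lease)
```

Applied to the PF-2A data, Casey Co.'s 6-year lease on a 7-year-life truck meets the 75% test, while Lester Inc.'s 3-year lease on 5-year-life furniture does not, which is consistent with recording the Casey Co. lease as a capital lease and the Lester Inc. payments as rent expense.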
CF INDUSTRIES, INC.
Notes to the Financial Statements

Leases
The present value of future minimum capital lease payments and the future minimum lease payments under noncancelable operating leases at December 31, 2006, are:

    (in millions)                     Capital Lease    Operating Lease
                                        Payments          Payments
    2007                                $ 7,733           $3,067
    2008                                  6,791            2,052
    2009                                  6,730            1,056
    2010                                  6,788              918
    2011                                  6,785               86
    Thereafter                           13,441                6
    Future minimum lease payments        48,268           $7,185
    Less: Equivalent interest            11,391
    Present value                        36,877
    Less: Current portion                 5,570
                                        $31,307

Rent expense for operating leases was $7.0 million for the year ended December 31, 2006, $5.3 million for 2005, and $5.6 million for 2004.

Instructions
What type of leases does CF Industries, Inc. use? What is the amount of the current portion of the capital lease obligation?

CRITICAL THINKING

Decision Making Across the Organization

BYPF-3  Presented below is the condensed balance sheet for Express, Inc. as of December 31, 2008.

    EXPRESS, INC.
    Balance Sheet
    December 31, 2008

    Current assets    $  800,000      Current liabilities      $1,200,000
    Plant assets       1,600,000      Long-term liabilities       700,000
                                      Common stock                400,000
                                      Retained earnings           100,000
    Total             $2,400,000      Total                    $2,400,000

Express has decided that it needs to purchase a new crane for its operations. The new crane costs $900,000 and has a useful life of 15 years. However, Express's bank has refused to provide any help in financing the purchase of the new equipment, even though Express is willing to pay an above-market interest rate for the financing. The chief financial officer for Express, Lisa Colder, has discussed with the manufacturer of the crane the possibility of a lease agreement. After some negotiation, the crane manufacturer agrees to lease the crane to Express under the following terms: length of the lease 7 years; payments $100,000 per year. The present value of the lease payments is $548,732. The board of directors at Express is delighted with this new lease.
They reason they have the use of the crane for the next 7 years. In addition, Lisa Colder notes that this type of financing is a good deal because it will keep debt off the balance sheet.

Instructions
With the class divided into groups, answer the following.
(a) Why do you think the bank decided not to lend money to Express, Inc.?
(b) How should this lease transaction be reported in the financial statements?
(c) What did Lisa Colder mean when she said leasing will keep debt off the balance sheet?

Answers to Self-Study Questions
1. d   2. d   3. c   4. a

PHOTO CREDITS

Chapter 1  Page 3: Dinodia Images/Alamy Limited. Page 9: Hai Wen China Tourism Press/Getty Images, Inc. Page 11: Brent Holland/iStockphoto. Page 23: iStockphoto.
Chapter 2  Page 47: NBAE/Getty Images. Page 56: Koichi Kamoshida/AsiaPac/Getty Images, Inc. Page 58: Mike Stewart/Corbis Sygma. Page 70: PhotoDisc, Inc./Getty Images.
Chapter 3  Page 93: Witte Thomas E/Gamma Presse, Inc. Page 96: Kevin Winter/Getty Images, Inc. Page 100: Chris Weeks/Getty Images, Inc. Page 104: iStockphoto.
Chapter 4  Page 143: Brian Bahr/Getty Images, Inc. Page 155: M. Tcherevkoff/Getty Images, Inc. Page 160: Christian Lagereek/iStockphoto. Page 164: Digital Vision. Page 164: Nikki Ward/iStockphoto. Page 165: Brand X/PictureArts. Page 166: iStockphoto. Page 166: iStockphoto.
Chapter 5  Page 195: Stone/Getty Images, Inc. Page 199: Courtesy Morrow Snowboards Inc. Page 205: iStockphoto. Page 213: Victor Prikhoddko/iStockphoto.
Chapter 6  Page 245: Pathaithai Chungyam/iStockphoto. Page 247: Bjorn Kindler/iStockphoto. Page 248: iStockphoto. Page 257: PhotoDisc, Inc./Getty Images. Page 262: Courtesy Samsung Electronics America.
Chapter 7  Page 293: image (c)2000 Artville, Inc. Page 301: iStockphoto. Page 302: Barbara Nessim/Stock Illustration Source/Images.com. Page 310: Steve Forney/SUPERSTOCK. Page 312: Olney Vasan/Stone/Getty Images.
Chapter 8  Page 339: Valerie Loiseleux/iStockphoto. Page 343: Gianni Dagli Orti/Corbis Images. Page 344: Terence John/Retna. Page 346: Nick Koudis/AFP/Getty Images. Page 357: Ingvald Kaldhussaeter/iStockphoto.
Chapter 9  Page 385: Jorg Greuel/AFP/Getty Images. Page 388: Alice Millikan/iStockphoto. Page 394: Joe Polillio/Getty Images, Inc. Page 397: Michael Braun/iStockphoto. Page 402: Jamie Evans/iStockphoto.
Chapter 10  Page 425: David Trood/Getty Images, Inc. Page 429: iStockphoto. Page 438: AFP/Getty Images. Page 445: Andy Lions/Photonica/Getty Images, Inc.
Chapter 11  Page 473: Cary Westfall/iStockphoto. Page 478: Catherine dee Auvil/iStockphoto. Page 486: iStockphoto. Page 494: Greg Nicholas/iStockphoto. Page 495: Corbis Stock Market.
Chapter 12  Page 533: David Young-Wolf/PhotoEdit. Page 537: Reuters NewMedia Inc/Corbis Images. Page 541: Brandon Laufenberg/iStockphoto. Page 548: Alex Fevzer/Corbis Images. Page 555: Tomasz Resiak/iStockphoto. Page 561: Arpad Benedek/iStockphoto.
Chapter 13  Page 595: Warner Bros. David James/The Kobal Collection, Ltd. Page 608: John Lamb/Stone/Getty Images, Inc.
Chapter 14  Page 637: Rudi Von Briel/PhotoEdit. Page 641: Elle Wagner and Lisa Gee/John Wiley & Sons. Page 644: Corbis Digital Stock. Page 655: PhotoDisc, Inc./Getty Images.
Chapter 15  Page 697: Jeremy Edwards/iStockphoto. Page 700: Don Wilkie/iStockphoto. Page 707: Nora Good/Masterfile. Page 715: Royalty-Free/Corbis Images. Page 720: Martina Misar/iStockphoto. Page 724: iStockphoto. Page 724: iStockphoto.

COMPANY INDEX

A  ABC, 445 Ace Hardware, 271 Adelphia, 10 Advanced Micro, 540 AIG, 8 Alcatel-Alsthom, 298 Alliance Atlantis Communications Inc., 676 Altria Group, 470, 603, 634 Aluminum Company of America (Alcoa), 581 Amazon.com, 560, 696 America Bank, 412 American Airlines, 102, 479 American Cancer Society, 534 American Eagle Outfitters, 350 American Express, 396, 473 American Standard, 707 America Online (AOL), 470, 595, 597 Anaheim Angels, 604 AOL Time Warner, 597 Apple Computer, 6, 115, 298, 443, 715 Arthur Andersen, 537 AT&T, 4, 603 Avis, 425, 431, 604
B  Babies R Us, 604 BankAmerica, 314 Bank of America, 11 Bank One Corporation, 70 Batten Ltd., 325–326 Baylor University, 514 Berkshire Hathaway, 470 Best Buy, 9, 104, 140 Bill and Melinda Gates Foundation, 29, 534 Black & Decker Manufacturing Company, 255 Boeing Capital Corporation, 429 Boeing Company, 440, 470, 485, 552, 710 Boise Cascade, 434 Book-of-the-Month Club, 595 Breyer, 470 Bristol-Myers Squibb, 215, 255, 722 Budget, 425
C  Cadbury-Schweppes, 10 Campbell Soup Company, 255, 433, 707 Capital Cities/ABC, Inc., 604 Cargill Inc., 535 Caterpillar Inc., 244–246, 257, 480, 481, 535 Caterpillar Logistics Services, Inc., 245 Cendant Corp., 314, 604 Century 21, 604 Chase, 70 Chevron, 434 Cisco Systems, 155, 193, 289, 404, 470, 722 Citibank, 409 Citigroup, 11 CNN, 595 Coca-Cola Amatil Limited, B2 The Coca-Cola Company, 3, 5, 10, 11, 42, 43, 87, 100, 137, 163, 191, 215, 240, 243, 289, 333, 381, 421, 468, 470, 480, 528, 589, 618, 632, 692, 744, B1–B4 Coca-Cola Enterprises Inc., B2 Coca-Cola FEMSA, S.A.
de C.V., B2 Coca-Cola Hellenic Bottling Company S.A., B2 Coldwell Banker, 604 Columbia Sportswear Company, 676 Computer Associates International, 106 ConAgra Foods, 212 Consolidated Edison, 711 Continental Bank, 429 Costco Wholesale Corp., 641, D1 Craig Consumer Electronics, 249 Crane Company, 551 Cypress Semiconductor Corporation, 676 D DaimlerChrysler Corporation, 296, 491 Dairy Queen, 470 DeKalb Genetics Corporation, 626 Dell Computer, 60, 247, 298, 605 Dell Financial Services, 429 Delta Air Lines, 23, 45, 95, 102, 440 Discover, 395 Disney Company, see The Walt Disney Company Disneyland, 604 DisneyWorld, 604 Dun & Bradstreet, 309, 699 Dunkin Donuts, 23, 45 DuPont, 484, 485 Dynegy, Inc., 644, 694 E EarthLink, 552 Eastman Kodak Company, 346, 363, 606, 638 eBay, 357 Enron, 8, 29, 213, 301, 314, 340, 537, 591, 722, 746 ESPN, 445, 470, 604 Este Lauder Companies, Inc., 724725 ExxonMobil, 11, 296, 298, 470, 547 F Fannie Mae, 70, 108 Fidelity Investments, 47, 48 First National Bank, 12 Florida Citrus Company, 718 Ford Motor Company, 4, 11, 198, 258, 296, 532536, 547 Frito-Lay, 315, A9, A10, A12, A13 G GE, see General Electric General Dynamics, 745 General Electric (GE), 7, 10, 204, 213, 298, 301, 341, 470, 535, 597 General Mills, 433 General Motors (GM), 67, 11, 194, 296, 302, 410, 538, 650, 695, 721, 728 Global Crossing, 301, 340 GM, see General Motors Goldman Sachs, 11 Google, 29, 534, 540 Gulf Oil, 538 H Harley-Davidson, 215 Harolds Club, 301, 335 HBO, 595 HealthSouth, 8 Hershey Foods Corp., 552 Hertz, 425, F3 Hewlett-Packard, 298 Hilton, 429 Home Depot, 4, 247, 271, 428 Howard Johnson, 604 I IBM, 71, 213, 242, 298, 535, 539n.2 Imaginarium, 604 Intel Corporation, 325, 535, 555, 560 InterContinental, 429 International Harvester, 3 IT&T, 2 J J. Crew, 350 J.C. Penney Company, Inc., 349, 387, 412413, 641, 698700, 704715 John Deere Capital Corporation, 429 Johnson & Johnson, 310, 325 J.P. 
Morgan Leasing, 429 K Kellogg Company, 12, 29, 495, 560, 565, 566 Kids R Us, 604 Kmart, 196, 310, 699, 710, 717 Kodak, see Eastman Kodak Company Kohls Corporation, 641 KPMG LLP, A27, A29 Kraft Foods, Inc., 597, 603, 634 Krispy Kreme Doughnuts, 215 Kroger Stores, 198, 255, 310, 710, 711 K2, Inc., 164 L Leslie Fay Cos., 248, 290 The Limited, 296 Limited Brands, 100 Little, Brown & Co., 595 L.L. Bean, 296 Lockheed Martin Corporation, 155, 440, 561 Long Beach City College, 334 Lotus, 71 Lucent Technologies, 314 M McDonalds Corporation, 11, 443, 455, 470, 493, 534 McKesson Corporation, 196, 290 Major League Baseball Players Association, 7 Marriott, 429, 433 Marshall Farms, 626 Massachusetts General Hospital, 11 MasterCard, 395397, 412, 416 Merck, 310 Merrill Lynch, 11 Microsoft Corporation, 6, 11, 56, 89, 204, 295, 302, 443, 470, 547, 555, 636637, 655, 728 Mighty Ducks, 604 Minnesota Mining and Manufacturing Company, see 3M Mitsubishi Motors, 402, 423 Moodys, 529, 699 Morgan Stanley, 608 Morrow Snowboards, Inc., 199 Motorola, 255, 325, 720 N NationsBank, 314 NBC, 470 New York Stock Exchange, 609 Nike, Inc., 4, 100, 542, 552, 553, 558, 705706 Nordstrom, Inc., 166, 397, 423, 732733 Nortel Networks, 394, 715 North American Van Lines, 542 Northern Virginia Community College, 11 Northwest Airlines, 94, 166 I-1 I-2 Company Index Regions Financial Corp., 567 Rent-A-Wreck, 425428, 431, 433, 443, 445, 448, 470 Rent-Way Inc., 293 Republic Carloading, 160 Reynolds Company, 653654 Rhino Foods, Inc., 142144 Robert Half and Co., 30 Royal Dutch/Shell Group, 442, 445 S Safeway, 310, 710 Salvation Army, 534 SAMS CLUB, 261 Samsung Electronics Co., 262, 291 Sears, Roebuck, and Company, 195, 349, 387 Sears Holdings, 296 Shell, see Royal Dutch/Shell Group Snack Ventures Europe, A5 Southern Company, 325 Sports Illustrated, 479 Springfield ReManufacturing Corporation, 24 Standard & Poors, 699 Starbucks, 29, 255, 695 Stephanies Gourmet Coffee and More, 338341, 344, 345 Sunbeam Corporation, 
243, 292293 Sunset Books, 595 SunTrust Banks Inc., 346 T Taco Bell, 445, 470 Target Corporation, 196, 247, 355, 641, 739 Tektronix Inc., 561 Texaco Oil Company, 409 3M Company, 411, 518 Tiffany & Co., 710 Time-Life Books, 595 Time Warner, Inc., 7, 164, 298, 470, 547, 594597, 601, 603 TNT, 595 Toys R Us, Inc., 325, 346, 604 Trek, 11 True Value Hardware, 247 Turner Broadcasting, 597, 601, 603 Twentieth Century Fox, 96 Tyco, 340 U U.S. Olympic Committee, 71 United Airlines, 7, 102, 479, 638 United Stationers, 196 USAir, 491 US Bancorp Equipment Finance, 429 USX Corp., 491 V Veritas Software, 71 Verizon Communications, 310 Visa, 395397, 406, 409, 411, 413, 419 W Walgreen Drugs, 196, 255 Wall Street Journal, 8, 537, 541 Wal-Mart Stores, Inc., 11, 58, 90, 194196, 199, 200, 204, 247, 248, 261262, 271, 291, 310, 349, 355, 555, 641, 710, 739, A11 The Walt Disney Company, 7, 23, 45, 95, 295, 470, 560, 603, 604 Warner Bros., 595 Waste Management Company, 70 Wells Fargo Bank, 340 Wendys International, 255, 470 Weyerhaeuser Co., 718 Whirlpool, 707 Whitehall-Robins, 384385, 393 WorldCom, Inc., 8, 29, 93, 301, 314, 340, 438, 470, 644, 694, 722 X Xerox, 93 Y Yahoo! Inc., 163, 696, F3 Yale Express, 160, 193 YUM! 
Brands, 470, A22 O Office Depot, 196 Oracle Corporation, 655 Owens-Illinois, 446, 447 P PACE Membership Warehouse, 717 PayLess Drug Stores Northwest, 717 PayPal, 357 PepsiAmericas, A20 Pepsi Bottling Group, A19A20, A22, A23 PepsiCo, Inc., 36, 10, 13, 4243, 45, 53, 8687, 90, 95, 100, 104, 137, 140, 168, 190191, 193, 213, 214, 239240, 243, 257, 258, 288289, 291, 298, 315, 333, 336, 365, 380381, 383, 394, 421, 423, 430, 468, 471, 480, 481, 492, 528, 531, 534, 541, 546, 550, 551, 559, 565, 588589, 592, 604, 632, 635, 642, 692, 695, 743744, 747, A1A30, F13 PepsiCo Beverages North America, A9, A10, A12, A13 PepsiCo International, A9, A10, A12, A13 Pfizer, 310 P&G, see Procter & Gamble Company Philip Morris, 470, 597, 603 Pizza Hut, 470 Planet Hollywood, 514 PNC Financial Services Group Inc., 567 Policy Management Systems, 292 Procter & Gamble Company (P&G), 11, 446, 447, 470, 542, 720 Prudential Real Estate, 11 Q Quaker, A24 Quaker Foods, 257, A9, A10, A12, A13, A30 Qualcomm, 538 Qwest, 310 R Radio Shack, 71, 90 Ramada Inn, 604 Red Cross, 29 Reebok International Ltd., 255, 541, 548 SUBJECT INDEX A Absences, paid, F6 Accelerated-depreciation method, 435 Account(s), 4853 chart of, 6061 control, E1E2 T, 48 three-column form of, 58 Accounting: basic activities of, 45 career opportunities in, 2930 Accounting cycle: optional steps in, 162164 required steps in, 161162 Accounting cycle tutorial adjusting entries, 97 preparing financial statements and closing the books, 148 recording process, 61 Accounting data, users of, 67 Accounting principle, changes in, 720 Accounts payable subsidiary ledger, E1 Accounts receivable, 386398 defined, 386 disposing of, 395398 recognizing, 387 types of, 386 valuing, 388395 Accounts receivable subsidiary ledger, E1 Accounts receivable turnover ratio, 403404. 
See also Receivables turnover Accruals, adjusting entries for, 97, 105110 expenses, accrued, 106109 revenues, accrued, 105106 Accrual-basis accounting, cash-basis vs., 95 Accrued expenses, 106109 Accrued interest, 107108 Accrued revenues, 105106 Acid-test (quick) ratio, 707708 Additional paid-in capital, 564 Additions and improvements, 438 Adjustable-rate mortgages, 493 Adjusted trial balance: preparation of, 112 preparing financial statements from, 113114, 116 Adjusting entries, 97111 for accruals, 105110 expenses, accrued, 106109 revenues, accrued, 105106 classes of, 9798 for deferrals, 98105 prepaid expenses, 98102 unearned revenues, 102103 example of journalizing/posting, 110111 for merchandising operations, 207 preparing, from worksheets, 148, 150, 152, 154 purpose of, 97 Administrative expenses, 211 Affiliated (subsidiary) company, 602 Agents: collection, 476 of corporations, 535 Aging schedule, 393 Aging the accounts receivable, 393 Allowance for Doubtful Accounts, 390, 393 Allowance method, 389394 Alternative accounting methods, 721722 Amortization, 443 of bonds, 506513 straight-line method, 509513 Annual report(s), 71, A1 Annuity(-ies): defined, C5, C10 future value of an, C5C7 present value of an, 502503, C10C12, C16 Assets, 12 depreciable, 431 in double-entry system, 49 return on, 311, 711 Asset turnover ratio, 446447, 710711 Assumptions, accounting, 911, 298299 Auditing, as area of public accounting, 29 Auditors, internal, 345346 Authorized stock, 540 Auto loans, calculating, C17 Available-for-sale securities, 605, 607608 Averages, industry, 699 Average collection period, 404 Average-cost method, 254255, 267268 B Bad Debts Expense, 388, 391 Balance sheet, 2123. 
See also Classified balance sheet consolidated, 615618 effects of cost flow methods on, 257 effects of inventory errors on, 260261 horizontal analysis of, 700701 investments on, 608609 stockholders equity section of, 564565 vertical analysis of, 703 Bank(s), 355362 deposits to, 355 and writing checks, 355357 Bank accounts, reconciling, 359362 Banking, investment, 540 Bank reconciliation, 355, 359362 entries from, 361362 example of, 360361 procedure for, 359360 Bank service charges, 357 Bank statements, 357358 Basic accounting equation, 1113 expansion of, 5253 using, 1420 Bearer (coupon) bonds, 484 Best-efforts contracts, 540n.3 Blank, Arthur, 4 Bond(s), 482492, 500513 amortization of, 506513 effective-interest method, 506509 straight-line method, 509513 bearer, 484 callable, 484 conversion of, to common stock, 491492 defined, 482 determining market value of, 486 discounting of, 487488, 503504 issuance of: accounting for, 487490 at discount, 488489 at face value, 487 at premium, 489490 procedures for, 484 premiums on, 487488 present value of, 504505 and present value of annuity, 502503 present value of face value of, 500502 pricing of, 500505 recording acquisition of, 598 recording interest from, 598 recording sale of, 598599 redemption of: at maturity, 491 before maturity, 491 registered, 484 retirement of, 489492 secured, 483 trading of, 484485 Bond discount, 488489 amortization of, 506508, 510511 defined, 488 Bonding, 346 Bond premium, 489490 amortization of, 508509, 511513 defined, 488 Bonuses, D4 Bookkeeping, 5 Book value, 102, 431 Book value per share, 571572 Buildings, 428 Business documents, 54, 203 By-laws, 538 C Calculator, using a, C16C17 Calendar year, 95 Callable bonds, 484 Canceled checks, 357 Capital, 305 ability of corporations to acquire, 535 corporate, 542 paid-in, 542 working, 309310, 481, 707 Capital expenditures, 438 Capital leases, F4F5 Capital stock, 564 Careers, accounting, 2930 Carrying (book) value: of convertible bonds, 491492 defined, 489 
Carrying (book) value method, 492 Cash: defined, 348 net change in: direct method, 671 indirect method, 652-654 reporting, 363, 365-366 restricted, 363 Cash-basis accounting, accrual-basis vs., 95 Cash controls, 347-355 disbursements, 351-355 receipts, 348-351 Cash disbursements journal, see Cash payments journal Cash dividends, 51, 552-555 Cash equivalents, 363 Cash flow(s): classification of, 639-640 free, 654-655 statement of, see Statement of cash flows Cash payments journal, E13-E15 Cash (net) realizable value, 389, 400 Cash receipts journal, E7-E11 Cash register tapes, 203 Cash sales, credit card sales as, 396-397 Castle, Ted, 142-143 CEO (chief executive officer), 536 Certified public accountants (CPAs), 29 Changes in accounting principle, 720 Channel stuffing, 215, 722 Charter, 538 characteristics of, 535-537 classification of, 534-535 defined, 10, 534 formation of, 538 issuance of stock by, 540-542 owners' equity in, 542-543 ownership of, 538-539 Correcting entries, 158-160 Cost(s): depreciable, 432 expired/unexpired, 300 organization, 538 of plant assets, 427-430 research and development, 446 Cost flow assumptions, 251-255, 266-269 Cost method: and stock investments, 600 for valuation of treasury stock, 547 Cost of goods sold: defined, 196 determining, under periodic system, 216-218 and matching principle, 300 Cost principle, 302, 598 Coupon (bearer) bonds, 484 CPAs (certified public accountants), 29 Credit, 49 Credit cards: sales via, 396-397 using, 405 Credit memoranda, 358 Creditors, long- vs. short-term, 698 Creditors' subsidiary ledger, E1 Credit sales, journalizing, E5 Credit terms, 201-202 Cumulative dividend, 551-552 Current assets: on classified balance sheet, 162-163 and current liabilities, 474 Current liabilities, 474-482 changes in, 649 on classified balance sheet, 165-166 and current assets, 474 defined, 474 long-term debt, current maturities of, 479-480 notes payable, 475-476 payroll and payroll taxes payable, 476-478 sales taxes payable, 476 statement presentation/analysis of, 480-482 unearned revenues, 479 Current maturities of long-term debt, 479-480 Current ratio, 309, 706-707 Customers' subsidiary ledger, E1 D Days in inventory, 262 Debenture bonds, 484 Debit, 49 Debit memoranda, 357 Debt investments, 597-598 Debt to total assets ratio, 311-312, 495, 714-715 Declaration date, 553 Declining-balance method, 434-435 Deferrals, adjusting entries for, 97-105 prepaid expenses, 98-102 unearned revenues, 102-103 Deficits, 560 Defined-benefit plans, F8 Defined-contribution plans, F8 Depletion, 442 Deposits, bank, 355 Deposits in transit, 359 Depreciable assets, 431 Depreciable cost, 432 Depreciation: declining-balance method of, 434-435 defined, 101, 430 of plant assets, 430-438 computation, 431-432 and income taxes, 436 methods, 432-436 revisions in estimate of, 436-437 as prepaid expense, 101 straight-line method of, 432-433 units-of-activity method of, 433-434 Depreciation expense, 646-647 Direct method (of preparing statement of cash flows), 644, 665-671 investing/financing activities, 670-671 net change in cash, 671 operating activities, cash provided/used by, 666-670 Direct write-off method, 388-389 Disbursements, cash, 351-355 and EFT system, 352-353 and petty cash fund, 353-355 and voucher system, 351-352 Discontinued operations, 717-718 Discount(s): bonds issued at, 488-489, 506-508, 510-511 purchase, 201-202 sales, 205 Discounting the future amount, C7, C12 Discount period, 202 Dishonored notes, 401-402 Disposal: of accounts receivable, 395-398 of notes receivable, 401-402 of plant assets, 439-441 retirement, 439-440 sale, 440-441 of treasury stock, 548-550 Dividend(s), 51, 558 cash, 552-555 cumulative, 551-552 defined, 13, 552 preferred, 712 stock, 556-558 stock splits, 558-559 Dividends in arrears, 551-552 Documentation procedures, 344 Double-declining-balance method, 435 Double-entry system, 49 Dunlap, "Chainsaw" Al, 292-293 Duties, segregation of, 342-343 E Earnings: gross, D4 statement of, D10 Earning power, 717 Earnings management, 300 Earnings per share (EPS), 307-308, 712-713 Economic entity assumption, 10 Effective-interest amortization method, 506-509 EFT, see Electronic funds transfers Egypt, ancient, 343 Electronic controls, 344, 345 Electronic funds transfers (EFT), 352-353 Employee earnings record, D8 Employee fringe benefits, liabilities for, F5-F8 Employee Retirement Income Security Act (ERISA), F7 Employees: bonding of, 346 hiring of, D2 Employee's Withholding Allowance Certificate (W-4), D6 The End of Work (Jeremy Rifkin), 194 Endorsements, restrictive, 350 Environmental liabilities, 115 EPS, see Earnings per share Equipment, 428-429, 647 Chart of accounts, 60-61 Check(s): canceled, 357 outstanding, 359 paying payroll via, D4 writing, 355-357 Check register, 352 Chief executive officer (CEO), 536 Classified balance sheet, 161-166, 168-170, 305-306 current assets on, 162-163 current liabilities on, 165-166 examples of, 168-170 intangible assets on, 164 long-term investments on, 163 long-term liabilities on, 166 for merchandising operations, 213, 214 property, plant, and equipment on, 164 stockholders' equity on, 166 valuing/reporting of investments on, 610-611 Classified financial statements, 305-308 Classified income statement, 306-308 Closing entries: for merchandising operations, 207 posting of, 153-154, 157-158 preparation of, 151-153, 155-157 Closing the books, 154-161 defined, 154 and posting of closing entries, 157-158 and preparation of closing entries, 155-157 and preparation of post-closing trial balance, 159-161
Collection agents, 476 Collection period, average, 404 Collusion, 347 Common stock, 50, 538-546 cash dividend allocation, 554, 555 issuance of, 540-546 and owners' equity, 542-543 and ownership rights of stockholders, 538-539 par-value vs. no-par-value, 544-546 for services or noncash assets, 545-546 Common stockholders' equity, return on, 311, 566, 711-712 Comparability of accounting information, 296 Comparative analysis, 698-699 Compensating balances, 363 Compound entries, 55-56 Compound interest, C2-C3 Comprehensive income, 610, 720 Computer controls, 344 Conceptual framework, 295 Conservatism, 258, 304 Consigned goods, 249 Consistency, of accounting information, 296 Consistency principle, 257-258 Consolidated balance sheet, 615-618 Consolidated income statement, 618 Constraints, accounting, 303-304 Consumerism, 194, 195 Consumption, 194-195 Contingent liabilities, F1-F3 Continuous life (of corporation), 536 Contra asset accounts, 101, 488 Contracts, best-efforts, 540n.3 Contractual interest rate, 484, 488 Contra-revenue accounts, 204 Contra stockholders' equity account, 547-548 Controls, internal, see Internal control(s) Control accounts, E1-E2 Controller, 536 Controlling interest, 603 Convertible bonds, 484, 491-492 Copyrights, 444 Corporate capital, 542 Corporation(s), 532-543 book value per share of, 571-572 Equity: stockholders', 12-13 trading on the, 712 Equity method, 601-602 ERISA (Employee Retirement Income Security Act), F7 Errors: on bank statements, 359-360 in inventory, 259-261 balance sheet effects, 260-261 income statement effects, 259-260 Ethics: in financial reporting, 8-9 in personal financial reporting, 25 Exchange of intangible assets, 452-454 gain treatment, 453-454 loss treatment, 452-453 Expense(s), 51 accrued, 106-109 administrative, 211 defined, 13 operating, 210-211 prepaid, 98-102, 118-119 selling, 211 Expired costs, 300 External transactions, 14 External users of accounting data, 6 Extraordinary operations, 718-720 F Face value, 488 of bonds, 484, 487 of notes receivable, 400 present value of, 500-502 Factors, 395-396 FAFSA form, 25 Fair value, 605-607 FASB, see Financial Accounting Standards Board Federal Bureau of Investigation (FBI), 4 Federal Insurance Contribution Act (FICA), D5 Federal unemployment taxes, D11-D12 Federal Unemployment Tax Act (FUTA), D11 FICA (Federal Insurance Contribution Act), D5 FICA taxes: employer contribution for, D11 payroll deduction for, D5-D6 FIFO method, see First-in, first-out method Financial Accounting Standards Board (FASB), 9, 294-295, 297, 313 Financial calculator, using a, C16-C17 Financial statement presentation and analysis: of intangible assets, 446-447 Financial statements, 5, 21-24, 294 analysis of, 308-312, 698-726 classified, 305-308 for Coca-Cola Company, B1-B4 current liabilities on, 480-482 and determination of earning power, 717 elements of, 297 and global economy, 312-313 horizontal analysis of, 699-703 inventories on: cost flow methods, 255-257 presentation and analysis, 261-262, 264 irregular items on, 717-720 long-term liabilities on, 494-495, 497-498 for merchandising operations, 209-214 classified balance sheet, 213, 214 multiple-step income statement, 209-213 single-step income statement, 213, 214 operating guidelines for preparation of, 297 for PepsiCo, Inc., A1-A30 preparing, from adjusted trial balance, 113-114, 116 preparing, from worksheets, 148, 152, 153 and quality of earnings, 721-722, 724-726 ratio analysis of, 705-717 receivables on, 403-404 retained earnings on, 564-566 retained earnings statement, 562-563 tools for, 699 vertical analysis of, 703-705 Financing activities, cash inflow/outflow from, 639, 640 direct method, 670-671 indirect method, 651-652 Finished goods inventory, 246 First-in, first-out (FIFO) method, 252-253, 266 Fiscal year, 95 Fixed assets, 426. See also Plant assets Fixed-rate mortgages, 493 FOB (free on board), 200, 248 FOB destination, 200, 248, 249 FOB shipping point, 200, 248, 249 Ford, Henry, 533-534 For Deposit Only, 350 Forensic accounting, 30 Form W-2 (Wage and Tax Statement), D13-D14 Form W-4 (Employee's Withholding Allowance Certificate), D6 Franchises, 445 Free Application for Federal Student Aid (FAFSA) form, 25 Free cash flow, 654-655 Free on board (FOB), 200, 248 Freight costs, 200-201 Fringe benefits, liabilities for, F5-F8 Full disclosure principle, 301, F3 FUTA (Federal Unemployment Tax Act), D11 Future value, C3-C7 of an annuity, C5-C7 of a single amount, C3-C5 G GAAP, see Generally accepted accounting principles Geneen, Harold, 2 General journal, 54, E16-E17 General ledger (ledger), 57-61 Generally accepted accounting principles (GAAP), 9, 294 and allowance method, 389 and alternative accounting methods, 721, 722 and cash-basis accounting, 95 and materiality, 303 Global economy, and financial statement presentation, 312-313 Going concern assumption, 298-299, 431 Goods in transit, 248-249 Goodwill, 445-446 Government, accounting career opportunities in, 30 Government regulation, of corporations, 537 Gross earnings, D4 Gross profit, 210 Gross profit method (for estimating inventories), 270-271 Gross profit rate, 210 H Health insurance, cost of, 496 Held-to-maturity securities, 605 Hiring employees, D2 Home-equity loans, 567 Honor (of notes receivable), 401 Horizontal analysis, 699-703 of balance sheet, 700-701 of income statement, 701-702 of retained earnings statement, 702-703 Human resources (HR), 344, D2 I IASB, see International Accounting Standards Board Identity theft, 364 Imprest system, 353 Improper recognition, 722 Improvements: additions and, 438 land, 427-428 Income: comprehensive, 610, 720 pro forma, 722 Income statement, 21, 22 classified, 306-308 consolidated, 618 effects of cost flow methods on, 255-257 effects of inventory errors on, 259-260 horizontal analysis of, 701-702 for merchandising
operations, 209-214 multiple-step income statement, 209-213 single-step income statement, 213, 214 vertical analysis of, 703-705 Income taxes (income taxation): on classified income statement, 306-307 of corporations, 537 and depreciation of plant assets, 436 effects of cost flow methods on, 257 payroll deduction for, D6 remitting, D13 Independent internal verification, 344-346 Indirect method (of preparing statement of cash flows), 643-654 investing/financing activities, 651-652 net change in cash, 652-654 operating activities, cash provided/used by, 646-650 worksheets, using, 659-664 Industry averages (norms), 699 Information, accounting, 295-297 Insurance, as prepaid expense, 100 Intangible assets, 443-447 accounting for, 443-446 amortization of, 443 on classified balance sheet, 164 copyrights, 444 exchange of, 452-454 gain treatment, 453-454 loss treatment, 452-453 franchises and licenses, 445 goodwill, 445-446 patents, 444 research and development costs, 446 statement presentation/analysis of, 446-447 trademarks and trade names, 444 Intercompany comparisons, 699 Intercompany eliminations, 615, 616, 618 Intercompany transactions, 615, 618 Interest, C1-C3 accrued, 107-108 on checking accounts, 358 compound, C2-C3 defined, C1 on notes receivable, 400 simple, C1-C2 Interest rate, C1 Interim periods, 95 Internal auditors, 345-346 Internal control(s), 340-347 defined, 340 and documentation procedures, 344 and establishment of responsibility, 341, 342 and independent internal verification, 344-346 limitations of, 346-347 for payroll, D1-D4 physical/mechanical/electronic controls, 344, 345 and Sarbanes-Oxley Act, 341 and segregation of duties, 342-343 J JIT (just-in-time) inventory, 247 Johnson, Matthew, 715 Journal, 54-57 Journalizing, 54-55, 67-68, 110-111 Just-in-time (JIT) inventory, 247 K Knight, Phil, 4 L Land, 427 Land improvements, 427-428 Large stock dividend, 556 Last-in, first-out (LIFO) method, 253-254, 267 LCM (lower-of-cost-or-market), 258 Leases, F3-F5 capital, F4-F5 operating, F3-F4 Lease liabilities, F3-F5 Ledger, see General ledger Legal capital, 541 Letter to the stockholders, A2-A3 Leverage, 712 Leveraging, 712 Liabilities, 12, 472-499 contingent, F1-F3 current, 474-482 long-term debt, current maturities of, 479-480 notes payable, 475-476 payroll and payroll taxes payable, 476-478 sales taxes payable, 476 statement presentation/analysis of, 480-482 unearned revenues, 479 in double-entry system, 49 for employee fringe benefits, F5-F8 environmental, 115 lease, F3-F5 long-term, 482-495, 497-498 bonds, 482-492, 500-513 notes payable, long-term, 492-493 statement presentation/analysis of, 494-495, 497-498 Licenses, 445 LIFO conformity rule, 257 LIFO method, see Last-in, first-out method Limited liability, of corporate stockholders, 535 Liquidating dividend, 553 Liquidation preference, 552 Liquidity, 309-310, 481 Liquidity ratios, 706-710 acid-test ratio, 707-708 current ratio, 706-707 inventory turnover, 709-710 receivables turnover, 708-709 Long-term debt, current maturities of, 479-480 Long-term debt due within one year, 480 Long-term investments, 163, 608, 609 Long-term liabilities, 482-495, 497-498 bonds, 482-492, 500-513 on classified balance sheet, 166 notes payable, long-term, 492-493 postretirement benefits as, F8 present value of, C12-C15 statement presentation/analysis of, 494-495, 497-498 Long-term notes payable, 492-493 Lower-of-cost-or-market (LCM), 258 Lucas, George, 96 M MACRS (Modified Accelerated Cost Recovery System), 436 Mail receipts, 350-351 Maker, 398 Management (of corporation), 536 Management consulting, as area of public accounting, 29 Management's discussion and analysis (MD&A), A3 Managerial accounting, 6, 29 Market interest rate, 486, 488 Market value: book value vs., 102, 572 of stock, 541 Marshall, John, 534 Matching principle, 95-96, 300 Materiality (materiality principle), 303, 438 Maturity date (of promissory note), 399 MD&A (management's discussion and analysis), A3 Mechanical controls, 344, 345 Medicare, D5n.1 Merchandising operations, 194-224 completing the accounting cycle for, 206-208 adjusting entries, 207 closing entries, 207 cost of goods sold in, 216-218 financial statements for, 209-214 classified balance sheet, 213, 214 multiple-step income statement, 209-213 single-step income statement, 213, 214 inventory systems in, 197-199 periodic system, 198 perpetual system, 197-198, 219-222 operating cycles in, 196-197 recording purchases of merchandise in, 199-203 freight costs, 200-201 purchase discounts, 201-202 purchase returns and allowances, 201 recording sales of merchandise in, 203-205 sales discounts, 205 sales returns and allowances, 204-205 Merchandising profit, 210 Mintenko, Stephanie, 338-339 MNCs (multinational corporations), 312 Modified Accelerated Cost Recovery System (MACRS), 436 Monetary unit assumption, 10, 298 Mortgage bonds, 483 Mortgage loans, calculating, C17 Mortgage notes payable, 493 Multinational corporations (MNCs), 312 Multiple-step income statement, 209-213 N Natural resources, 442-443 Net change in cash: direct method, 671 indirect method, 652-654 Net pay, D7 Net (cash) realizable value, 389, 400 Net sales, 209-210 Net worth, 167 Noncash activities, significant, 640-641 Noncash current assets, changes in, 647-648 Nonoperating activities, 211-213 No-par-value stock, 542, 544-545 Norms, industry, 699 Normal balance, 50 Notes payable, 475-476 Notes receivable, 398-403 computing interest for, 400 defined, 386 disposing of, 401-402 maturity date of, 399 recognizing, 400 valuing, 400-401 Not-for-profit corporations, 534 NSF (not sufficient funds), 358 Internal Revenue Service (IRS), 436 Internal transactions, 14 Internal users of accounting data, 6 International Accounting Standards Board (IASB), 9, 313 Intracompany comparisons, 699 Inventory(-ies), 244-272 classification of, 246-247 costing of: average-cost method for, 254-255, 267-268 balance sheet effects, 257 and consistency principle, 257 and cost flow assumption, 251-252 FIFO method for, 252-253, 266 financial statement effects, 255-257 LIFO method for, 253-254, 267 lower-of-cost-or-market method for, 258 and quality of earnings, 721 specific identification method for, 250-251 tax effects, 257 days in, 262 determining quantities of, 247-249 and ownership of goods, 248-249 physical inventory, 247-248 errors in, 259-261 balance sheet effects, 260-261 income statement effects, 259-260 estimating, 269-272 gross profit method for, 270-271 retail inventory method for, 271-272 finished goods, 246 just-in-time, 247 in merchandising operations, 197-199, 219-222 periodic inventory system, 198 perpetual inventory systems, 197-198, 219-222, 266-269 statement presentation and analysis of, 261-262, 264 taking, 247-248 theft of, 263 Inventory turnover, 261-262, 709-710 Investee, 600 Investing activities, cash inflow/outflow from, 639, 640 direct method, 670-671 indirect method, 651-652 Investments, 594-614 debt, 598-599 purchase of, by corporations, 596-597 short- vs. long-term, 608-609 stock, 600-605 between 20% and 50%, holdings, 601-602 less than 20%, holdings of, 600-601 more than 50%, holdings of, 602-603 valuing/reporting of, 605-611, 613 available-for-sale securities, 607-608 on balance sheet, 608-609 on classified balance sheet, 610-611 realized/unrealized gain/loss presentation, 609-610, 613 trading securities, 605-607 Investment banking, 540 Investment portfolio, 600 Investments, long-term, see Long-term investments Invoice(s): purchase, 199, 200 sales, 203 Irregular items, 717-720 changes in accounting principle, 720 comprehensive income, 720 discontinued operations, 717-718 extraordinary operations, 718-720 IRS (Internal Revenue Service), 436 O Obsolescence, 431 Off-balance-sheet financing, F5 Open-book management, 3 Operating activities, cash inflow/outflow from, 639, 640 direct method, 666-670 indirect method, 646-650 Operating cycles, in merchandising operations, 196-197 Operating expenses, 210-211, 300 Operating leases, F3-F4 Ordinary repairs, 438 Organization costs, 538 Other expenses and losses, 211 Other receivables, 386
Other revenues and gains, 211 Outstanding checks, 359 Outstanding stock, 548 Over-the-counter receipts, 349-350 P Paid absences, F6 Paid-in capital, 542, 564 Paper (phantom) profit, 256 Parent company, 602-603 Partnerships, 10 Par-value stock, 541, 544, 546 Passwords, computer, 344 Patents, 444 Payee, 398 Payment date (dividends), 554 Payout ratio, 713-714 Payroll, D1-D15 defined, D1 determining, D4-D7 internal control of, D1-D4 recording, D8-D10 Payroll and payroll taxes payable, 476-478 Payroll deductions, D5-D7 for FICA taxes, D5 for income taxes, D6 Payroll register, D8-D9 Payroll taxes, 476, D11-D15 federal unemployment taxes, D11-D12 FICA, D11 filing/remitting, D13-D15 recording, D12-D13 state unemployment taxes, D12 PCAOB (Public Company Accounting Oversight Board), 341 Pension plans, F7-F8 P-E ratio, see Price-earnings ratio Percentage-of-receivables basis, 393-394 Percentage-of-sales basis, 392-393 Periodic inventory system, 198, 219-222 merchandise purchases in, 220-221 merchandise sales in, 221-222 Permanent accounts, 150-151, 154-155 Perpetual inventory system(s), 197-198 inventory cost flow methods in, 266-269 periodic vs., 219-222 Personal annual report, 71 Personal financial reporting, ethics in, 25 Petty cash fund, 353-355 establishment of, 353 making payments from, 353-354 replenishment of, 354-355 Phantom (paper) profit, 256 Physical controls, 344, 345 Pickard, Thomas, 4 Plan administrator (pensions), F7 Plant and equipment, see Plant assets Plant assets, 426-441 buildings, 428 defined, 426 depreciation of, 430-438 computation, 431-432 and income taxes, 436 methods, 432-436 revisions in estimate of, 436-437 determining cost of, 427-430 disposal of, 439-441 retirement, 439-440 sale, 440-441 equipment, 428-429 exchange of, 452-454 gain treatment, 453-454 loss treatment, 452-453 expenditures during useful life of, 438 land, 427 land improvements, 427-428 loss on sale of, 647 Post-closing trial balance, 155-157, 159-161 Posting, 59-60, 67-68, 110-111 Postretirement benefits, F6-F8 Preferred dividend, 712 Preferred stock, 550-552, 554-555 Premium, bonds issued at, 488-490 Prepaid expenses (prepayments), 98-102, 118-119 Present value, C7-C16 of an annuity, 502-503, C10-C12, C16 and bond pricing, 500-505 defined, C7 of a long-term note or bond, C12-C15 and market value of bonds, 486 of a single amount, C8-C10, C15-C16 variables affecting, C7 Present value of 1 factors, C9 Price-earnings (P-E) ratio, 307n.4, 713 Principal, C1 Principle(s) of accounting, 294-295, 299-303 cost principle as, 302 full disclosure as, 301 matching as, 300 revenue recognition as, 299-300 Prior period adjustments, 562 Private accounting, 29. See also Managerial accounting Privately held corporations, 535 Profit: gross, 210 as purpose of corporation, 534 Profitability, 310 Profitability ratios, 710-714 asset turnover, 710-711 earnings per share, 712-713 payout ratio, 713-714 price-earnings ratio, 713 profit margin, 710 return on assets, 711 return on common stockholders' equity, 711-712 Profit margin (profit margin percentage), 310, 710 Pro forma income, 722 Promissory notes, 398 Property, plant, and equipment, 164. See also Plant assets Proprietorships, 10 Public accounting, 29 Public Company Accounting Oversight Board (PCAOB), 341 Publicly held corporations, 534-535 Purchase allowances, 201 Purchase discounts, 201-202 Purchase invoices, 199, 200 Purchase returns, 201 Purchases, recording, 199-203 discounts, 201-202 freight costs, 200-201 returns and allowances, 201 Purchases journal, E11-E13 Purchasing activities, and segregation of duties, 342 Q Quality of earnings, 721-722, 724-726 and alternative accounting methods, 721-722 and improper recognition, 722 and pro forma income, 722 Quick (acid-test) ratio, 707-708 R Ratio analysis, 699, 705-717 liquidity ratios, 706-710 profitability ratios, 710-714 solvency ratios, 714-715 summary of ratios, 716 Raw materials, 246 R&D (research and development) costs, 446 Receipts, cash, 348-351 mail receipts, 350-351 over-the-counter receipts, 349-350 Receivables, 384-408 accounts receivable, 386-398 disposing of, 395-398 recognizing, 387 types of, 386 valuing, 388-395 defined, 386 notes receivable, 398-403 computing interest for, 400 disposing of, 401-402 maturity date of, 399 recognizing, 400 valuing, 400-401 statement presentation/analysis for, 403-404 trade, 386 Receivables turnover, 708-709.
See also Accounts receivable turnover ratio Recessions, inventory fraud during, 260 Recognition, improper, 722 Reconciliation, see Bank reconciliation Record date (dividends), 554 Recording process, 46-74 and accounts, 48-53 illustrated example of, 61-68 for payroll, D8-D10 for payroll taxes, D12-D13 steps in, 53-61 journalizing, 54-57 ledger, transfer to, 57-61 transaction analysis, 15-20 and trial balance, 68-70, 72-73 Registered bonds, 484 Relevance, of accounting information, 296 Reliability, of accounting information, 296 Reporting: of cash, 363, 365-366 ethics in, 8-9 Research and development (R&D) costs, 446 Responsibility, establishment of, 341, 342 Restricted cash, 363 Restrictive endorsements, 350 Retailers, 196 Retail inventory method, 271-272 Retained earnings, 51, 542-543, 560-565 defined, 560 and prior period adjustments, 562 restrictions on, 560-561 statement of, 562-563 statement presentation/analysis of, 564-566 Retained earnings restrictions, 561 Retained earnings statement, 21-23, 562-563 horizontal analysis of, 702-703 statement presentation/analysis of, 564-565, 568 Retirement, of plant assets, 439-440 State income taxes, D6 Statement of cash flows, 21, 22, 24, 638-671 classification of cash flows on, 639-640 direct method of preparing, 644, 665-671 investing/financing activities, 670-671 net change in cash, 671 operating activities, 666-670 evaluating a company using, 654-655, 657-658 format of, 641 indirect method of preparing, 643-654 investing/financing activities, 651-652 net change in cash, 652-654 operating activities, 646-650 worksheets, using, 659-664 preparation of, 642-643 preparing, from worksheets, 659-664 and significant noncash activities, 640-641 usefulness of, 638-639 Statement of earnings, D10 State unemployment taxes, D12 State unemployment tax acts (SUTA), D12 Stock: authorized, 540 book value of, 571-572 deciding to invest in, 723 issuance of, 540-546 market value of, 541, 572 par vs. no-par-value, 541-542, 544-546 preferred, 550-552 treasury, 546-550 disposal of, 548-550 purchase of, 547-548 Stock certificate, 538 Stock dividends, 556-558 Stockholders: financial statement analysis by, 698 letter to the, A2-A3 limited liability of, 535 ownership rights of, 538-539 Stockholders' equity, 12-13 on classified balance sheet, 166 return on common stockholders' equity, 311, 566, 711-712 Stockholders' equity account, 557 Stockholders' equity statement, 565, 570-571 Stock investments, 600-605 between 20% and 50%, holdings, 601-602 less than 20%, holdings of, 600-601 more than 50%, holdings of, 602-603 Stock splits, 558-559 Straight-line method, 432-433, 509-513 Su, Vivi, 384-385 Subsidiary (affiliated) company, 602 Subsidiary ledger(s), E1-E4 advantages of, E3 defined, E1 example, E1-E2 Supplies, as prepaid expense, 99 SUTA (state unemployment tax acts), D12 T T account, 48 Taking inventory, 247-248 Taxes and taxation. See also Income taxes (income taxation); Payroll taxes as area of public accounting, 29 burden of, 478 corporate, 537 sales taxes payable, 476 Temporary accounts, 150, 154 Term bonds, 484 Theft, inventory, 263 Three-column form of account, 58 Time cards, D3 Timekeeping, D3 Time periods, and discounting of bonds, 503-504 Time period assumption, 94, 298 Times interest earned ratio, 495, 715 Time value of money, C1-C18 and discounting, C12 future value, C3-C7 and interest, C1-C3 and market value of bonds, 486 present value, C7-C16 and use of financial calculator, C16-C17 Timing issue(s), 94-96 accrual- vs. cash-basis accounting as, 95 fiscal/calendar years as, 95 recognizing revenues/expenses as, 95-96 selection of accounting time period as, 94 Trademarks and trade names, 444 Trade receivables, 386 Trading on the equity, 712 Trading securities, 605-607 Transactions, 14 Transaction analysis, 15-20 Transfer, of corporate ownership rights, 535 Transit, goods in, 248-249 Transposition errors, 70 Treasurer, 536 Treasury stock, 546-550 disposal of, 548-550 purchase of, 547-548 Trend analysis, see Horizontal analysis Trial balance, 68-70, 72-73 defined, 68 limitations of, 69 locating errors in, 69-70 post-closing, 155-157, 159-161 steps in preparation of, 69 use of dollar signs in, 70 Trustee (of bond), 484 Turnover: asset, 446-447, 710-711 inventory, 261-262, 709-710 receivables, 403-404, 708-709 U Uncollectible accounts: allowance method for, 389-394 direct write-off method for, 388-389 Underwriting, of stock issues, 540 Unearned revenues, 102-103, 119-120, 479 Unemployment taxes: federal, D11-D12 state, D12 Unexpired costs, 300 Units-of-activity method, 433-434, 442 Unsecured bonds, 484 Useful life, 101, 431, 432, 438 V Valuation: of accounts receivable, 388-395 of notes receivable, 400-401 Vertical analysis, 699, 703-705 of balance sheet, 703 of income statement, 703-705 Virtual close, 155 Vouchers, 351, 352 Voucher register, 352 Voucher systems, 351-352 W W-2 (Wage and Tax Statement), D13-D14 W-4 (Employee's Withholding Allowance Certificate), D6 Wages, D1 Wages and salaries payable, 476 Wage and Tax Statement (Form W-2), D13-D14 Return on assets, 311, 711 Return on common stockholders' equity, 311, 566, 711-712 Returns and allowances: merchandise purchases, 201 for merchandise sales, 204-205 Revenue(s), 51 accrued, 105-106 defined, 13 sales, 196 unearned, 102-103, 119-120, 479 Revenue expenditures, 438 Revenue recognition principle, 95, 299-300 Reversing entries, 158, 171-173 Rifkin, Jeremy, 194 Rowling, J. K., 443 Rubino, Carlos, 696-697 S Salaries, 108-109, D1, D4 Sale(s): of bonds, 598-599 credit card, 396-397 net, 209-210 of plant assets, 440-441, 647 of receivables, 395-396 recording, 203-205 discounts, 205 returns and allowances, 204-205 Sales activities, and segregation of duties, 342-343 Sales invoices, 203 Sales journal, E5-E7 Sales revenue, 196 Sales taxes payable, 476 Salvage value, 431, 432 Sarbanes-Oxley Act of 2002 (SOX; Sarbox), 8, 29, 341 and human resources, 344 and identity theft, 364 and restatements, 159 Saving, personal, 612 SEC, see Securities and Exchange Commission Secured bonds, 483 Securities and Exchange Commission (SEC), 9, 294, 537 Segregation of duties, 342-343 Selling expenses, 211 Semiannually payable interest, C12, C13 Serial bonds, 484 Service charges, bank, 357 Short-term investments, 608-609 Short-term paper, 609n.4 Significant noncash activities, 640-641 Simple entries, 55 Simple interest, C1-C2 Single-step income statement, 213, 214 Sinking fund bond, 483 Small stock dividend, 556 Social Security taxes, see FICA taxes Solvency, 311 Solvency ratios, 714-715 debt to total assets ratio, 714-715 times interest earned, 715 SOX, see Sarbanes-Oxley Act of 2002 Special journals, E4-E18 cash payments journal, E13-E15 cash receipts journal, E7-E11 effects of, on general journal, E16-E17 purchases journal, E11-E13 sales journal, E5-E7 usefulness of, E4 Specific identification method, 250-251 Stack, Jack, 2 Star Wars, 96 Stated value, 542, 544-545 Wear and tear, 431 Weighted-average unit cost, 254 Wholesalers, 196 Withholding taxes, 476.
See also Payroll taxes Working capital, 309-310, 481, 707 Working capital ratio, 707 Work in process, 246 Worksheet(s), 144-154 defined, 144 for merchandising company, 222-224 preparing adjusting entries from, 148, 150, 152, 154 preparing consolidated balance sheets from, 616-617 preparing financial statements from, 148, 152, 153 preparing statement of cash flows from, 659-664 steps in preparation of, 144-152 Z Zero-interest bonds, 486

RAPID REVIEW Chapter Content

BASIC ACCOUNTING EQUATION (Chapter 2)
Basic equation: Assets = Liabilities + Stockholders' Equity.
Expanded equation, with debit/credit effects: Assets - Dr. increase, Cr. decrease. Liabilities - Dr. decrease, Cr. increase. Common Stock - Dr. decrease, Cr. increase. Retained Earnings - Dr. decrease, Cr. increase. Dividends - Dr. increase, Cr. decrease. Revenues - Dr. decrease, Cr. increase. Expenses - Dr. increase, Cr. decrease.

ADJUSTING ENTRIES (Chapter 3)
Deferrals: 1. Prepaid expenses - Dr. Expenses, Cr. Assets. 2. Unearned revenues - Dr. Liabilities, Cr. Revenues.
Accruals: 1. Accrued revenues - Dr. Assets, Cr. Revenues. 2. Accrued expenses - Dr. Expenses, Cr. Liabilities.
Note: Each adjusting entry will affect one or more income statement accounts and one or more balance sheet accounts.
Interest computation: Interest = Face value of note x Annual interest rate x Time in terms of one year.

CLOSING ENTRIES (Chapter 4)
Purpose: (1) Update the Retained Earnings account in the ledger by transferring net income (loss) and dividends to retained earnings. (2) Prepare the temporary accounts (revenue, expense, dividends) for the next period's postings by reducing their balances to zero.
Process:
1. Debit each revenue account for its balance (assuming normal balances), and credit Income Summary for total revenues.
2. Debit Income Summary for total expenses, and credit each expense account for its balance (assuming normal balances).
STOP AND CHECK: Does the balance in your Income Summary account equal the net income (loss) reported in the income statement?
3. Debit (credit) Income Summary, and credit (debit) Retained Earnings for the amount of net income (loss).
4. Debit Retained Earnings for the balance in the Dividends account and credit Dividends for the same amount.
STOP AND CHECK: Does the balance in your Retained Earnings account equal the ending balance reported in the balance sheet and the retained earnings statement? Are all of your temporary account balances zero?

ACCOUNTING CYCLE (Chapter 4)
1. Analyze business transactions. 2. Journalize the transactions. 3. Post to ledger accounts. 4. Prepare a trial balance. 5. Journalize and post adjusting entries (prepayments/accruals). 6. Prepare an adjusted trial balance. 7. Prepare financial statements: income statement, retained earnings statement, balance sheet. 8. Journalize and post closing entries. 9. Prepare a post-closing trial balance.
Optional steps: If a worksheet is prepared, steps 4, 5, and 6 are incorporated in the worksheet. If reversing entries are prepared, they occur between steps 9 and 1.

INVENTORY (Chapters 5 and 6)
Ownership: under FOB shipping point terms, ownership of goods on a public carrier resides with the buyer; under FOB destination terms, with the seller.
Cost flow methods: specific identification; first-in, first-out (FIFO); weighted average; last-in, first-out (LIFO).
Perpetual vs. periodic journal entries:
Purchase of goods - Perpetual: Dr. Inventory, Cr. Cash (or A/P). Periodic: Dr. Purchases, Cr. Cash (or A/P).
Freight (shipping point) - Perpetual: Dr. Inventory, Cr. Cash. Periodic: Dr. Freight-In, Cr. Cash.
Return of goods - Perpetual: Dr. Cash (or A/P), Cr. Inventory. Periodic: Dr. Cash (or A/P), Cr. Purchase Returns and Allowances.
Sale of goods - Perpetual: Dr. Cash (or A/R), Cr. Sales, plus Dr. Cost of Goods Sold, Cr. Inventory. Periodic: Dr. Cash (or A/R), Cr. Sales, with no cost entry.
End of period - Perpetual: no entry. Periodic: closing or adjusting entry required.

CONCEPTUAL FRAMEWORK OF ACCOUNTING (Chapter 7)
Characteristics: relevance, comparability, reliability. Assumptions: monetary unit, economic entity, time period, going concern. Principles: revenue recognition, matching, full disclosure, cost. Constraints: materiality, conservatism.

INTERNAL CONTROL AND CASH (Chapter 8)
Principles of internal control: establishment of responsibility; segregation of duties; documentation procedures; physical, mechanical, and electronic controls; independent internal verification; other controls.
Bank reconciliation:
Bank side: balance per bank statement, add deposits in transit, deduct outstanding checks, to give the adjusted cash balance.
Book side: balance per books, add unrecorded credit memoranda from the bank statement, deduct unrecorded debit memoranda from the bank statement, to give the adjusted cash balance.
Note: 1. Errors should be offset (added or deducted) on the side that made the error. 2. Adjusting journal entries should only be made on the books.
STOP AND CHECK: Does the adjusted cash balance in the Cash account equal the reconciled balance?
*Items with an asterisk are covered in a chapter-end appendix.

RECEIVABLES (Chapter 9)
Methods to account for uncollectible accounts:
Direct write-off method: record bad debts expense when the company determines a particular account to be uncollectible.
Allowance method, percentage-of-sales basis: at the end of each period, estimate the amount of credit sales that will be uncollectible; debit Bad Debts Expense and credit Allowance for Doubtful Accounts for this amount. As specific accounts become uncollectible, debit Allowance for Doubtful Accounts and credit Accounts Receivable.
Allowance method, percentage-of-receivables basis: at the end of each period, estimate the amount of uncollectible receivables; debit Bad Debts Expense and credit Allowance for Doubtful Accounts in an amount that results in a balance in the allowance account equal to the estimate of uncollectibles. As specific accounts become uncollectible, debit Allowance for Doubtful Accounts and credit Accounts Receivable.

STOCKHOLDERS' EQUITY (Chapter 12)
No-par value vs. par value stock journal entries: No-par value - Dr. Cash, Cr. Common Stock. Par value - Dr. Cash, Cr. Common Stock (par value) and Cr. Paid-in Capital in Excess of Par Value.
Comparison of dividend effects: a cash dividend decreases cash and retained earnings, leaving common stock unchanged; a stock dividend has no effect on cash, increases common stock, and decreases retained earnings; a stock split has no effect on cash, common stock, or retained earnings.
Debits and credits to Retained Earnings - Debits (decreases): 1. net loss; 2. prior period adjustments for overstatement of net income; 3. cash dividends and stock dividends; 4. some disposals of treasury stock. Credits (increases): 1. net income; 2. prior period adjustments for understatement of net income.

PLANT ASSETS (Chapter 10)
Presentation: tangible assets - property, plant, and equipment; natural resources. Intangible assets - patents, copyrights, trademarks, franchises, goodwill.
Computation of annual depreciation expense:
Straight-line: (Cost - Salvage value) / Useful life (in years).
Units-of-activity: Depreciable cost / Useful life (in units) x Units of activity during year.
Declining-balance: Book value at beginning of year x Declining-balance rate* (*Declining-balance rate = 1 / Useful life (in years)).
Note: If depreciation is calculated for partial periods, the straight-line and declining-balance methods must be adjusted for the relevant proportion of the year. Multiply the annual depreciation expense by the number of months expired in the year divided by 12 months.

INVESTMENTS (Chapter 13)
Comparison of long-term bond investment and liability journal entries:
Purchase/issue of bonds - Investor: Dr. Debt Investments, Cr. Cash. Investee: Dr. Cash, Cr. Bonds Payable.
Interest receipt/payment - Investor: Dr. Cash, Cr. Interest Revenue. Investee: Dr. Interest Expense, Cr. Cash.
Comparison of Cost and Equity Methods of Accounting for Long-Term Stock Investments Event Acquisition Cost Stock Investments Cash No entry Equity Stock Investments Cash Stock Investments Investment Revenue Cash Stock Investments BONDS (Chapter 11) Premium Face Value Discount Market interest rate Market interest rate Market interest rate Contractual interest rate Contractual interest rate Contractual interest rate Investee reports earnings Investee pays dividends Cash Dividend Revenue Computation of Annual Bond Interest Expense Interest expense Interest paid (payable) (OR Amortization of discount Amortization of premium) Trading and Available-for-Sale Securities Trading Available-forsale Report at fair value with changes reported in net income. Report at fair value with changes reported in the stockholders equity section. Straight-line amortization Effective-interest amortization (preferred method) Bond discount (premium) Number of interest periods Bond interest expense Carrying value of bonds at beginning of period Effective interest rate Bond interest paid Face amount of bonds Contractual interest rate EP-2 RAPID REVIEW Chapter Content STATEMENT OF CASH FLOWS (Chapter 14) Cash flows from operating activities (indirect method) Net income Add: Losses on disposals of assets Amortization and depreciation Decreases in noncash current assets Increases in current liabilities Deduct: Gains on disposals of assets Increases in noncash current assets Decreases in current liabilities Net cash provided (used) by operating activities $X X X X (X) (X) (X) $X Cash flows from operating activities (direct method) Cash receipts (Examples: from sales of goods and services to customers, from receipts of interest and dividends on loans and investments) $X Cash payments (Examples: to suppliers, for operating expenses, for interest, for taxes) (X) Cash provided (used) by operating activities $X PRESENTATION OF NON-TYPICAL ITEMS (Chapter 15) Prior period adjustments (Chapter 12) 
Discontinued operations Statement of retained earnings (adjustment of beginning retained earnings) Income statement (presented separately after Income from continuing operations) Income statement (presented separately after Income before extraordinary items) In most instances, use the new method in current period and restate previous years results using new method. For changes in depreciation and amortization methods, use the new method in the current period, but do not restate previous periods. Extraordinary items Changes in accounting principle EP-3 RAPID REVIEW Financial Statements Order of Preparation Statement Type 1. Income statement 2. Retained earnings statement 3. Balance sheet 4. Statement of cash flows Date For the period ended For the period ended As of the end of the period For the period ended Retained Earnings Statement Name of Company Retained Earnings Statement For the Period Ended Retained earnings, beginning of period Add: Net income (or deduct net loss) Deduct: Dividends Retained earnings, end of period $X X X X $X Income Statement (perpetual inventory system) Name of Company Income Statement For the Period Ended Sales revenues Sales Less: Sales returns and allowances Sales discounts Net sales Cost of goods sold Gross profit Operating expenses (Examples: store salaries, advertising, delivery, rent, depreciation, utilities, insurance) Income from operations Other revenues and gains (Examples: interest, gains) Other expenses and losses (Examples: interest, losses) Income before income taxes Income tax expense Net income Income Statement (periodic inventory system) Name of Company Income Statement For the Period Ended Sales revenues Sales Less: Sales returns and allowances Sales discounts Net sales Cost of goods sold Beginning inventory Purchases $X Less: Purchase returns and allowances X Net purchases X Add: Freight in X Cost of goods purchased Cost of goods available for sale Less: Ending inventory Cost of goods sold Gross profit Operating 
expenses (Examples: store salaries, advertising, delivery, rent, depreciation, utilities, insurance) Income from operations Other revenues and gains (Examples: interest, gains) Other expenses and losses (Examples: interest, losses) Income before income taxes Income tax expense Net income STOP AND CHECK: Net income (loss) presented on the retained earnings statement must equal the net income (loss) presented on the income statement. Balance Sheet $X X X $X X X Name of Company Balance Sheet As of the End of the Period Assets Current assets (Examples: cash, short-term investments, accounts receivable, merchandise inventory, prepaid expenses) Long-term investments (Examples: investments in bonds, investments in stocks) Property, plant, and equipment Land Buildings and equipment $X Less: Accumulated depreciation X Intangible assets Total assets Liabilities and Stockholders Equity Liabilities Current liabilities (Examples: notes payable, accounts payable, accruals, unearned revenues, current portion of notes payable) Long-term liabilities (Examples: notes payable, bonds payable) Total liabilities Stockholders equity Common stock Retained earnings Total liabilities and stockholders equity $X X $X X X X $X X X X X X X X $X $X X X X X $X $X X X $X X STOP AND CHECK: Total assets on the balance sheet must equal total liabilities and stockholders equity; and, ending retained earnings on the balance sheet must equal ending retained earnings on the retained earnings statement. 
X X X X X Statement of Cash Flows Name of Company Statement of Cash Flows For the Period Ended Cash flows from operating activities Note: May be prepared using the direct or indirect method Cash provided (used) by operating activities Cash flows from investing activities (Examples: purchase / sale of long-term assets) Cash provided (used) by investing activities Cash flows from financing activities (Examples: issue / repayment of long-term liabilities, issue of stock, payment of dividends) Net cash provided (used) by financing activities Net increase (decrease) in cash Cash, beginning of the period Cash, end of the period X X X X X X X $X $X X X X X $X STOP AND CHECK: Cash, end of the period, on the statement of cash flows must equal cash presented on the balance sheet. EP-4 RAPID REVIEW Using the Information in the Financial Statements Ratio Liquidity Ratios 1. Current ratio Current assets Current liabilities Cash Short-term investments Receivables (net) Current liabilities Net credit sales Average net receivables Cost of goods sold Average inventory Measures short-term debt-paying ability. Formula Purpose or Use 2. Acid-test (quick) ratio Measures immediate short-term liquidity. 3. Receivables turnover Measures liquidity of receivables. 4. Inventory turnover Measures liquidity of inventory. Profitability Ratios 5. Profit margin Net income Net sales Net sales Average assets Net income Average total assets Net income Average common stockholders equity Net income Weighted average common shares outstanding Market price per share of stock Earnings per share Cash dividends Net income Measures net income generated by each dollar of sales. Measures how efficiently assets are used to generate sales. Measures overall profitability of assets. 6. Asset turnover 7. Return on assets 8. Return on common stockholders equity 9. Earnings per share (EPS) Measures profitability of stockholders investment. Measures net income earned on each share of common stock. 
Measures the ratio of the market price per share to earnings per share. Measures percentage of earnings distributed in the form of cash dividends. 10. Price-earnings (P-E) ratio 11. Payout ratio Solvency Ratios 12. Debt to total assets ratio Total debt Total assets Income before income taxes and interest expense Interest expense Cash provided by operating activities Capital expenditures Cash dividends Measures percentage of total assets provided by creditors. Measures ability to meet interest payments as they come due. Measures the amount of cash generated during the current year that is available for the payment of additional dividends or for expansion. 13. Times interest earned 14. Free cash flow EP-5 ... View Full Document | http://www.coursehero.com/file/6215053/Week-3-Appendix-E-Online-text/ | CC-MAIN-2014-41 | en | refinedweb |
22 February 2010 17:20 [Source: ICIS news]
LONDON (ICIS news)--European methyl di-p-phenylene isocyanate (MDI) buyers are bemused by Bayer Material Science's (BMS) decision to restart its idled MDI facility at Brunsbuettel, Germany, amid ongoing poor demand in the construction sector and an oversupply situation, sources said on Monday.
A BMS source confirmed that its MDI Brunsbuettel unit came back online in mid-February and was producing again.
The seller declined to comment about precise operating rates or the reasons for the start-up. The Brunsbuettel MDI plant was idled on 1 April 2009 until further notice due to a lack of demand amid the poor economic climate.
Buyers and some resellers questioned the timing of the start-up: “The restart is a bit early due to the ongoing poor weather and market conditions,” said one MDI source.
Another buyer noted: “It was quite a courageous move. Maybe they anticipate that demand will come back, as there is the approaching spring season. However, there is already too much MDI in the market before the restart. Demand is still poor and there are no signs of improvement yet.”
A trader said: “The start-up is a big mistake, it is very strange. Demand is still poor. Everyday, suppliers are trying to push volumes,” which was echoed by a consumer, who stated: “The MDI market is as long as I have ever seen it.”
The source estimated that construction activity, one of the main outlets for crude MDI, was down by approximately 50%.
The customer added that it also expected a drop-off in the automotive sector, another outlet for MDI, following the end of the various government incentive programmes.
Buyers also anticipated that the BMS restart “would put paid to any targeted price increases”.
BASF, and most recently Dow, announced plans to raise prices by €200/tonne ($274/tonne) due to unsustainable price levels, driven by the uptrend in feedstock costs and the price erosion for MDI.
Sellers’ reactions were mixed. One producer considered the Bayer MDI restart to be good news, noting: “They are seeing the light at the end of the tunnel. They are seeing the crisis abating a bit.”
A few weeks ago, another manufacturer said it had no immediate plans to bring back online its mothballed MDI unit in the first half of 2010. The source had said it did not want to jeopardise the market balance. No further update on the status of this facility was available.
Regarding the proposed hikes of up to €200/tonne for MDI over the next few months, some sellers maintained a firm stance. One producer stressed that increases were vital due to the need for re-investment economics, alongside the benzene feedstock cost pressure.
Crude MDI prices were assessed in February between €1,470-1,510/tonne FD (free delivered) NWE (northwest Europe).
The BMS plant at | http://www.icis.com/Articles/2010/02/22/9336781/bayers-brunsbuettel-mdi-restart-amid-poor-market-surprises-buyers.html | CC-MAIN-2014-41 | en | refinedweb |
Using jQuery to parse XML is vaguely reminiscent of LINQ in the recent .NET frameworks. That's a good thing, since LINQ made parsing XML in .NET vastly easier than previous techniques. With jQuery, when you receive XML from a callback, you're not actually getting raw text, you're actually getting a DOM (document object model) that jQuery can traverse very quickly and efficiently to give you the data you need.
Let's start by looking at the example XML document we'll be parsing today. I made a file that contains most things you'd see in a typical XML document - attributes, nested tags, and collections.
<?xml version="1.0" encoding="utf-8" ?>
<RecentTutorials>
  <Tutorial author="The Reddest">
    <Title>Silverlight and the Netflix API</Title>
    <Categories>
      <Category>Tutorials</Category>
      <Category>Silverlight 2.0</Category>
      <Category>Silverlight</Category>
      <Category>C#</Category>
      <Category>XAML</Category>
    </Categories>
    <Date>1/13/2009</Date>
  </Tutorial>
  <Tutorial author="The Hairiest">
    <Title>Cake PHP 4 - Saving and Validating Data</Title>
    <Categories>
      <Category>Tutorials</Category>
      <Category>CakePHP</Category>
      <Category>PHP</Category>
    </Categories>
    <Date>1/12/2009</Date>
  </Tutorial>
  <Tutorial author="The Tallest">
    <Title>Silverlight 2 - Using initParams</Title>
    <Categories>
      <Category>Tutorials</Category>
      <Category>Silverlight 2.0</Category>
      <Category>Silverlight</Category>
      <Category>C#</Category>
      <Category>HTML</Category>
    </Categories>
    <Date>1/6/2009</Date>
  </Tutorial>
  <Tutorial author="The Fattest">
    <Title>Controlling iTunes with AutoHotkey</Title>
    <Categories>
      <Category>Tutorials</Category>
      <Category>AutoHotkey</Category>
    </Categories>
    <Date>12/12/2008</Date>
  </Tutorial>
</RecentTutorials>
The first thing you're going to have to do is write some jQuery to request the XML document. This is a very simple AJAX request for the file.
$(document).ready(function() {
  $.ajax({
    type: "GET",
    url: "jquery_xml.xml",
    dataType: "xml",
    success: parseXml
  });
});
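One thing the snippet above doesn't handle is failure: if jquery_xml.xml is missing or the server is slow, the page silently does nothing. Here's a hedged sketch of the same request with a timeout and an error callback added (the handler body and the 5-second value are my own, not part of the original tutorial):

```javascript
// Same request as above, with a timeout and an error callback added.
// Keeping the settings in a plain object makes them easy to inspect and reuse.
var requestSettings = {
  type: "GET",
  url: "jquery_xml.xml",
  dataType: "xml",
  timeout: 5000, // give up after 5 seconds
  success: function(xml) {
    parseXml(xml); // same callback as before
  },
  error: function(xhr, status, err) {
    // status is "timeout", "error", "parsererror", etc.
    $("#output").append("Could not load XML: " + status + "<br />");
  }
};

// Guard so the sketch is harmless if jQuery hasn't loaded.
if (typeof $ !== "undefined") {
  $(document).ready(function() {
    $.ajax(requestSettings);
  });
}
```

jQuery passes the failure type ("timeout", "error", "parsererror") as the second argument to the error callback, which is handy for diagnosing loading problems.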
Now that that's out of the way, we can start parsing the XML. As you can see, when the request succeeds, the function parseXml is called. That's where I'm going to put my code. Let's start by finding the author of each tutorial, which is stored as an attribute on the Tutorial tag.
function parseXml(xml) {
  //find every Tutorial and print the author
  $(xml).find("Tutorial").each(function() {
    $("#output").append($(this).attr("author") + "<br />");
  });

  // Output:
  // The Reddest
  // The Hairiest
  // The Tallest
  // The Fattest
}
The quickest way to parse an XML document is to make use of jQuery's powerful selector system, so the first thing I do is call find to get a collection of every Tutorial element. Then I call each, which executes the supplied function on every element. Inside the function body, this now points to a Tutorial element. To get an attribute's value, I simply call attr and pass it the name of the attribute I want. In this example, I have a simple HTML span object with an id of "output". I call append on this element to populate it with data. You would probably do something a little more exciting, but I just wanted a simple way to display the results.
See how easy that is? Let's now look at a slightly more complicated one. Here I want to print the publish date of each tutorial followed by the title.
//print the date followed by the title of each tutorial
$(xml).find("Tutorial").each(function() {
  $("#output").append($(this).find("Date").text());
  $("#output").append(": " + $(this).find("Title").text() + "<br />");
});

// Output:
// 1/13/2009: Silverlight and the Netflix API
// 1/12/2009: Cake PHP 4 - Saving and Validating Data
// 1/6/2009: Silverlight 2 - Using initParams
// 12/12/2008: Controlling iTunes with AutoHotkey
This is very similar to the previous example, except now the values are stored inside element text instead of attributes. Again, I want to go through every Tutorial tag, so I first use find and each. Once I'm inside a Tutorial, I need to find the Date, so I use find again. To get the text inside an XML element, simply call text. I repeat the same process again for the Title, and that's it.
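If you'd rather collect the parsed values into a plain array before touching the page, the same walk can be split into a pure formatting helper plus the jQuery traversal. A hedged sketch (the helper name is mine, and this assumes it runs inside the same parseXml callback where xml is available):

```javascript
// Pure helper: formats one parsed row the same way the example prints it.
function formatTutorial(date, title) {
  return date + ": " + title;
}

// jQuery traversal (browser only): collect the rows, then render them.
if (typeof $ !== "undefined") {
  var rows = $(xml).find("Tutorial").map(function() {
    return formatTutorial($(this).find("Date").text(),
                          $(this).find("Title").text());
  }).get(); // .get() unwraps the jQuery-wrapped results into a real array

  $("#output").append(rows.join("<br />"));
}
```

Having the data in a plain array makes it easy to sort, filter, or count before displaying anything.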
We've now parsed every piece of information except the categories that each tutorial belongs to. Here's the code to do that.
//print each tutorial title followed by their categories
$(xml).find("Tutorial").each(function() {
  $("#output").append($(this).find("Title").text() + "<br />");
  $(this).find("Category").each(function() {
    $("#output").append($(this).text() + "<br />");
  });
  $("#output").append("<br />");
});

// Output:
// Silverlight and the Netflix API
// Tutorials
// Silverlight 2.0
// Silverlight
// C#
// XAML
// Cake PHP 4 - Saving and Validating Data
// Tutorials
// CakePHP
// PHP
// Silverlight 2 - Using initParams
// Tutorials
// Silverlight 2.0
// Silverlight
// C#
// HTML
// Controlling iTunes with AutoHotkey
// Tutorials
// AutoHotkey
Once again, I get every Tutorial by using find and each. I then get the Title in the same way as in the previous example. Since a tutorial can belong to several categories, I call find and each to iterate over each Category element inside a tutorial. Once I'm inside a Category element, I simply print out its contents using the text function.
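The same nested traversal also makes quick aggregate stats easy. As a hedged follow-up sketch (the function and variable names are my own), here's how you could tally how many tutorials appear in each category:

```javascript
// Pure helper: count occurrences in a flat array of category names.
function tallyCategories(names) {
  var counts = {};
  for (var i = 0; i < names.length; i++) {
    counts[names[i]] = (counts[names[i]] || 0) + 1;
  }
  return counts;
}

// jQuery glue (browser only): flatten every Category element into one array.
if (typeof $ !== "undefined") {
  var allNames = [];
  $(xml).find("Category").each(function() {
    allNames.push($(this).text());
  });
  var counts = tallyCategories(allNames);
  // For the sample file above, counts["Tutorials"] would be 4.
}
```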
Being able to parse elements, attributes, and collections should cover almost every form of XML you'd ever see, and making use of jQuery selectors to get the job done makes parsing XML in JavaScript a breeze. That does it for this tutorial. Hopefully we all learned something about jQuery and XML.
Source Files:
Thanks for this. Can you give any recommendations for how one can iterate through each instance of ".each" as a separate , say, where you could advance (or go back) through all of the children individually via forward / backward button clicks?
You should just be able to access the result by index instead of iterating over them. Something kind of like:
Thanks, Brandon -- my example is a little more complex than that. I have a well-nested XML file I am teasing out multiple child nodes (some themselves nested/ing) and displaying it all as one blob to a certain div class, e.g.:
$(response).find("mistake").each(function() {
  $(".myclass").append("ID: #" + $(this).find("ID").text() + "<br />");
  $(".myclass").append($(this).find("ype").text() + "<br />");
  $(".myclass").append($(this).find("sessionID").text() + "<br />");
  $(".myclass").append("Mistake #" + $(this).find("index").text() + "<br />");
  $('.myclass').append("Here's the word you got wrong:<br />" + $(this).find("wrong").text() + "<br />");
  $(".myclass").append($(this).find("nestedlang").text() + "<br />");
  $(this).find("nestedword").each(function() {
    $(".myclass").append($(this).text() + "<br />");
  });
});
I can output this blob just fine -- how would I advance or go back through EACH instance of the child nodes within "mistake"? Thanks again for your prompt reply.
Is this code compatible with IE browsers?
Šime on twitter answered it for you: "Cache the "#output" query". @simevidas is his twitter handle, if you would like to thank him.
Impressed!.. Thanks for sharing this one..
I'm working on a project that uses this method nicely. My question: the output is pulling a single portion of the XML file based on the date. For example, there's data for today and it's putting that data only. However, I'd like to see the previous dates' data and the data for tomorrow or two days from now (all predefined in the XML).
How would I go about making a "Previous/Next" rotating script
Hi Reddest,
Just browsing for jQuery XML samples and I reached this page. Nice one, and it was pretty well explained...
I have a doubt on this. I just need to get the XML refreshed at regular intervals to load new values. Is this possible in jQuery?
Also, about updating the values in the placeholder: can you let me know of any example on this?
Thanks in advance...
I am populating a Listview in jquery mobile from an xml file, but it is outputting all the xml fields into one li, instead of each xml record appearing in it's own li.
js snippet below
and the html listview code....
The listview list displays both the HEADING and BODY which is great, but it displays all three HEADINGS and BODY'S from the xml in the one list.
BTW: when I add ... to the end of the append statement, it only shows one record from the xml file. Aargh.
Can anyone help with the syntax?
Try this:
The above code is working when used on a server, but it is not working locally.
Hi, I am trying to populate states according to the selected country, i.e. if you select a country, the states should be populated in another dropdown accordingly. But my code is displaying all the states irrespective of the country. I am trying it in jQuery. My xml file is
Sorry if my question sounds like opening a Pandora's box but... I would like to simply be able to populate a description text in HTML from the same XML file loaded in Flash without the need to touch the HTML file itself, so that I can simply modify the XML to update the content in both versions of my website. I'm totally new to this and never found anything truly explaining how to do it. Does anybody know any source from where I can learn how to do what I need? Thanks in advance, I hope my question can help other people in my same situation... I'm sure are many!
Hi,
Thanks for this article, has great examples which I'm using. I have one problem though, I have a path set in my XML and when parsing I get an error.
I don't have direct access to the XML, I know I have to replace the backslashes with double backslashes but when I try it with javascript it doesn't work. If I manually replace the backslashes with double backslashes the script runs without errors and gives results. I've tried the following techniques which make the script run without errors but without results:
I'd think this wouldn't be this hard, is there something I'm missing?
Hi,
The code snippet given by you does not work in Firefox. This question was already posted by another user in the conversation above, but nobody has given a solution. Can you please tell me how to make it cross-browser compatible? Thanks in advance.
Love this. Thanks for sharing.
I've added a select options list. The value of the selected option is then used to define what it looks for in the xml file like this:
$(xml).find($("#DATE option:selected").val()).each(function () { ...
It works great the first time the page loads, but I can't get it to fire again once a different option has been selected. I've tried adding a button that calls the parseXml function like this:
But it doesn't work. Feel like I'm close to getting it working but what am I missing??
Any help greatly appreciated!!
Somewhere there seems to be an error in what is in this page. As given, it does not work in Firefox or Safari.
How can I get for instance "Size 2" (namely the second group's size value) for the following xml?
I tried
$(this).find("group").each(function() { var size = $(this).find("size").text(); }
but it did not get specifically the second group's size value. Should I use some other function instead of find or each?
You could try:
can someone help?
Can someone please help me. I am basically trying to build a table from XML to HTML, dynamically with jQuery. Nothing is showing up. I am new at this. This is my xml file:
This is my Javascript:
What am I doing wrong?
Sorry, the javascript file didn't come out right. Let me try again:
Does the Javascript error console say anything?
Yes it says: 'Uncaught SyntaxError: Unexpected token )' Is there some kind of a syntax error in the HTML code?
I should mention that there are other javascript functions in the html page. When I take the above-mentioned javascript out of the html page, the javascript error console doesn't reflect any errors. But when I put it back in, it says that there is a syntax error (see below). So it appears that there is some kind of a syntax error in the javascript. I can't seem to pinpoint it. Sorry, I'm really new at this. Any help would be appreciated.
I ran it in firefox and in firebug script tab that part of code doesnt even appear. I should also say that the alert statement doesnt work in that part of code. Its a mystery. Do you have any ideas.
I have tried modifying the javascript a little bit. And the error in chrome says this: XMLHttpRequest cannot load. Origin null is not allowed by Access-Control-Allow-Origin. And this is the modified code:
I feel so stupid, the code is good. I parse the wrong xml file. Please ignore my post above :(
Hi, I am reading your tutorial but I cannot get my code to work. Is there something wrong you can spot? Please help me:
I'm trying to read and parse a XML file from Google Finance API:
I have this Javascript:
However, I get an error:
Can anybody help solve the problem?
What does your browser's error console say? My guess is that this is a cross-domain issue. Most of the time javascript cannot make callbacks to a domain that is different than the one hosting the javascript.
Am I correct to assume that you are replying to my posting?
How do I view my browser's error console? (I'm using Google Chrome)
You wrote: "My guess is that this is a cross-domain issue. Most of the time javascript cannot make callbacks to a domain that is different than the one hosting the javascript."
Where in my code is it making a callback? I'm not sure I understand what you mean. Are you referring to the line that has the URL to the web service providing the XML file?
I have the Javascript file (jquery.js) residing on the same webserver as the index.html file.
In Chrome, the console is located here: Settings (wrench icon) -> Tools -> JavaScript console
What I mean by cross-domain is that your website (let's say example.com) is not at the same domain as the the one being requested (google.com). That \$.ajax call is telling the browser to go download some information from google.com, and since your site is on example.com, this is a security violation.
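A common workaround is a small same-origin proxy: your own server fetches the remote XML and hands it to the page from the same domain. Sketch only: the /fetch-xml endpoint and its target parameter are hypothetical and would have to be implemented server-side:

```javascript
// Instead of asking the browser to hit google.com directly (blocked by the
// same-origin policy), request a same-origin endpoint that downloads the
// remote XML server-side and returns it.
var proxySettings = {
  type: "GET",
  url: "/fetch-xml", // hypothetical endpoint on your own domain
  data: { target: "http://www.google.com/ig/api?stock=MSFT" },
  dataType: "xml",
  success: function(xml) {
    // parse with $(xml).find(...) as usual
  }
};

if (typeof $ !== "undefined") {
  $.ajax(proxySettings);
}
```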
I am planning to get the image name using jQuery; I'll display one image at a time. If I click on the next or prev button, I want the details of the image names. Please find the sample xml listed below. Thanks in advance.
How does #output work?
From the article: "In this example, I have a simple HTML span object with an id of 'output'. I call append on this element to populate it with data."
"#id" is a jQuery selector that finds the HTML object with the specified id. So "#output" will find my HTML span object which I gave the id, "output".
Hello, I am trying to parse the XML given below.
I want to display Quote types, states, and the respective products in a drop-down menu / tree structure. Can anyone suggest a way to work this out?
Try this:
This isn't really about Parsing XML as much as traversing xml... Parsing deals with the nuts and bolts of changing the text/xml representation INTO the DOM representation
I'm looking to find out how jQuery can parse an XML document where I don't have the parent and child nodes known to me. I'm wondering if there is something that I could use to get me the parent node names and then the children node names underneath the parent.
I want to calculate the time taken for each route by subtracting the departs tag value from the arrival tag value. After doing all the subtractions I need to find out the trip which has taken the least time. Please post your ideas.
I'd just grab any javascript library capable of parsing that date format. Datejs is the first one I found using a Google search. Create an object to hold the details for each flight and populate a collection of them as you parse the XML.
To calculate the duration of each flight, populate the built-in javascript date object using the values parsed by the date library. The date object has a function, getTime, which returns the milliseconds since the epoch. Get the milliseconds for the departure time and arrival times and subtract them - this will give you the duration of each flight, in milliseconds.
To find the fastest, iterate over the collection of objects and find the one with the smallest duration in milliseconds.
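A hedged sketch of that approach (the flight object shape and date strings here are invented for illustration; real code would fill the array while parsing the XML):

```javascript
// Each flight holds departure/arrival values that Date can parse (or that a
// date library has already converted). Duration = arrival - departure, in ms.
function flightDurationMs(flight) {
  return new Date(flight.arrives).getTime() - new Date(flight.departs).getTime();
}

// Walk the collection and keep the flight with the smallest duration.
function fastestFlight(flights) {
  var best = null;
  for (var i = 0; i < flights.length; i++) {
    if (best === null || flightDurationMs(flights[i]) < flightDurationMs(best)) {
      best = flights[i];
    }
  }
  return best;
}

// Usage: populate an array of flight objects while parsing the XML, then:
// var quickest = fastestFlight(flights);
```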
Hello everybody
Is it possible to store images in the xml-file and display them in the same manner as done with the text?
Thanks
Hi All, I have a requirement.. I have right and left navigation menus and submenus(with links), all these data has to come from the xml using jquery. Organised in Div blocks and we need to just populate the data on these divs.Also there is an attribute (expand) in xml set to either true or false such that we will show/hide div block as default.Below is the snippet
Right navigation should be shown by default and left navigation should be hidden based on the expand value.
Please help me.
Thanks
Hey Hi,
I have an issue with parsing xml while using jQuery: I am loading content into a div tag and then trying to load XML into the html page using jQuery. My problem is that it works for the first two link clicks but after that it does not work. I know I need to rebind the whole thing but I am not able to figure out the way of doing it. Here's the code snippet:
any kinda help is needed :)
Thank you so much =]
Hi,
Here is a cross browser XML parsing / helper library:
-------------------- You can find me on:
one two three
Can anybody please tell me how to retrieve this xml? I am using the IE8 browser. I tried
but its not working.
Oh yeah, as you might have guessed I abandoned the .XML file entirely and created a JSON Object within a .js file. I set a cookie for the language, or allow the user to set one from a select, then I use that to construct the path to a language-specific .js file, which I now generate from a mySQL database (using ColdFusion).
I remained unable to solve this until I happened on the article here:
I copy/pasted their final example to the top of my .js file and presto--it works.
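The cookie-based language selection described above could be sketched in plain JavaScript like this; the cookie name and path scheme here are illustrative, not from the original post:

```javascript
// Read a named value out of a cookie string like "lang=de; other=1".
function readCookie(cookieString, name) {
  var pairs = cookieString.split(/;\s*/);
  for (var i = 0; i < pairs.length; i++) {
    var parts = pairs[i].split('=');
    if (parts[0] === name) return parts[1];
  }
  return null;
}

// Build the path to the language-specific .js file, falling back
// to English when no language cookie has been set.
function languageScriptPath(cookieString) {
  var lang = readCookie(cookieString, 'lang') || 'en';
  return '/js/lang/' + lang + '.js';
}
```

In a browser you would pass document.cookie to languageScriptPath and hand the result to $.getScript().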
A fix for $.getScript():
This was a huge help, but when I tried to modify it for my situation I come up short. It works perfectly when I appended the output to a div. But I want to set string variables to different languages stored in XML files.
I tried...
(I also tried that with a success function that just said return xml; and got the same result)
I then call it like so...
The alert says "undefined"
What am I not getting?
Hi, I was wondering if anybody knows how to get a JSON response from an xml file using ajax.
If your XML file has dates for given events (or maybe in your case, dates for live tutorial sessions in your tutorial XML), how would you pull dates from this year (2010) and next (2011), and ones from today's date forward? Also, how would you incorporate it into your jQuery request to NOT grab the ones before today's date, or after this year (2010)?
And how would you differentiate the two years in your XML file if, for instance, you had some live viewings for tutorials that are a two-day viewing (Sep 10 and 11, 2010, for instance)?
Hi. Thanks so much for this article. I've learned a lot. I have one quick question - how can I parse the XML and create a list with a link to a detail page from the same data?
For instance I have a feed where I just want to first display a list with the title but make it a link that will then display all of the detailed desc and info?
Thank you!! Any help or pointers would be much appreciated! Ya know, I think that to do the type of manipulation I want, I need to convert the XML to JSON. I'll try that.
cheers, brent.
very nice post. thank you. bye.
The Reddest, I am only getting [object Object] returned as what should be parsed xml parts (see below). I want to do something very similar like you described in your article - inserting a list of customers from an xml file into my html (as list elements). I obviously have something wrong somewhere in my code that I am unable to find after staring at it for hours ... I am running jQuery 1.4.2 on Mac OS X 10.6.4. Same in Safari 5 and Firefox. Any help appreciated. alex
Oops - posted too early and after staring at it 10 minutes more I found the easy, stupid thing ... The iteration in the each loop obviously _is_ an object and not [yet ?] a valid [X]HTML snippet that could be inserted as is into the html (although in my case it could ...).
I needed to pick out the attributes [...] or the inner text of the element [...] to produce results.
I hope that somebody else can profit from that mistake ... alex
I have two XML files.
One contains orders for my store and the other contains order details.
Could I use these jQuery scripts to grab the data from one file and the order details from the other file and merge them together?
Example:
XML file 1: 1 1 John Doe 11111 Where Ever Yep MD 111111 111-111-1111 0
XML file 2:
1 testRZ
I would like to be able to grab all the order details and attach them to the orders.
I would create two objects to hold the information contained in the two XML files.
I would then parse the 2nd XML file into a collection of OrderDetail objects. Then I'd parse the 1st XML file. When each Order is parsed, I'd lookup the corresponding OrderDetails object by the orderid and set the property to that object.
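A plain-JavaScript sketch of that lookup-and-attach approach, using ordinary objects in place of the parsed XML nodes (field names are illustrative):

```javascript
// Index the order details by orderid, then attach each order's
// details as the orders are processed. Objects stand in for XML nodes.
function mergeOrders(orders, orderDetails) {
  var detailsById = {};
  orderDetails.forEach(function (detail) {
    (detailsById[detail.orderid] = detailsById[detail.orderid] || []).push(detail);
  });
  return orders.map(function (order) {
    order.details = detailsById[order.orderid] || [];
    return order;
  });
}

var merged = mergeOrders(
  [{ orderid: 1, customer: 'John Doe' }],
  [{ orderid: 1, item: 'testRZ' }]
);
```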
SoftXpath - Lightweight cross browser JavaScript library for querying complex XML documents using powerful Xpath expressions.
Demo
I'm having a problem with the find() function. For me it works in Firefox and Chrome perfectly, but any version of Internet Explorer returns 'undefined'.
Internet Explorer is annoying as usual about this and you need to make exceptions for it. I was writing a photo gallery plugin that reads from XML and every browser except for IE was working great. Here's what I had to do:
And then in my "success" function, I had to add this:
Thank you Timothy, it worked for me. Thanks a lot.... Happy Coding...
Great, I could solve my problem with this great tutorial.
Hi There,
Thanks for the great tutorial. I have the following XML file and am wondering how I would go about adding a feature to turn off individual announcements by adding something like status="on" / status="off"?
Thanks! David
Thanks for your help. It helped finish my project.
I put a link back as resources used in my blog post:
This is how I used it to read XML with JSON; maybe people who read this blog are trying to achieve the same thing:
I am trying to do a simple plain POST with a parameter name and value, but my response is in XML. How do I parse the response, and how do I set the data type since my input is a plain name/value pair? Can I use something similar without JSON? I am new to jQuery so I am trying to make it very simple for understanding.
Thank you.
Any way to parse data from an XML file whose tags are not known beforehand?
I mean, is there any way to generalize it to parse any XML doc on the go?
Hi, thanks for this tutorial it has been illuminating.
I've just one problem - it takes about 6-12 seconds to parse :( Please help, here is my situation:
XML example: (XML is dynamically created)
(there are about 1500 'contact' nodes) -> IS IT POSSIBLE THAT IT'S SLOW BECAUSE EACH WILL ITERATE THROUGH EVERY CHILD - CAN I RESTRICT SOMEHOW SOMETHING?
BUGS:
?1) WORKS IN SAFARI/IE - DOES NOT WORK IN FIREFOX (ParseError - it doesn't even call the ParseXML function, just err)
?2) Why so slow?? Why cca 10 seconds?
PLEASE HELP THX
ps.
Would it be faster if I used NATIVE DOM METHODS? I really really like jQuery, it's so programmer friendly...
I get confused at:
Where is the parameter/variable "xml" declared and instantiated with jquery_xml.xml? Does it magically inherit it from the ajax request? Could someone explain this for me?
The xml variable is passed into the function automatically by jQuery. parseXml is supplied to jQuery as part of the ajax request.
I am making a video gallery. I found a Flash template that creates an XML file.
The XML data looks like this:
The gallery works. When I click each thumbnails, the FLV runs.
I want to put the film description in my markup. How will I do this for every film currently running?
Thank you...
Doesn't work with IE :(
Can anyone help me to read this XML file please? I just took over from the previous developer. 1. I'd like to load drop-down box 1 with the courses. 2. If the user clicks on a course, it populates the times into drop-down box 2.
It was using JavaScript to read it before. I thought that jQuery might do a better job, so I am doing this as a learning exercise with jQuery.
I am resending this question. Thanks for any help.
Restructure your XML file like this:
You can access the xml tags with the following jQuery code:
Can anyone help me to read this XML file please? I just took over from the previous developer.
It was using JavaScript to read it before. I thought that jQuery might do a better job, so I am doing this as a learning exercise with jQuery. Thanks for any help.
Try posting the comment again. Use the [...] language tag.
should be
You're definitely right about that. I'll blame that on a copy/paste error. Where I was using this code I had an another function call in that body. I've corrected the post. Thanks for finding it.
Be wary of namespaces!
Thanks for the warning! Unfortunately, I didn't think about namespaces when I wrote the article. I guess I should have mentioned somewhere that jQuery doesn't directly support them.
Can you help me out with a project? I want all my forms to be developed in jQuery and the output XML to be further processed. You can Skype me; my user id is silverbuyer. This is for everyone on here that is capable and needs the money! Cheers!
Very nice post
Java.io.Writer.write() Method
Description
The java.io.Writer.write(int c) method writes a single character. The character to be written is contained in the 16 low-order bits of the given integer value; the 16 high-order bits are ignored.
Declaration
Following is the declaration for java.io.Writer.write() method
public void write(int c) throws IOException
Parameters
c -- int specifying a character to be written
Return Value
This method does not return a value
Exception
IOException -- If an I/O error occurs
Example
The following example shows the usage of java.io.Writer.write() method.
package com.tutorialspoint;

import java.io.*;

public class WriterDemo {
   public static void main(String[] args) {
      int c = 70;

      // create a new writer
      Writer writer = new PrintWriter(System.out);

      try {
         // write an int that will be printed as ASCII
         writer.write(c);

         // flush the writer
         writer.flush();

         // write another int that will be printed as ASCII
         writer.write(71);

         // flush the stream again
         writer.flush();
      } catch (IOException ex) {
         ex.printStackTrace();
      }
   }
}
Let us compile and run the above program; this will produce the following result:
FG
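The truncation to the low-order 16 bits mentioned in the description can be seen directly with a StringWriter. This small program is a sketch for illustration and is not part of the original tutorial:

```java
import java.io.IOException;
import java.io.StringWriter;
import java.io.Writer;

public class WriteBitsDemo {
   public static void main(String[] args) throws IOException {
      Writer writer = new StringWriter();

      // 0x10046 does not fit in 16 bits; write(int) keeps only the
      // low-order 16 bits (0x0046, the character 'F') and ignores the rest.
      writer.write(0x10046);

      // chars are promoted to int, so this writes 'G'
      writer.write('G');

      System.out.println(writer.toString());
   }
}
```

Running this prints FG, the same result as the PrintWriter example above.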
In this article we will continue our exploration of RestKit, an iOS framework for working with web services. It is assumed that the reader has read Part I and has a working knowledge of RestKit. Building on the foundation we established in the introduction, we will examine the advanced capabilities of the library and learn how to accelerate our iOS development efforts.
What is covered?
- Advanced Networking: Multi-part requests, reachability, the request queue and background upload/download are all covered.
- Advanced Object Mapping: Key-value coding and relationship mapping.
- Core Data: Integration between the object mapper and Apple’s Core Data persistence framework are discussed at length. This includes configuration, relationship management, database seeding, etc.
- Integration Layers: We’ll briefly touch on the integration points exposed by the library for working with Ruby on Rails backends and interaction with Facebook’s Three20 framework.
Companion Example Code
To aid the reader in following the concepts presented here, an accompanying example application is provided with the RestKit distribution. Each section of the tutorial will refer you to a specific example in the RKCatalog example, found in the RestKit/Examples/RKCatalog directory.
At the time of this writing, RestKit is currently at version 0.9.2. Library source and example code can be downloaded from the RestKit Downloads Page.
Advanced Networking
We’ve already been introduced to the key players in the RestKit Network layer: RKClient, RKRequest, and RKResponse. These three classes provide a simple, clean API for making requests to a remote web service. In this section we’ll see how RestKit scales up when things get more complicated.
Request Serialization: Under the Hood
In the introduction to the Network layer, we learned to use RestKit to send requests using the get, put and delete methods. These methods abstract away the details of constructing a full URL, building a request, populating the request body, and asynchronously sending the request.
We also learned how to embed parameters into our requests by providing an NSDictionary of key/value pairs. Under the covers, RestKit takes this dictionary and constructs a URL encoded HTTP body to attach to the request. The Content-Type header is set to ‘application/x-www-form-urlencoded’ and the request is sent off for processing. This is a great convenience over having to construct the request bodies ourselves and it is this support that forms the basis of the object mapper.
But what about requests that can’t be represented by dictionaries or be loaded into memory all at once? We must look beyond the simplicity afforded by NSDictionary and take a look at two new entities: RKRequestSerializable and RKParams.
Though we often use NSDictionary to represent the parameters for many of our requests, RestKit does not specifically bless NSDictionary. If you look at the method signature for RKRequest’s params argument, you’ll note that it does not specify NSDictionary at all. Instead you’ll see:
@property(nonatomic, retain) NSObject<RKRequestSerializable>* params;
Note the RKRequestSerializable protocol here. RKRequestSerializable defines a very lightweight protocol that allows arbitrary classes to serialize themselves for use in the body of an RKRequest. When you import RestKit, it adds a category to NSDictionary providing an implementation of the RKRequestSerializable protocol. RKRequestSerializable defines just a couple of methods that you need to implement to make any object type serializable:
- HTTPHeaderValueForContentType - This method returns an NSString value to be used as the Content-Type header for the request. For NSDictionary, we encode the keys/values using form encoding and return ‘application/x-www-form-urlencoded’ for this method.
- HTTPHeaderValueForContentLength - This optional method returns an NSUInteger value to be used as the Content-Length header for the request. This is useful in long running upload requests for the purposes of tracking progress.
- HTTPBody - This method returns an NSData object containing the data you want to send as the body of the request. For NSDictionary, we walk through the key/value pairs and construct a URL encoded string. The string is then coerced into an NSData by using the NSUTF8StringEncoding encoding. This method is optional if you provide an implementation of HTTPBodyStream (see below).
- HTTPBodyStream - This method returns an NSStream object that should be used to read data for use in the request body. HTTPBodyStream will be consulted ahead of HTTPBody during request construction. HTTPBodyStream can be used to provide support for uploading files that exist on disk or are too big to fit into main memory. RestKit will efficiently stream the data off the disk and send it for processing.
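To make the protocol concrete, here is a minimal sketch of how a category could adopt RKRequestSerializable so that a plain NSString can be used as a request body. This category is illustrative only; it does not ship with RestKit:

```objc
// Illustrative category: lets an NSString act as the params of an
// RKRequest, serialized as a plain text body.
@interface NSString (RKRequestSerializable) <RKRequestSerializable>
@end

@implementation NSString (RKRequestSerializable)

// Value for the Content-Type header of the request
- (NSString*)HTTPHeaderValueForContentType {
    return @"text/plain";
}

// The data to send as the request body
- (NSData*)HTTPBody {
    return [self dataUsingEncoding:NSUTF8StringEncoding];
}

@end
```

With such a category in place, a call like [[RKClient sharedClient] post:@"/notes" params:@"Hello!" delegate:self] would send the string as a text/plain body (the /notes path is hypothetical).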
RKParams: Sending Multi-part Requests
Now that we understand how RestKit coerces arbitrary objects into serializable representations, we can look at another implementation of RKRequestSerializable that ships with the library: RKParams. RKParams provides a simple interface for building more complex multi-part requests. You can think of RKParams as an HTTP-aware dictionary implementation. In addition to providing simple objects for the values in our parameters, RKParams also allows us to provide NSData and paths to files on disk. We are also able to set the file name and MIME type of each parameter individually. To illustrate how this works, let’s look at some code:
NSString* myFilePath = @"/some/path/to/picture.gif";

RKParams* params = [RKParams params];

// Set some simple values -- just like we would with NSDictionary
[params setValue:@"Blake" forParam:@"name"];
[params setValue:@"blake@restkit.org" forParam:@"email"];

// Create an Attachment
RKParamsAttachment* attachment = [params setFile:myFilePath forParam:@"image1"];
attachment.MIMEType = @"image/gif";
attachment.fileName = @"picture.gif";

// Attach an Image from the App Bundle
UIImage* image = [UIImage imageNamed:@"another_image.png"];
NSData* imageData = UIImagePNGRepresentation(image);
[params setData:imageData MIMEType:@"image/png" forParam:@"image2"];

// Let's examine the RKRequestSerializable info...
NSLog(@"RKParams HTTPHeaderValueForContentType = %@", [params HTTPHeaderValueForContentType]);
NSLog(@"RKParams HTTPHeaderValueForContentLength = %d", [params HTTPHeaderValueForContentLength]);

// Send a Request!
[[RKClient sharedClient] post:@"/uploadImages" params:params delegate:self];
Essentially what we are doing here is creating a stack of RKParamsAttachment objects that are contained within the RKParams instance. With every call to setValue, setFile, or setData we are instantiating a new instance of RKParamsAttachment and adding it to the stack. Each of these methods returns the RKParamsAttachment object it has created for you so that you can further customize it if need be. We see this used to set the MIMEType and fileName properties for image. When we assign the params object to the RKRequest, it is serialized into a multipart/form-data document and read as a stream by the underlying NSURLConnection. This streaming behavior allows RKParams to be used for reading very large files off of disk without exhausting memory on an iOS device.
Example Code - See RKParamsExample in RKCatalog
The Request Queue
While you have been happily sending requests and processing responses with RKClient, RKRequest, and RKResponse another part of RestKit has been quietly operating behind the scenes, without your knowledge: RKRequestQueue. The Request Queue is an important support player in the RestKit world and becomes increasingly important as your application grows in scope. RKRequestQueue has three primary responsibilities: managing request memory, ensuring the network does not get overly burdened, and managing request life cycle.
Memory management can become very tiresome in Cocoa applications, as so much work happens asynchronously. RestKit is all about reducing the complexity and ceremony associated with working with web services in Cocoa and as such provides RKRequestQueue to shift the memory management concerns away from the application developer and into the framework. Recall what a typical RestKit request/response looks like:
- (void)sendARequest {
    RKRequest* request = [[RKClient sharedClient] get:@"/some/path" delegate:self];
    NSLog(@"Sent a request! %@", request);
}

- (void)request:(RKRequest*)request didLoadResponse:(RKResponse*)response {
    if ([response isJSON]) {
        NSLog(@"Got a JSON response back!");
    }
}
Notice that there isn’t a single call to retain, release or autorelease anywhere in sight. This is where the request queue comes into play. When we ask to send an RKRequest object, it isn’t immediately dispatched. Instead it is retained by the RKRequestQueue sharedQueue instance and sent as soon as possible. RestKit watches for notifications generated by RKRequest & RKResponse and releases its hold on the request after processing has completed. This allows us to work with web services with very little thought about the memory management.
In addition to retaining & releasing RKRequest instances, RKRequestQueue also serves as a gatekeeper to the network access itself. When the application is first launched or returns from a background state, RestKit uses its integration with the System Configuration Reachability API’s to determine if any network access is available. When talking to a remote server by hostname, there can be a delay between launch and the determination of network availability. During this time, RestKit is in an indeterminate reachability state and RKRequestQueue will defer sending any requests until network reachability can be determined. Once reachability is determined, RKRequestQueue prevents the network from becoming overburdened by limiting the number of concurrent requests to five.
Once your user interfaces begin spanning multiple controllers and users are navigating the controller stack quickly, you may begin generating a number of requests that do not need to be completed because the user has dismissed the view. Here we turn to RKRequestQueue as well. Let’s imagine that we have a controller that immediately begins loading some data when the view appears. But the controller also has a number of buttons that the user can quickly access to change perspectives, making the request we kicked off no longer of interest. We can either hold on to the instances of RKRequest that we generate or we can let RKRequestQueue do the work for us. Let’s see how this would work:
- (void)viewWillAppear:(BOOL)animated {
    /**
     * Ask RKClient to load us some data. This causes an RKRequest object to be created and
     * transparently pushed onto the RKRequestQueue sharedQueue instance
     */
    [[RKClient sharedClient] get:@"/some/data.json" delegate:self];
}

// We have been dismissed -- clean up any open requests
- (void)dealloc {
    [[RKRequestQueue sharedQueue] cancelRequestsWithDelegate:self];
    [super dealloc];
}

// We have been obscured -- cancel any pending requests
- (void)viewWillDisappear:(BOOL)animated {
    [[RKRequestQueue sharedQueue] cancelRequestsWithDelegate:self];
}
Rather than managing the request ourselves and doing the housekeeping, we can just ask RKRequestQueue to cancel any requests that we are the delegate for. If there are none currently processing, no action will be taken.
A sharedQueue singleton instance is created for you at framework initialization time. It is also possible to create additional ad-hoc queues to manage groups of requests more granularly. For example, an ad-hoc queue could be useful for downloading or uploading content in the background, while keeping the main shared queue free for responding to user actions. Let's take a look at an example of using an ad-hoc queue:
- (IBAction)queueRequests {
    RKRequestQueue* queue = [[RKRequestQueue alloc] init];
    queue.delegate = self;
    queue.concurrentRequestsLimit = 1;
    queue.showsNetworkActivityIndicatorWhenBusy = YES;

    // Queue up 4 requests
    // ...

    // Start processing!
    [queue start];
}
In this example we have created an ad-hoc queue that dispatches one request at a time and spins the system network activity indicator. There are a number of delegate methods available for the request queue to make managing groups of requests easier. Check out the RKRequestQueue example in RKCatalog for detailed examples.
Example Code - See RKRequestQueueExample in RKCatalog
Reachability
As iOS developers, one of the annoying realities we face and must deal with is that of intermittent connectivity. As our users move throughout their world with our applications, connectivity is guaranteed to come and go. Coding for this reality can complicate our logic and distort the intent in our code with conditional logic.
To make matters worse, the SCReachability API’s available to us for monitoring network status are implemented as low level C API’s. To help ease this burden and provide a platform for higher level functionality, RestKit ships with a wrapper around the low level SCReachability API’s: RKReachabilityObserver.
RKReachabilityObserver abstracts away the SCReachability C API’s and instead presents a very straight-forward Objective-C interface for determining network status. Let’s take a look at some code:
- (void)workWithReachability {
    /**
     * Initialize an observer against a hostname. Note that we can also provide an IP address in the hostname
     * string and RestKit will configure the observer to watch the network address instead of the host
     */
    RKReachabilityObserver* observer = [RKReachabilityObserver reachabilityObserverWithHostName:@"restkit.org"];

    // Let the run-loop execute so reachability can be determined
    [[NSRunLoop currentRunLoop] runUntilDate:[NSDate dateWithTimeIntervalSinceNow:1.0]];

    if ([observer isNetworkReachable]) {
        NSLog(@"We have network access! Huzzah!");

        if ([observer isConnectionRequired]) {
            NSLog(@"Network is available if we open a connection...");
        }

        if (RKReachabilityReachableViaWiFi == [observer networkStatus]) {
            NSLog(@"Online via WiFi!");
        } else if (RKReachabilityReachableViaWWAN == [observer networkStatus]) {
            NSLog(@"Online via 3G or Edge...");
        }
    } else {
        NSLog(@"No network access.");
    }
}
Now that we’ve seen how to initialize and work with RKReachabilityObserver, it’s worth noting that most of the time we don’t have to! When you initialize RKClient or RKObjectManager with a base URL, RestKit goes ahead and initializes an instance of RKReachabilityObserver targeted at the hostname specified in your base URL. This observer is available to you via the baseURLReachabilityObserver property on RKClient. RKReachabilityObserver also emits notifications whenever network state changes. Typically these events are all that you are interested in. Observing the reachability events is easy:
@implementation ReachabilityInterestedClass

- (id)init {
    if ((self = [super init])) {
        [[NSNotificationCenter defaultCenter] addObserver:self
                                                 selector:@selector(reachabilityChanged:)
                                                     name:RKReachabilityStateChangedNotification
                                                   object:nil];
    }
    return self;
}

- (void)reachabilityChanged:(NSNotification*)notification {
    RKReachabilityObserver* observer = (RKReachabilityObserver*)[notification object];
    if ([observer isNetworkAvailable]) {
        NSLog(@"We're online!");
    } else {
        NSLog(@"We've gone offline!");
    }
}

@end
Example Code - See RKReachabilityExample in RKCatalog
We’ll explore how RestKit leverages Reachability internally to provide transparent offline access in the Core Data object caching section.
Background Upload/Download
With iOS 4.0, Apple introduced multi-tasking support for applications. The multi-tasking support relies on putting apps into a background state, where they can be quickly restored to full interactive mode, but not consume resources while in the background. This can present a problem for network applications such as those built with RestKit: important long-running requests can be interrupted by the user switching out of the app. Thankfully Apple also provided limited support for extending the life of your process by creating a background task using the beginBackgroundTaskWithExpirationHandler method on UIApplication. This method accepts an Objective-C block and returns a UIBackgroundTaskIdentifier value for the background task that was created. Care must be taken when creating the background task so that backward compatibility with iOS 3.0 deployments is maintained.
RestKit seeks to ease this burden on the developer by providing simple, transparent support for background tasks during the request cycle. Let’s take a look at some code:
- (void)backgroundUpload {
    RKRequest* request = [[RKClient sharedClient] post:@"somewhere" params:nil delegate:self];

    // Take no action with regard to backgrounding
    request.backgroundPolicy = RKRequestBackgroundPolicyNone;

    // If the app switches to the background, cancel the request
    request.backgroundPolicy = RKRequestBackgroundPolicyCancel;

    // Continue the request in the background
    request.backgroundPolicy = RKRequestBackgroundPolicyContinue;

    // Cancel the request and place it back on the queue for next activation
    request.backgroundPolicy = RKRequestBackgroundPolicyRequeue;
}
The default policy is RKRequestBackgroundPolicyNone. Once you have set your policy and sent your request, RestKit handles the rest — switching in and out of the app will cause the appropriate action to happen.
Example Code - See RKBackgroundRequestExample in RKCatalog
Advanced Object Mapping
In part one of our series, we introduced the concept of Object Mapping — the RestKit process for converting JSON payloads into local domain objects. In its most simple form, Object Mapping reduces the tedium associated with parsing simple fields out of a dictionary and assigning them to a target object. But that’s not the end of the story for the object mapper.
RestKit also provides support for mapping hierarchies of objects expressed through relationships. This is a powerful feature for importing a large amount of data via a single HTTP request. In this section we’ll explore relationship mapping in detail and look at how RestKit supports non-idiomatic JSON structures via key-value coding.
Dealing with Alternate JSON Structures
Most of the object mapping examples we have examined so far have performed simple mappings from one field to another (i.e. created_at becomes createdAt). If you have complete control over the JSON output of the backend system or are exactly modeling the server side output, this may be all that you ever need to do. But sometimes the realities of the backend system we need to integrate with do not fit so neatly with RestKit’s view of the world. If your target JSON contains nested data that you wish to access without decomposing the structures into multiple object types, you will need to leverage the power of key-value coding in your mappings.
Key-value coding is a mechanism for accessing data indirectly in Cocoa by use of a string containing property names, rather than invoking accessor methods or directly accessing instance variables. Key-value coding is one of the most important patterns in Cocoa and is the foundation of RestKit’s object mapping system. When RestKit encounters a data payload it knows how to handle, the payload is handed off to the parser for evaluation. The parser then evaluates the data and returns a key-value coding compliant representation of the data in the form of arrays, dictionaries, and basic types. From here, the object mapper begins analyzing the representation and using key-value accessors to retrieve data and assign it to the target object instance. Every time you register an element to class mapping or define an element to property mapping, you are providing a Key-value coding compliant key path to the mapper. This is an important point — you have the full power of the key-value pattern at your disposal. For example, you can traverse the object graph via dot notation syntax and utilize collection operators to perform actions on collections within your payload.
Let’s take a look at some code to get a better understanding of how key-value coding works within RestKit. Consider the following JSON structure for a simplified bank account application:
{
    "id": 1234,
    "name": "Personal Checking",
    "balance": 5013.26,
    "transactions": [
        {"id": 1, "payee": "Joe Blow", "amount": 50.16},
        {"id": 2, "payee": "Grocery Store", "amount": 200.15},
        {"id": 3, "payee": "John Doe", "amount": 325.00},
        {"id": 4, "payee": "Grocery Store", "amount": 25.15}
    ]
}
We are going to use key-value coding to access some information within the payload: the number of transactions, the average amount of the transactions, and the distinct group of payees in the transactions list. Here is our model:
@interface SimpleAccount : RKObject {
    NSNumber* _accountID;
    NSString* _name;
    NSNumber* _balance;
    NSNumber* _transactionsCount;
    NSNumber* _averageTransactionAmount;
    NSArray* _distinctPayees;
}

@property (nonatomic, retain) NSNumber* accountID;
@property (nonatomic, retain) NSString* name;
@property (nonatomic, retain) NSNumber* balance;
@property (nonatomic, retain) NSNumber* transactionsCount;
@property (nonatomic, retain) NSNumber* averageTransactionAmount;
@property (nonatomic, retain) NSArray* distinctPayees;

@end

@implementation SimpleAccount

+ (NSDictionary*)elementToPropertyMappings {
    return [NSDictionary dictionaryWithKeysAndObjects:
            @"id", @"accountID",
            @"name", @"name",
            @"balance", @"balance",
            @"transactions.@count", @"transactionsCount",
            @"transactions.@avg.amount", @"averageTransactionAmount",
            @"transactions.@distinctUnionOfObjects.payee", @"distinctPayees",
            nil];
}

@end

// -- snip --

- (void)workWithKVC {
    [[RKObjectManager sharedManager] loadObjectsAtResourcePath:@"/accounts.json"
                                                   objectClass:[SimpleAccount class]
                                                      delegate:self];
}

- (void)objectLoader:(RKObjectLoader*)objectLoader didLoadObjects:(NSArray*)objects {
    SimpleAccount* account = [objects objectAtIndex:0];

    // Will output "The count is 4"
    NSLog(@"The count is %@", [account transactionsCount]);

    // Will output "The average transaction amount is 150.115"
    NSLog(@"The average transaction amount is %@", [account averageTransactionAmount]);

    // Will output "The distinct list of payees is: Joe Blow, Grocery Store, John Doe"
    NSLog(@"The distinct list of payees is: %@", [[account distinctPayees] componentsJoinedByString:@", "]);
}
Now things are getting interesting. Note the new syntax utilized after the balance mapping is defined. We have used key-value coding to traverse down to the transactions array and perform operations on the data. From here, we see the use of several Key-value collection operators. These operators are provided to us by Cocoa and detailed in the “Key-Value Coding Programming Guide” available in the Xcode documentation. The key lesson here is to remember that the object mapper is built with key-value coding in mind and there is additional firepower available beyond simple one-to-one mappings.
Taking advantage of key-value coding in your mappings becomes very useful when working with large, complex JSON payloads where you only care about a subset of the data. But sometimes we actually do care about all that extra information — we just wish it was available in a more accessible format. In these circumstances we can instead turn to the use of relationship modeling to help RestKit transform a big data payload into an object graph.
Example Code - See RKKeyValueMappingExample in RKCatalog
Modeling Relationships
Relationship modeling is expressed via the elementToRelationshipMappings method on the RKObjectMappable protocol. This method instructs the mapper to take a nested JSON dictionary, perform mapping operations, and assign the resulting object (or objects) to the property with the given name. This process is repeated for each mapping operation, allowing object graphs of arbitrary depth to be constructed.
To understand how this works, let’s take a look at an example. We are going to walk through the implementation of a Task List data model to illustrate the principles of relationship modeling. The task list example code is contained within the RKRelationshipMappingExample code in the RKCatalog application. Within RKRelationshipMappingExample, there are three data models that are related to one another: Users, Projects, and Tasks. Users are people working within the system. Projects contain a discrete set of steps that work toward a concrete goal that can be completed. Tasks represent each of these concrete steps within a Project. The relationships between them are:
- User has many Projects
- Each Project belongs to a single User
- Each Task belongs to a single Project
- Each Task can be assigned to a single User
The data models can be found in the Code/Models directory of the sample application.
Our application is very simple from a user interface standpoint. We have a single table view that shows all the Projects in the system and the name of the User who created the Project. Clicking on the Project pushes a secondary table view into view that shows all the Tasks contained in the Project. Rather than making multiple requests to individual resource collections to build the view, we are going to request the entire object graph from a single resource path ‘/task_list’. The JSON returned by the resource path looks like:
[{"project": {
    "id": 123,
    "name": "Produce RestKit Sample Code",
    "description": "We need more sample code!",
    "user": {
        "id": 1,
        "name": "Blake Watters",
        "email": "blake@twotoasters.com"
    },
    "tasks": [
        {"id": 1, "name": "Identify samples to write", "assigned_user_id": 1},
        {"id": 2, "name": "Write the code", "assigned_user_id": 1},
        {"id": 3, "name": "Push to Github", "assigned_user_id": 1},
        {"id": 4, "name": "Update the mailing list", "assigned_user_id": 1}
    ]
}},
{"project": {
    "id": 456,
    "name": "Document Object Mapper",
    "description": "The object mapper could really use some docs!",
    "user": {
        "id": 2,
        "name": "Jeremy Ellison",
        "email": "jeremy@twotoasters.com"
    },
    "tasks": [
        {"id": 5, "name": "Mark up methods with Doxygen markup", "assigned_user_id": 2},
        {"id": 6, "name": "Generate docs and review formatting", "assigned_user_id": 2},
        {"id": 7, "name": "Review docs for accuracy and completeness", "assigned_user_id": 1},
        {"id": 8, "name": "Publish to Github", "assigned_user_id": 2}
    ]
}},
{"project": {
    "id": 789,
    "name": "Wash the Cat",
    "description": "Mr. Fluffy is looking like Mr. Scruffy! Time for a bath!",
    "user": {
        "id": 3,
        "name": "Rachit Shukla",
        "email": "rachit@twotoasters.com"
    },
    "tasks": [
        {"id": 9, "name": "Place cat in bathtub", "assigned_user_id": 3},
        {"id": 10, "name": "Run water", "assigned_user_id": 3},
        {"id": 11, "name": "Try not to get scratched", "assigned_user_id": 3}
    ]
}}]
This JSON collection is oriented around an array of Project models, with nested relationship structures. Let’s look at the implementation of our Project class:
@interface Project : RKObject {
    NSNumber* _projectID;
    NSString* _name;
    NSString* _description;
    User* _user;
    NSArray* _tasks;
}

@property (nonatomic, retain) NSNumber* projectID;
@property (nonatomic, retain) NSString* name;
@property (nonatomic, retain) NSString* description;
@property (nonatomic, retain) User* user;
@property (nonatomic, retain) NSArray* tasks;

@end

@implementation Project

+ (NSDictionary*)elementToPropertyMappings {
    return [NSDictionary dictionaryWithKeysAndObjects:
            @"id", @"projectID",
            @"name", @"name",
            @"description", @"description",
            nil];
}

+ (NSDictionary*)elementToRelationshipMappings {
    return [NSDictionary dictionaryWithKeysAndObjects:
            @"user", @"user",
            @"tasks", @"tasks",
            nil];
}

@end
Here we see the new invocation to elementToRelationshipMappings. If you glance back at the JSON structure, you can see that the declaration is instructing the object mapper to take the data contained in ‘user’ and ‘tasks’ sub-dictionaries, map them into objects, and assign the User and array of Task objects to the Project. When all of this has been completed, the object mapper will return the results and the complete object graph will be sent to your object loader delegate for processing.
Example Code - See RKRelationshipMappingExample in RKCatalog
Persistence with Core Data
NOTE - It is assumed for the purposes of this tutorial that the reader is familiar with Core Data.
Perhaps the most powerful weapon in RestKit’s arsenal is the seamless integration with Apple’s Core Data technology. Core Data provides a queryable, object persistence framework that is available on OS X and iOS. Building on the foundation of object mapping, RestKit enables the developer to create a persistent mirror of data contained within a remote backend system with very little code. There are a lot of moving pieces involved in providing such a high level of abstraction, so let’s meet the key players before diving into the details:
- RKManagedObjectStore - The object store wraps the initialization and configuration of internal Core Data classes including NSManagedObjectModel, NSPersistentStoreCoordinator, and NSManagedObjectContext. The object store is also responsible for managing object contexts for each thread and managing changes between threads. In general, the object store seeks to remove as much boilerplate Core Data code as possible from the main application.
- RKManagedObject - The superclass of all RestKit persistent objects. RKManagedObject inherits from NSManagedObject and extends the API with a number of helpful methods. This is an RKObjectMappable class and is configured for mapping in the same way as transient RKObject instances.
- RKManagedObjectLoader - When Core Data support has been linked into your application, this descendant of RKObjectLoader handles the processing of object load requests. It knows how to uniquely identify Core Data backed objects and hides the complexities of passing NSManagedObject instances across threads. It is also responsible for deleting objects from the local store when a DELETE is processed successfully.
- RKManagedObjectCache - The object cache protocol defines a single method for mapping resource paths to a collection of fetch requests for pulling local copies of objects that ‘live’ at a given resource path. We’ll cover this in detail below.
- RKManagedObjectSeeder - The object seeder provides an interface for creating a SQLite database that is loaded with local copies of remote objects. This can be used to bootstrap a large local database so that no lengthy synchronization process is necessary when the app is first downloaded from the App Store. Seeding is covered in detail below as well.
It is worth noting that there is nothing special about RestKit's utilization of Core Data. The framework uses standard APIs and streamlines common tasks. You can integrate RestKit into an existing Core Data backed application without much trouble. Once integrated, you can use all the familiar Core Data APIs — you don't have to stick to what RestKit exposes via RKManagedObject.
Getting Started with Core Data
Enabling persistent object mapping is a relatively straight-forward process. It differs from transient object mapping in only a few ways:
- libRKCoreData.a must be linked into your target
- Apple’s CoreData.framework must be linked to your target
- A Data Model Resource must be added to your target and configured within Xcode
- The RestKit Core Data headers must be imported via #import <RestKit/CoreData/CoreData.h>
- An instance of RKManagedObjectStore must be configured and assigned to the object manager
- Persistent models inherit from RKManagedObject rather than RKObject
- A primary key property must be defined on each persistent model by implementing the primaryKeyProperty method
- Implementations for properties on managed objects are generated via @dynamic rather than @synthesize
Once these configuration changes have been completed, RestKit will load & map payloads into Core Data backed classes.
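Pulling the model-related items of this checklist together, a minimal persistent model might look like the following sketch. The Article class and its properties here are illustrative assumptions, mirroring the example used later in this section:

```objc
// A hypothetical minimal RKManagedObject subclass illustrating the checklist above
@interface Article : RKManagedObject

@property (nonatomic, retain) NSNumber* articleID;
@property (nonatomic, retain) NSString* title;

@end

@implementation Article

// Properties on managed objects are backed by Core Data, so use @dynamic
@dynamic articleID, title;

// Tell RestKit which property uniquely identifies an Article
+ (NSString*)primaryKeyProperty {
    return @"articleID";
}

@end
```

Remember that the entity named Article must also exist in your Data Model resource, with its destination class set to Article.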
There are a couple of common gotchas and things to keep in mind when working with Core Data:
- You can utilize a mix of persistent and transient models within the application — even within the same JSON payload. RestKit will determine if the target object is backed by Core Data at runtime and will return managed and unmanaged objects as appropriate.
- RestKit expects that each instance of an object be uniquely identifiable via a single primary key that is present in the payload. This allows the mapper to differentiate between new, updated and removed objects.
- When configuring your Data Model resource, care must be taken to ensure that the destination class is set to your desired model class. It defaults to NSManagedObject and must be updated appropriately. Failure to do this will result in exceptions from within the mapper when RestKit methods are invoked on an instance of NSManagedObject.
- Use of threading in Core Data requires some special care. You cannot safely pass managed object instances across thread boundaries. They must be serialized to NSManagedObjectID, handed off between threads, and then refetched from the managed object context. RKObjectLoader performs JSON parsing and object mapping on background threads and handles the thread jumping & object fetching for you. But you must take care if you introduce threading (including the use of performSelector:withObject:afterDelay:) in your application code.
- Apple recommends utilizing one managed object context instance per thread. When you retrieve a managed object context from RKManagedObjectStore, a new instance is created and stored onto thread local storage if the calling thread is not the main thread. You don’t need to worry about managing the life-cycle of the managed object contexts or merging changes — the object store observes these thread-local contexts and handles merging changes back into the main object context.
- RestKit makes some blanket assumptions about how you are using Core Data that may not be appropriate for your application. This includes the merge policy used on object contexts, the options provided during initialization of the persistent store coordinator, etc. If you need more flexibility than is provided out of the box, reach out to the team and we’ll help loosen up these assumptions.
- RestKit assumes that you use an entity with the same name as your model class in the data model.
- There is not currently any framework level help for working with store migrations.
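As a concrete illustration of the threading rule above, the sketch below hands an object between threads by its ID rather than passing the object itself. The article variable, the queue choice, and the managedObjectContext accessor name are assumptions for illustration, not RestKit requirements:

```objc
// Sketch: never pass an NSManagedObject across threads directly.
// Serialize it to an NSManagedObjectID and re-fetch on the other side.
NSManagedObjectID* objectID = [article objectID];

dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    // The object store hands back a context appropriate for the calling thread
    NSManagedObjectContext* context =
        [[RKObjectManager sharedManager].objectStore managedObjectContext];

    NSError* error = nil;
    Article* localArticle = (Article*)[context existingObjectWithID:objectID error:&error];
    if (localArticle) {
        // Safe to use localArticle on this background thread
    }
});
```

The same pattern applies in reverse when a background thread needs to hand results back to the main thread.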
For help getting started with Core Data, please refer to the RKTwitter and RKTwitterCoreData projects in the Examples/ directory of the RestKit distribution. These projects provide identical implementations of a simple modeling of the Twitter timeline except that one is persistently backed by Core Data.
Working with Core Data
Now that we have a grounding in the basic requirements for adding Core Data to a RestKit project, let’s take a look at some code to help us make things happen. All the code in this section is contained in the Examples/RKCoreDataExamples project for reference. Please pop the project open and follow along as we work through the examples.
First, we need to actually get RestKit and Core Data initialized. Open RKCDAppDelegate.m and note the following snippets of code:
// Import RestKit's Core Data support
#import <RestKit/CoreData/CoreData.h>

RKObjectManager* manager = [RKObjectManager objectManagerWithBaseURL:@""];
manager.objectStore = [RKManagedObjectStore objectStoreWithStoreFilename:@"RKCoreDataExamples.sqlite"];
What we have done here is instantiated an instance of the object manager and an instance of the managed object store. Within the internals of RKManagedObjectStore, an NSManagedObjectModel and an NSPersistentStoreCoordinator have been created for you. A persistent store file is created or reopened for you within the application's documents directory and is configured to use SQLite as the backing technology. From here you have a working Core Data environment ready to go.
Now let’s take a look at the rather anemic model in Examples/RKCatalog/Examples/RKCoreDataExample/RKCoreDataExample.m:
@implementation Article

+ (NSDictionary*)elementToPropertyMappings {
    return [NSDictionary dictionaryWithKeysAndObjects:
            @"id", @"articleID",
            @"title", @"title",
            @"body", @"body",
            nil];
}

+ (NSString*)primaryKeyProperty {
    return @"articleID";
}

@end
Here we see the familiar elementToPropertyMappings method from RKObjectMappable. The only thing new here is the implementation of a method indicating the primary key. This allows the object mapper to know that, when working with instances of Article, it should consult the articleID property to obtain the primary key value for the instance. This allows RestKit to update the properties for this object no matter what resource path it is loaded from.
Now let’s explore some of the API’s exposed via RKManagedObject. Loading all objects of a given type is trivial:
- (void)loadAllObjects {
    NSArray* objects = [Article allObjects];
    NSLog(@"We loaded %d objects", [objects count]);
}
Here we are retrieving all the objects for a given class from Core Data. This wraps the initialization, configuration and execution of a fetch request targeting the entity for our class. We can also configure our own fetch requests or utilize a number of helper methods to quickly perform common tasks:
- (void)funWithFetchRequests {
    // Grab a fetch request configured to target the Article entity
    NSFetchRequest* fetchRequest = [Article fetchRequest];
    NSLog(@"My fetch request is: %@", fetchRequest);

    // Configure a fetch request to sort the results by title
    NSFetchRequest* sortedRequest = [Article fetchRequest];
    NSSortDescriptor* sortDescriptor = [NSSortDescriptor sortDescriptorWithKey:@"title" ascending:YES];
    [sortedRequest setSortDescriptors:[NSArray arrayWithObject:sortDescriptor]];
    NSArray* sortedObjects = [Article objectsWithFetchRequest:sortedRequest];
    NSLog(@"Here are the objects sorted: %@", sortedObjects);

    // Fetch an object by primary key
    Article* firstArticle = [Article objectWithPrimaryKeyValue:[NSNumber numberWithInt:1]];
    NSLog(@"This is the Article with ID 1: %@", firstArticle);

    // Find Articles where the body contains the word 'something' case insensitively
    NSPredicate* predicate = [NSPredicate predicateWithFormat:@"body CONTAINS[c] %@", @"something"];
    NSArray* matches = [Article objectsWithPredicate:predicate];
    NSLog(@"Found the following Articles that match: %@", matches);
}
All of these methods are defined on RKManagedObject and provide short-cuts for features provided directly by Core Data. You can certainly configure your own fetch request entirely:
- (NSFetchRequest*)constructMyOwnFetchRequest {
    NSFetchRequest* fetchRequest = [[[NSFetchRequest alloc] init] autorelease];
    // The entity is available via an RKManagedObject helper method...
    NSEntityDescription* entity = [Article entity];
    [fetchRequest setEntity:entity];
    return fetchRequest;
}
These Core Data helper methods are used to drive a simple table view in the RKCoreDataExample in RKCatalog.
Example Code - See RKCoreDataExample in RKCatalog
Automagic Relationship Management
One of the nicest benefits of using Core Data with RestKit is that you wind up with a nicely hydrated object graph that lets you traverse object relationships naturally. Relationship population is handled through the use of the elementToRelationshipMappings method we introduced in the previous section on modeling relationships. Recall that elementToRelationshipMappings instructs the mapper to look for associated objects nested as a sub-dictionary within the JSON payload. But this can present a problem for a Core Data backed app — if you do not return all the relationships you have modeled within your payload, the graph can become stale and out of sync with the server. Not to mention that returning all relationships is often incorrect from an API design or performance perspective. So what are we to do?

RestKit solves this problem by introducing a new mapper configuration directive specific to Core Data objects: relationshipToPrimaryKeyPropertyMappings. The relationship to primary key mappings definition instructs the mapper to connect a Core Data relationship by using the value stored in another property to look up the target object. This is easily understood by returning to the Task List data model we explored earlier. Recall that the JSON for an individual task looked like this:
{"id": 5, "name": "Place cat in bathtub", "assigned_user_id": 3}
Note the assigned_user_id element in the payload — this is the primary key value for the User object that the Task has been assigned to. Let's look at the code:
@interface Task : RKManagedObject {
}

@property (nonatomic, retain) NSNumber* taskID;
@property (nonatomic, retain) NSString* name;
@property (nonatomic, retain) NSNumber* assignedUserID;
@property (nonatomic, retain) User* assignedUser;

@end

@implementation Task

+ (NSDictionary*)elementToPropertyMappings {
    return [NSDictionary dictionaryWithKeysAndObjects:
            @"id", @"taskID",
            @"name", @"name",
            @"assigned_user_id", @"assignedUserID",
            nil];
}

+ (NSDictionary*)relationshipToPrimaryKeyPropertyMappings {
    return [NSDictionary dictionaryWithObject:@"assignedUserID" forKey:@"assignedUser"];
}

@end
Note the definition of relationshipToPrimaryKeyPropertyMappings — we have informed the mapper that the assignedUserID property contains the value of the primary key for the assignedUser relationship. When the mapper sees this, it will reflect on the relationship to determine its type (in this case, a User) and assign object.assignedUser = User.findByPrimaryKeyValue(object.assignedUserID). The target object must exist within the local data store or the relationship will be set to nil.
Example Code - See RKRelationshipMappingExample in RKCatalog
Going Offline: Using the Object Cache
A primary use case for RestKit's Core Data integration is to provide offline access to remote content. In fact, it was from this need that RestKit was born — when development began on GateGuru (an excellent, essential app for the iPhone wielding airline traveler) a primary requirement was the ability to access information at 30,000 feet. What we really wanted was a common programming interface that would work regardless of online or offline status — if we have network connectivity, ping the remote server and give me the results, otherwise hit the cache and give me the most recent local results available. Because we designed our web services RESTfully, we could easily construct URLs that would access the data for a particular airport, terminal, etc. We realized that we could reach our API nirvana by utilizing the resource path we load remote objects from as a key into our persistent store. This is precisely what the RKManagedObjectCache protocol allows you to do:
/**
 * Must return an array containing NSFetchRequests for use in retrieving locally
 * cached objects associated with a given request resourcePath.
 */
- (NSArray*)fetchRequestsForResourcePath:(NSString*)resourcePath;
To utilize the object cache, you need to provide an implementation of RKManagedObjectCache and assign it to the managed object store. The implementation of the method needs to parse the resource path and construct one or more fetch requests that can be used to retrieve objects that 'live' at that resource path. For example, in an app like GateGuru that has a collection of airport objects, your implementation might look something like this:
@interface MyObjectCache : NSObject <RKManagedObjectCache> {
}

@end

@implementation MyObjectCache

- (NSArray*)fetchRequestsForResourcePath:(NSString*)resourcePath {
    if ([resourcePath isEqualToString:@"/airports"]) {
        // A fetch request with an entity set, but nothing else, fetches all objects
        NSFetchRequest* fetchRequest = [Airport fetchRequest];
        return [NSArray arrayWithObject:fetchRequest];
    }
    return nil;
}

@end

MyObjectCache* cache = [[MyObjectCache alloc] init];
[RKObjectManager sharedManager].objectStore.managedObjectCache = cache;
[cache release];
An array of fetch requests is returned to support the case where your remote endpoint returns objects of more than one type — Core Data fetch requests can only target a single entity. Once you have provided an implementation and covered all your resource paths, you can retrieve the locally cached objects for a resource path from the object store:
NSArray* objects = [[RKObjectManager sharedManager].objectStore objectsForResourcePath:@"/airports"];
The object cache is used extensively within the Three20 integration layer we will discuss in more detail below. In summary, if you are using Three20 in your application, RestKit ships with an object-cache-aware implementation of TTModel that can be used to populate a table with data from an object loader or the cache.
Handling Remote Object Deletion
In addition to providing the basis for offline support, the object cache provides another important feature: intelligent handling of server-side object deletion. If you have provided an implementation of the object cache in your application, RestKit will prune objects that currently exist in the local store but have disappeared from the remote payload for a cached resource path. If your application has many resource paths that can load the same objects, it is important that you handle each path and return fetch requests covering all the objects.
If you are not using the object cache, you must handle server-side object deletion manually by some other means.
Database Seeding
In a Core Data backed application, it can be highly desirable to ship your application to the App Store with content already available in the local store. RestKit includes a simple object seeding implementation for this purpose via the RKObjectSeeder class.

RKObjectSeeder provides an interface for opening one or more JSON files stored in the local bundle, processing them with the mapper, and then outputting instructions for how to obtain the seed database for use in your application. The seeding process typically looks like this:
- Generate a dump file for each of your persistent object types from your backend system in JSON format.
- Duplicate your existing application target and name the new target “Generate Seed Database”.
- View the Build Settings for your target and find the GCC - Preprocessing section.
- In the section named "Preprocessor Macros", add a new preprocessor macro: RESTKIT_GENERATE_SEED_DB. This value will be defined when we build and run the seeder target.
- Add your JSON dump files to the “Generate Seed Database” target and ensure they are copied into the application bundle.
- Update your application delegate to check for RESTKIT_GENERATE_SEED_DB and instantiate an instance of RKObjectSeeder.
- Initialize an instance of RKObjectSeeder with your fully configured instance of RKObjectManager.
- Invoke the appropriate methods on the RKObjectSeeder instance for each of your JSON dump files.
- When finished, invoke the finalizeSeedingAndExit method on the RKObjectSeeder instance.
The seeder is designed to be run in the Simulator on your Mac. When you invoke finalizeSeedingAndExit, the library will log details out to the console about where you can obtain the SQLite seed database. Once you have obtained a copy of the seed database, you add it to your project as a resource to copy into the app bundle. Once you have added the seed database to your application, you simply modify your initialization of RKManagedObjectStore to indicate that you have a seed database to start with rather than a blank slate.
Let’s take a look at some example code, taken from the RKTwitterCoreData example, that highlights how to work with the seeder:
// Database seeding is configured as a copied target of the main application. There are only two differences
// between the main application target and the 'Generate Seed Database' target:
// 1) RESTKIT_GENERATE_SEED_DB is defined in the 'Preprocessor Macros' section of the build setting for the target.
//    This is what triggers the conditional compilation to cause the seed database to be built.
// 2) Source JSON files are added to the 'Generate Seed Database' target to be copied into the bundle. This is
//    required so that the object seeder can find the files when run in the simulator.
#ifdef RESTKIT_GENERATE_SEED_DB
    RKManagedObjectSeeder* seeder = [RKManagedObjectSeeder objectSeederWithObjectManager:objectManager];

    // Seed the database with instances of RKTStatus from a snapshot of the RestKit Twitter timeline
    [seeder seedObjectsFromFile:@"restkit.json" toClass:[RKTStatus class] keyPath:nil];

    // Seed the database with RKTUser objects. The class will be inferred via element registration
    [seeder seedObjectsFromFiles:@"users.json", nil];

    // Finalize the seeding operation and output a helpful informational message
    [seeder finalizeSeedingAndExit];

    // NOTE: If all of your mapped objects use element -> class registration, you can perform seeding in one line of code:
    // [RKManagedObjectSeeder generateSeedDatabaseWithObjectManager:objectManager fromFiles:@"users.json", nil];
#endif

// Initialize object store
objectManager.objectStore = [RKManagedObjectStore objectStoreWithStoreFilename:@"RKTwitterData.sqlite"
                                                         usingSeedDatabaseName:RKDefaultSeedDatabaseFileName
                                                            managedObjectModel:nil];
Integration Layers
It is briefly worth noting that RestKit ships with some integration layers to help developers work with some complementary technology. As of this writing, there are two such integration points available:
- RKRailsRouter - A Router implementation aware of Ruby on Rails idioms
- RKRequestTTModel - An implementation of the TTModel protocol for Three20 that allows RestKit object loaders to drive Three20 tables
Ruby on Rails Support
The RKRailsRouter inherits from the RKDynamicRouter introduced in the first tutorial. The Rails router alters the default routing behavior in a couple of ways:
- Allows for registration of server side model names for the purpose of nesting attributes before sending requests
- Prevents any parameter data from being encoded into the request body for DELETE requests
The attribute nesting is understood simply with an example. Imagine that we have a server-side model called ‘Article’, with two attributes ‘title’ and ‘body’. We would configure the Rails router like so:
RKRailsRouter* router = [[RKRailsRouter alloc] init];
[router setModelName:@"article" forClass:[Article class]];
[router routeClass:[Article class] toResourcePath:@"/articles/(articleID)"];
[router routeClass:[Article class] toResourcePath:@"/articles" forMethod:RKRequestMethodPOST];

Article* article = [Article object];
article.title = @"This is the title";
article.body = @"This is the body";

[[RKObjectManager sharedManager] postObject:article delegate:self];
When the object is serialized for the POST request, RestKit will nest the attributes into a hash like so:
article[title]=This is the title& article[body]=This is the body
This matches the format Rails controllers expect attributes to be delivered in. The changes to the DELETE payload are self-explanatory — Rails simply expects the params to be empty during DELETE requests and the Rails router abides.
Three20 Support
At Two Toasters, the vast majority of our iOS applications are built on top of two frameworks: RestKit and Three20. We have found that Three20 greatly simplifies and streamlines a number of common patterns in our iOS applications (such as the support for URL based dispatch) and provides a rich library of UI components and helpers that make us happier, more productive programmers. And RestKit obviously makes working with data so much more pleasant. So it should come as little surprise that there are integration points available between the two frameworks.
Integration between RestKit and Three20 takes the form of an implementation of the TTModel protocol. TTModel defines an interface for abstract data models to inform the Three20 user interface components of their status and provide them with data. TTModel is the basis for all Three20 table view controllers as well as a number of other components. RestKit ships with an optional libRestKitThree20 target that provides an interface for driving Three20 table views off of a RestKit object loader via the RKRequestTTModel class.
RKRequestTTModel allows us to handle all the modeling, parsing, and object mapping with RestKit and then plug our data model directly into Three20 for presentation. RKRequestTTModel also provides transparent offline support and periodic data refresh in our user interfaces. When you have used Core Data to back your data model and utilize RKRequestTTModel in your controllers, RestKit will automatically pull any objects from the cache that live at the resource path you are loading in the event you are offline. RKRequestTTModel can also be configured to hit the network only after a certain amount of time by configuring the refreshRate property.
In addition to RKRequestTTModel, a child class RKRequestFilterableTTModel is provided as well. RKRequestFilterableTTModel provides support for sorting and searching a collection of loaded objects and can be useful for providing client side filtering operations.
Connecting the Dots
The Three20 support lies at the top of a large pyramid of technology and relies on nearly every part of the framework we have discussed so far. The amount of code necessary to see the full benefits of the framework at this level is daunting to include in the body of this text. A full-featured RestKit application leveraging Core Data, Ruby on Rails, and Three20 is available in the Examples/RKDiscussionBoardExample directory. Please take a close look at the Discussion Board example and join the RestKit mailing list. The community is quite active and more than happy to help new users.
Conclusion
We hope that you have found learning about RestKit fun and rewarding. At this point we have reviewed the vast majority of the framework and you should be prepared to utilize RestKit in your next RESTful iOS application. The framework is maturing quickly and iterating rapidly, so please be sure to join the mailing list or follow @RestKit on Twitter to keep up with the latest developments. Happy coding!
Learning More
- RestKit:
- Github:
- API Docs:
- Google Group:
- Brought to you by Two Toasters:
| http://code.tutsplus.com/tutorials/advanced-restkit-development_iphone-sdk--mobile-5916 | CC-MAIN-2014-41 | en | refinedweb |
buildscript {
    repositories {
        mavenCentral()
    }
    dependencies {
        classpath("org.springframework.boot:spring-boot-gradle-plugin:1.0.0.RELEASE")
    }
}

apply plugin: 'java'
apply plugin: 'spring-boot'

jar {
    baseName = 'myproject'
    version = '0.0.1-SNAPSHOT'
}

repositories {
    mavenCentral()
    maven { url "" }
    maven { url "" }
}

dependencies {
    compile("org.springframework.boot:spring-boot-starter-web")
    testCompile("junit:junit")
}
If you are developing features for the CLI and want easy access to the version you just built, follow these extra instructions.
$ gvm install springboot dev /path/to/spring-boot/spring-boot-cli/target/spring-boot-cli-1.0.0.RELEASE-bin/spring-1.0.0.RELEASE/
$ gvm use springboot dev
$ spring --version
Spring CLI v1.0.0
- @PropertySource annotations on your @Configuration classes.
- Application properties outside of your packaged jar (application.properties including YAML and profile variants).
- Application properties packaged inside your jar (application.properties including YAML and profile variants).

You can add JSR-303 javax.validation constraint annotations to your @ConfigurationProperties class:

@Component
@ConfigurationProperties(name="connection")
public class ConnectionSettings {

    @NotNull
    private InetAddress remoteAddress;

    // ... getters and setters

}
AbstractDataSourceConfiguration, Redis, Gemfire, Couchbase and Cassandra. Spring Boot provides auto-configuration for MongoDB; you can make use of the other projects, but you will need to configure them yourself. Refer to the appropriate reference documentation at. org.springframework.beans.factory.annotation.Autowired; import org.springframework.stereotype.Component;,Application @IntegrationTest public class CityRepositoryIntegrationTests { @Autowired CityRepository repository; RestTemplate restTemplate = new TestRestTemplate(); // ... interact with the running server }). And in either case the template will behave
in a friendly way for testing.

public class MyHealthIndicator implements HealthIndicator<String> {

    @Override
    public String health() {
        // perform some specific health check
        return ...;
    }
}
Spring Boot also provides a SimpleHealthIndicator implementation that attempts a simple database test.
If you are using Maven, add the spring-boot-starter-actuator dependency; the endpoints will also be exposed over HTTP. By default “basic” authentication will be used with the username user and a generated password. You can use Spring properties to change the username and password, and to change the management port. Since your management port is often protected by a firewall, and not exposed to the public, you might also want to disable management security:
management.port=8081
management.security.enabled=false

user@localhost's password:

  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::  (v1.0.0.RELEASE) on myhost
Type help for a list of commands. Spring Boot provides metrics, beans, autoconfig and endpoint commands. You can use the shell.auth.simple.username and related shell.auth. properties to customize authentication.
Spring Boot Actuator provides an /error mapping by default that handles all errors in a sensible way. If you want more specific error pages for some conditions, the embedded servlet containers support a uniform Java DSL for customizing the error handling.

This is a breeze with Spring Boot:

git@heroku.com:agile-sierra-1405.git
 * [new branch]      master -> master

That should be it! Your application should be up and running.

Spring Boot (v1.0.0.RELEASE)
Hit TAB to complete. Type 'help' and hit RETURN for help, and 'exit' to quit.
From inside the embedded shell you can run other commands directly:
$ version
Spring CLI v1.0.0
The following configuration options are available for the spring-boot:repackage goal:

<plugin>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-maven-plugin</artifactId>
    <version>1.0.0.RELEASE</version>
</plugin>

You can read from there the available external configuration options. The @ConfigurationProperties annotation has a name attribute which acts as a prefix to external properties; thus ServerProperties has name="server". See “Application events and listeners” in the “Spring Boot features” section for a complete list.
You can use the SpringApplicationBuilder class to create parent/child ApplicationContext hierarchies. See Section 20.2 in the “Spring Boot features” section. These beans can be overridden by providing a bean of the same type, but it’s unlikely you will need to do so.
WebMvcAutoConfiguration and Thymeleaf.
See Section 25.1, “Configure a DataSource” in the
“Spring Boot features” section and the
DataSourceAutoConfiguration
class for more details.
Spring doesn’t require the use of XML to configure the JPA provider, and Spring Boot
assumes you want to take advantage of that feature. If you prefer to use
persistence.xml
then you need to define your own
@Bean of type
LocalEntityManagerFactoryBean, and set
the persistence unit name there.
See
JpaBaseConfiguration
for the default settings.

See Section 32.3, “Customizing the management server port” in the “Production-ready features” section.
If you are using Thymeleaf you can do this by adding an error.html template.
If Spring Security is on the classpath then web applications will be secure by default
(“basic” authentication on all endpoints). To add method-level security to a web application you can simply add @EnableGlobalMethodSecurity with your desired settings.
Additional information can be found in the Spring
Security Reference.
The default
AuthenticationManager has a single user (username “user” and password
random, printed at INFO level when the application starts up). You can change the
password by providing a
security.user.password. This and other useful properties
are externalized via
SecurityProperties. } template= # #.in-memory=true= #: | http://docs.spring.io/spring-boot/docs/1.0.0.RELEASE/reference/htmlsingle/ | CC-MAIN-2014-41 | en | refinedweb |
Using Windows Compute Cluster Server 2003 Job Scheduler
Updated: June 6, 2006
Applies To: Windows Compute Cluster Server 2003
Windows Compute Cluster Server 2003 operations overview
Microsoft® Windows® Compute Cluster Server 2003 brings high-performance computing (HPC) to industry-standard, low-cost servers. Jobs—discrete activities scheduled to be performed on the compute cluster—are the key to Windows Compute Cluster Server 2003 operation. The basic principle of job operation in Windows Compute Cluster Server 2003 relies on three key concepts:
- Admission, or job submission
- Allocation, or the reserving of resources.
- Activation, or job start
These three concepts form the underlying structure of the job life cycle in high-performance computing and are the basis on which Microsoft engineered Windows Compute Cluster Server 2003. Figure 1 illustrates the core relationship between each aspect of job operation. Each time a user prepares a job to run in the compute cluster, the job runs through the three stages.
Figure 1. The HPC job life cycle
To understand how a job operates within a compute cluster, users must understand the components that make up a cluster. Figure 2 illustrates these components and their relationship to one another. In this figure, the dashed line denotes the cluster itself. Several external elements are required in support of the cluster, including Microsoft® Active Directory® directory service. (Clusters must belong to the same Active Directory domain—in this case, the production Active Directory domain.) Other supporting elements may include a licensing server for applications and external data sources as well as user consoles. In this example, cluster nodes include several interconnected links: a public link to access data and the licensing server; a private link to the cluster for intra-cluster communications; and a high-speed, low-latency link for parallel computation execution.
Figure 2. Elements comprising a compute cluster
The cluster itself consists of the head node and compute nodes. The head node is designed to run a job scheduler, add or remove compute nodes, and view job and node status. In other words, the head node manages cluster operations. Compute nodes execute the tasks of which jobs consist.
When a user submits a job to the cluster, the job is recorded in the head node database along with its properties, entered into the execution queue, and then run when the resources it requires become available. Because jobs are submitted in the context of the user and the user's domain, jobs execute using that user’s permissions. As a result, the complexity of using and synchronizing different credentials is eliminated, and the user does not have to use different methods of sharing data or compensate for permission differences among different operating systems. This means that Windows Compute Cluster Server 2003 offers transparent execution, access to data, and integrated security.
Windows Compute Cluster Server 2003 nomenclature
Windows Compute Cluster Server 2003 has a specific nomenclature. Users need to be familiar with this specific terminology to use Windows Compute Cluster Server 2003 effectively.
Cluster
A cluster is the top-level organizational unit of Windows Compute Cluster Server 2003. A cluster consists of the following elements:
- Node. A single computer with one or more processors.
- Queue. An organizational unit that provides queuing and job scheduling. Each cluster contains only one queue, and that queue contains pending, running, and completed jobs. Completed jobs are purged periodically from the queue.
- Job. A collection of tasks that a user initiates. Jobs are used to reserve resources for subsequent use by one or more tasks.
Tasks
A task represents the execution of a program on given compute nodes. A task can be a serial program (single process) or a parallel program with multiple concurrent processes. Figure 3 illustrates common job and task types.
Figure 3. Common job and task types
Compute Cluster Job Scheduler
The Job Scheduler queues jobs and their tasks. It allocates resources to these jobs, initiates the tasks on the compute nodes of the cluster, and monitors the status of jobs, tasks, and compute nodes. Job scheduling is performed through a set of rules called scheduling policies. Figure 4 illustrates the job scheduler stack and its interaction with the job life cycle. The stack consists of three layers, each corresponding to one aspect of the job life cycle:
- The interface layer provides job and task submission, manipulation, and monitoring services accessible through various entry points.
- The scheduling layer provides a decision-making mechanism that balances supply and demand by applying scheduling policies. The workload is distributed among available nodes among the cluster, implementing the concepts of job priorities, equitable sharing, and allocation of resources.
- The execution layer provides the workspace used by tasks. This layer creates and monitors the job execution environment and releases the resources assigned to the task upon task completion. The execution environment supplies the workspace customization for the task, including environment variables, scratch disk settings, security context, and execution integrity, as well as application-specific starting mechanisms and recovery from system interruptions.
Figure 4. The job scheduler stack and its interaction with the job life cycle
Compute Cluster Administrator
The Compute Cluster Administrator is a Microsoft Management Console (MMC) snap-in that allows easy compute cluster administration and deployment. The Compute Cluster Administrator is not typically used when running jobs and tasks in the cluster.
Compute Cluster Job Manager
The Compute Cluster Job Manager is a WIN32 graphical user interface (GUI) that provides access to the Job Scheduler for the creation, submission, and monitoring of jobs in the cluster.
Scheduling policies
Windows Compute Cluster Server 2003 consists of four scheduling policies, which are explained in detail later in this paper:
- Priority-based first come, first served (FCFS)
- Backfilling
- Nonexclusive scheduling
- License-aware scheduling
Each policy type is described here.
Task execution
Tasks operate in either serial or parallel modes. In serial mode, each task runs as a single process and parallelism consists of more than one such task running at the same time. Figure 5 illustrates how task 1 is assigned to the first processor on the first node, then task 2 is assigned to the second processor, task 3 moves to the first processor of the second node, and so on.
In parallel mode, a single task runs on multiple processors. Figure 6 illustrates a task running in parallel mode. Parallel tasks typically call upon Microsoft® Message Passing Interface (MPI) software (called MS MPI) through the executable mpiexec, which is installed on each compute node. Task processes are then started by the node-specific MS MPI Service. Each node can run only one instance of the service, but parallel tasks can call on several nodes to start MS MPI Services. The MS MPI Service on each node, in turn, executes the processes that make up the task. Windows File Sharing supports client-side caching, so applications have to be loaded only once. The application will reside on the compute node’s local disk after it has been loaded, which will speed processing. This configuration must be made on the file server to work and can be done when creating the file share.
The scheduler keeps track of each task and job through task and job ID numbers. It relies on these ID numbers to display task and job status information to Windows Compute Cluster Server 2003 users.
Figure 5. Serial task execution
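The serial placement order described above can be sketched as a short simulation. This is purely illustrative (the helper function and node names are hypothetical; the real Job Scheduler is not implemented this way):

```python
# Illustrative sketch of serial task placement: each serial task occupies
# one processor, filling every processor of a node before moving on to
# the next node, as in Figure 5.
def place_serial_tasks(num_tasks, nodes):
    """nodes: list of (node_name, processor_count) tuples.
    Returns (task_number, node_name, processor_index) assignments."""
    slots = [(name, cpu) for name, count in nodes for cpu in range(count)]
    return [(t + 1, *slots[t]) for t in range(min(num_tasks, len(slots)))]

# Two dual-processor nodes: tasks 1-2 fill node1, tasks 3-4 fill node2.
assignments = place_serial_tasks(4, [("node1", 2), ("node2", 2)])
```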
Figure 6. Parallel task execution
Admission
Several interfaces are available for job submission:
- Compute Cluster Job Manager
- the command-line interface (CLI)
- a COM interface (CCPAPI) for integration with C or C++ custom interfaces and for scripting support
The CLI also supports a variety of scripting languages, including the Perl, C/C++, C#, and Java™ languages.
The Compute Cluster Job Manager includes powerful features for job and task management, and each feature has a corresponding equivalent in the CLI. Features are controlled by the Job Scheduler and include error recovery (that is, automatically retrying failed jobs or tasks and identifying unresponsive nodes); automated cleanup of jobs after they are complete to avoid “runaway” processes on the compute nodes; and security mechanisms (that is, each job runs in the user’s security context, limiting job and task access rights to those of the user initiating them).
As stated earlier, a job is a resource request that contains one or more tasks to be run within the cluster. Each task that makes up the job may in turn be either serial or parallel or a combination of both. An example of a job executing several serial tasks in parallel is a parametric sweep. Parametric sweeps consist of multiple iterations of the same executable that are run concurrently but use different input and output files. There is typically no communication between the tasks, and parallelism is achieved by the scheduler running multiple instances of the same application at the same time.
Users submit jobs to the system, and each job runs with the credentials of the user submitting the job. Jobs can be assigned priorities upon submission. By default, users submit jobs with the Normal priority. If the job needs to run sooner, cluster administrators can assign a higher priority to the job, such as AboveNormal or Highest. Because there is only one job queue, jobs with the highest priority tend to run first.
Jobs also consume resources—nodes, processors, and time—and these resources can be reserved for each specific job. Although one might assume that the best way to execute a job is to reserve the fastest and most powerful resources in the cluster, in fact, the opposite tends to be true. If a job is set to require high-powered resources, it may have to wait for those resources to be free so that the job can run. Jobs that require fewer resources with shorter time limits tend to be executed more quickly, especially if they can fit into the backfill windows (that is, the idle time available to resources) that the Job Scheduler has identified.
Another factor that affects execution time is node exclusivity. By default, jobs require node exclusivity. However, if jobs are defined to require nonexclusive node use, they have faster access to resources because resources can be shared with other jobs. (Any idle resource that can be shared can run other jobs as soon as it is available.)
Creating jobs
You create a job by first specifying the job properties, including priority, the run time limit, the number of required processors, requested nodes, and node exclusivity. After defining the job properties, you can assign tasks to the job. Each task must include the command-line commands to be executed; input, output, and error files to be used; as well as properties similar to those of the job in terms of requested nodes, required processors, the run time limit, and node exclusivity. Tasks also include dependency information, which dictates a specific order in which tasks must run.
You can use either the Compute Cluster Job Manager or the CLI to create jobs. In the Compute Cluster Job Manager, select File > Submit…. On the General tab of the resulting dialog box, enter job details such as the name, the project name (if appropriate), and the name of the submitter. Then select the priority and switch to the Processors tab to identify the minimum and maximum number of processors for the job. This tab also allows you to set the run time duration. Next, move to the Tasks tab to create the tasks associated with the job.
To create a job through the CLI, type the following command:
job new [standard_job_options] [/jobfile:<job_file>] [/scheduler:<host>]
where job file is an optional template file containing previously set options and host is the name of the head node of the cluster that the job will run on. Note that job files can easily be created in the Compute Cluster Job Manager by saving jobs as templates. Table 1 lists the standard properties available with the job command. Note that when dealing with job priorities, users have access only to Normal, BelowNormal, and Lowest, while administrators have access to all the available priorities. If users need to have their jobs run sooner than Normal priority would allow, they must ask a cluster administrator to increase the priority of their jobs.
Table 1. Windows Compute Cluster Server 2003 Job Properties
Adding tasks to a job
Tasks are the discrete commands that jobs execute. You specify tasks through the Tasks tab on the Job Properties sheet. Each task is named, but task names do not have to be unique within a job. Tasks consist of executables that run on cluster resources, so when you create a task, you must enter a command-line command to tell the system which executable to run. If the task uses a Microsoft MPI executable, the task command must be preceded by mpiexec. Tasks can run executables directly or can consist of batch files performing multiple activities.
To create a task, select the Tasks tab of the Job Properties sheet. Then enter the command line required to execute the task and click Add Task. The task appears in the Task window, where you can further refine it with elements such as estimated number of processors and run time duration.
Click Edit on the Tasks tab to open the Task Properties sheet, where a new set of tabs appears: Task, Task Dependencies, Environment, and Advanced. The Task tab is designed for entering input and output files, estimated number of processors and run time duration. The Task Dependencies tab allows dependencies to be set between tasks and is mostly used for serial tasks. The Environment tab is designed for the integration of environment variables to the task. The Advanced tab supports the addition of information such as working folder and the nodes required to run the task, restarting the task if the task fails, and setting a checkpoint on the task. All these values are optional to the task.
Repeat the procedure for each task included in the job. Figures 7 and 8 show the interface for adding a task and the added task, respectively.
Figure 7. Adding a single task
Figure 8. The added task
To create a task through the CLI, you type the following command:
job add <jobID> [standard_task_options] [/taskfile:<template_file>] <command> [arguments]
where jobID is the number of the job and command is the task command line. Table 2 lists the standard task options.
Table 2. Windows Compute Cluster Server 2003 Task Properties
One key factor in admission is determining how the tasks will access the data required for them to run. The way in which this determination is made depends on the amount of data and the frequency of changes to the data. If a data set is stable, does not change often, and is relatively large, it can be stored locally. If the data set is small, it can be accessed through a file share, and compute nodes can simply access it from the shared folder. If the data set is large and changes often, a file transfer will be necessary to place the data on the nodes. If you are using small and medium data set sizes, you will have the best out-of-box experience by specifying the working directory. When the task starts, compute nodes see all the files in this working directory and can properly handle the task.
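As a rough illustration of this guidance, the decision can be written down as a tiny helper. The size threshold and return labels are made up for the example; the article does not define exact cut-offs:

```python
# Illustrative data-staging decision mirroring the three options above:
# small data -> read it from a file share; large but stable -> store it
# locally; large and frequently changing -> transfer it to the nodes.
def data_strategy(size_mb, changes_often):
    if size_mb < 100:          # "small" threshold is hypothetical
        return "share"
    if not changes_often:
        return "local"
    return "transfer"

assert data_strategy(10, changes_often=True) == "share"
assert data_strategy(5000, changes_often=False) == "local"
assert data_strategy(5000, changes_often=True) == "transfer"
```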
Jobs that must work with parallel tasks through MPI require the use of the mpiexec command, so commands for parallel tasks must be in the following format:
mpiexec [mpi_options] <myapp.exe> [arguments]
where myapp.exe is the name of the application to run. MPI options include standard options supported by the Argonne MPICH2 distribution as well as extensions that have been added to MS MPI. You can submit MPI jobs either through the Job Manager or the command line. In practice, mpiexec options rarely need to be used, since most are set indirectly by the job and task options.
Figure 9 shows how you can quickly build a series of parametric tasks. Through this dialog box, you can generate a series of task command lines consisting of multiple iterations of the same command, automatically incrementing the file extensions of input, output, and error files that the program generates. The increment can be of any size.
Figure 9. Adding a parametric sweep task
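The command lines such a sweep generates can be sketched as follows. Program and file names here are hypothetical; the dialog box produces the equivalent list for you:

```python
# Illustrative sketch of a parametric sweep: the same executable is run
# once per iteration, with the numeric extensions of the input and
# output files incremented each time.
def parametric_sweep(command, start, end, step=1):
    return [f"{command} < input.{i} > output.{i}"
            for i in range(start, end + 1, step)]

lines = parametric_sweep("myapp.exe", 1, 3)
```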
In addition to the job and task commands in the CLI, you can call on the cluscfg command, which provides access to information about the cluster itself. A complete list of the options available for the job, task, and cluscfg commands appears in Tables 3 through 5.
Table 3. Job-related CLI Commands
Table 4. Task-related CLI Commands
Table 5. Cluster-related CLI Commands
Working with templates and submitting jobs
Jobs and tasks can be saved as templates by clicking Save as Template in the Job and Task Properties sheets. Template files are saved in Extensible Markup Language (XML) format and can therefore be edited in a text editor after they are created. It is good practice to save any recurring job or task as a template. Not only do the templates facilitate job or task recreation, they also support job and task submission through the command prompt window.
Job submission means placing items into the job queue. You can create and submit jobs interactively or from a job template. To create and submit a job interactively, go through each tab on the Job Properties sheet to ensure that the settings are correct and that the tasks you want are specified, and then click Submit from any tab. To submit jobs from templates, select File > Submit with Template…. From there, select the appropriate template, modify its properties (if necessary), and click Submit from any tab.
You can also submit jobs through simple command-line commands—for example, the command:
job submit /numprocessors:8 mpiexec mympitask.exe
where mympitask.exe is the name of the submitted application; this command both creates and submits an MPI job that requires eight processors.
Using the Job Scheduler C# API: Compute Cluster Pack Application Programming Interface (CCPAPI)
CCPAPI provides access to the Windows Compute Cluster Server 2003 Job Scheduler. By writing applications or scripts using these interfaces, you can connect to a cluster and manage jobs, job resources, tasks, nodes, environment variables, extended job terms, and more.
In its simplest terms, using CCPAPI is a five-step process.
- Connect to the cluster
- Create a job
- Create a task
- Add the task to the job
- Submit the job for execution
To connect to the cluster, use the ICluster::Connect method. Create a job using the ICluster::CreateJob method. To create a task, use the ICluster::CreateTask method. To add a child task to a job, use the ICluster::AddTask method. Finally, submit the job using ICluster::SubmitJob().
The sample code below shows an example of how the CCPAPI can be used, following the five-step process outlined above.
using System;
using System.Collections.Generic;
using System.Text;
using Microsoft.ComputeCluster;
namespace SerialJobSub
{
class Program
{
static int Main(string[] args)
{
    Cluster cluster = new Cluster();
    try
    {
        // Step 1: connect to the cluster through its head node.
        cluster.Connect("myheadnode");

        // Step 2: create a job.
        IJob job = cluster.CreateJob();

        // Step 3: create a task that runs a serial program.
        ITask task = new Task();
        task.CommandLine = @"c:\myprog.exe arg1 arg2";
        task.Stdout = @"c:\pi.out";

        // Step 4: add the task to the job.
        job.AddTask(task);

        // Step 5: add the job to the queue and submit it for execution.
        int jobId = cluster.AddJob(job);
        cluster.SubmitJob(jobId, @"mydomain\myuserid", null, true, 0);

        Console.WriteLine("Job " + jobId + " submitted successfully");
        return 0;
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex.Message);
        return 1;
    }
}
}
}
The Job Scheduler also has a COM interface. For details on how to use the COM API, see the Microsoft Compute Cluster Pack on the Microsoft Web site ().
Allocation
Resource allocation—together with job ordering within the queue—is controlled through scheduling policies. Windows Compute Cluster Server 2003 supports four policies, with each focusing on a specific scheduling issue:
Priority-based first come, first served (FCFS) — This scheduling policy combines FCFS and other priorities to run jobs. Jobs are placed in higher or lower priority groups when scheduled, based on the priority setting of the job itself, but when a job is placed within a group, it is always placed at the end of the queue.
Backfilling. — Backfilling maximizes node utilization by allowing a smaller job or jobs lower in the queue to run before a job waiting at the top of the queue, as long as the job at the top is not delayed as a result.
When a job reaches the top of the queue, a sufficient number of nodes may not be available to meet its minimum processors requirement. When this happens, the job reserves any nodes that are immediately available and waits for the job that is currently running to complete. Backfilling then utilizes the reserved idle nodes. Refer to Role of the Job Scheduler in the product documentation for more details.
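The backfill decision reduces to a simple feasibility test, sketched below. This is an illustration of the idea only, not the scheduler's actual code:

```python
# Illustrative backfill test: a job lower in the queue may run in the
# idle window only if it needs no more processors than are currently
# idle and its run-time limit fits before the head-of-queue job is due
# to start, so the waiting job is never delayed.
def can_backfill(job_procs, job_runtime_min, idle_procs, window_min):
    return job_procs <= idle_procs and job_runtime_min <= window_min

assert can_backfill(2, 30, idle_procs=4, window_min=60)        # fits
assert not can_backfill(8, 30, idle_procs=4, window_min=60)    # too wide
assert not can_backfill(2, 90, idle_procs=4, window_min=60)    # too long
```

This is also why jobs with finite, realistic run-time limits backfill far better than jobs left at the default Infinite limit.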
Nonexclusive scheduling. — By default, a job will use the allocated nodes exclusively. Nonexclusive schedules do not exclusively reserve resources for execution, so resources that these jobs use can be shared. Shared resources can be consumed at either the job or the task level. By default, jobs run in Exclusive mode while tasks run in Shared mode.
License-aware scheduling. — Windows Compute Cluster Server 2003 can manage job schedules through special license filters that verify licensing requirements for each job. If license requirements are not met, the job fails. This method helps ensure job compliance. License-aware scheduling is implemented through an admission filter.
These policies should be used appropriately when allocating resources for jobs. You can perform allocation through any one of the job scheduler interfaces, and the allocation becomes part of the job properties.
The default resource allocation strategy processes the following terms (see Table 1 for descriptions):
- numprocessors
- askednodes
- nonexclusive
- exclusive
- requirednodes
The scheduler sorts the candidate nodes using the “fastest nodes with the largest memory first” criterion—that is, it sorts the nodes according to memory size first, and then sorts the nodes by their speed. This behavior is the default for all the resource allocation strategies. Next, the scheduler allocates the CPUs from the sorted nodes to satisfy the minimum and maximum processor requirements for the job. The scheduler attempts to satisfy the maximum number of CPUs for the job before considering another job.
If the askednodes term is specified, the scheduler does not sort the nodes from the complete node list but from a sublist of requested nodes.
If the task term requirednodes is used, other directives are overridden and all the required nodes are reserved by the job.
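The default node ordering and CPU allocation can be sketched as follows. Field names and the node list are hypothetical; the point is the "largest memory first, then fastest" sort the text describes:

```python
# Illustrative sketch of default allocation: sort candidate nodes by
# memory size first, then by CPU speed, and take processors from the
# top of the list until the job's maximum processor request is met.
def allocate(nodes, max_procs):
    """nodes: list of (name, memory_mb, speed_mhz, processors)."""
    ordered = sorted(nodes, key=lambda n: (n[1], n[2]), reverse=True)
    taken, remaining = [], max_procs
    for name, _mem, _speed, procs in ordered:
        if remaining <= 0:
            break
        use = min(procs, remaining)
        taken.append((name, use))
        remaining -= use
    return taken

nodes = [("a", 2048, 2000, 2), ("b", 4096, 1500, 2), ("c", 4096, 3000, 2)]
# "c" is chosen first (most memory, then fastest), then "b", then "a".
```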
You will get the maximum benefit from the cluster by properly configuring your jobs—such configuration requires a balance between the allocation of resources and the requirements of each job. In particular, administrators must look to the optimal use of backfill windows to achieve the greatest result with their cluster implementations. To facilitate the optimal use of resources when working with Windows Compute Cluster Server 2003, you should follow these guidelines when creating, submitting, and running jobs:
- Do not use the default Infinite run time when submitting jobs. Instead, define a run time that is an estimate of how long the job should actually take.
- Reserve specific nodes only if the job requires the special resources available on those nodes.
- When specifying the ideal number of processors for a job, set the number as a maximum, and then set the lowest acceptable number as the minimum. If the ideal number is specified as a minimum, the job may have to wait longer for the processors to be freed so it can run.
- Reserve the appropriate number of nodes for each task to run. If two nodes are reserved and the task requires only one, the task has to wait until the two reserved nodes are free before it can run.
These guidelines will help you make the greatest use of the resources available in any Windows Compute Cluster Server 2003 cluster.
Activation
Activation consists of actually starting jobs. Jobs run in the security context of the submitting user, which limits the possibility of runaway processes. Jobs can also be automatically requeued upon failure. Jobs are managed through their state transition. Job states are illustrated in Figure 10.
Figure 10. Job state transition
Like jobs, tasks and nodes also have life cycles represented by status flags displayed in the Windows Compute Cluster Server 2003 GUIs. The status flags for jobs and tasks are identical, while nodes have unique status flags. Job and task status flags are listed in Table 6. Node status flags are listed in Table 7.
Table 6. Windows Compute Cluster Server 2003 Job and Task Status Flags
Table 7. Windows Compute Cluster Server 2003 Node Status Flags
Controlling jobs through filters
Windows Compute Cluster Server 2003 includes several comprehensive features for job submission and activation. Jobs can consist of simple tasks or can include comprehensive feature sets. In terms of activation, administrators can control jobs through activation filters that run the job through a set of conditions before the job actually begins. Two types of filters can be implemented:
- Submission filters
- Activation filters
Administrators can use these filter sets together to verify that jobs meet specific requirements before they pass through the system. Examples of the conditions for these filters include:
Project validation. This condition verifies that the project name is that of a valid project and that the user is a member of the project.
Mandatory policy. This condition ensures that run times are not set to Infinite, which could negatively affect the performance of the cluster, and that the job does not exceed the user’s resource allocation limit.
Usage time. Usage time ensures that the user’s time allocations are not exceeded. Unlike the mandatory policy, this filter limits jobs to the overall time allocations users have for all possible jobs.
In addition, activation filters can ensure that jobs meet licensing conditions before they run. Filters are a powerful job-control feature that should be part of every Windows Compute Cluster Server 2003 implementation. The Job Scheduler invokes the filters by parsing the job file, which contains all the job properties. The exit code of the filter program tells the Job Scheduler what to do. Three types of values are possible:
- 0: it is okay to submit the job without changing its terms
- 1: it is okay to submit the job with changed terms
- Any other value will cause the job to be rejected.
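A minimal filter can therefore be sketched as a program that parses the job file and exits with the appropriate code. The sketch below uses Python and a hypothetical Runtime attribute name; a real filter would parse the actual Windows Compute Cluster Server 2003 job XML schema:

```python
# Illustrative submission filter: reject any job whose run time is left
# at Infinite (the mandatory-policy example above). Exit-code contract:
# 0 = accept unchanged, 1 = accept with changed terms, other = reject.
import xml.etree.ElementTree as ET

def submission_filter(job_xml):
    job = ET.fromstring(job_xml)
    if job.get("Runtime", "Infinite") == "Infinite":
        return 3  # reject: Infinite run times violate the policy
    return 0      # accept the job without changing its terms

assert submission_filter('<Job Runtime="Infinite"/>') == 3
assert submission_filter('<Job Runtime="0:10:0"/>') == 0
```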
Security considerations for jobs and tasks
Windows Compute Cluster Server 2003 uses standard Windows mechanisms to manage security within the cluster context. Jobs are executed with the submitting user’s credentials. These credentials are stored in encrypted format on the local computer by the Job Manager, and only the head node has access to the decryption key.
The first time you submit a job, the system prompts for credentials in the form of a user name and password. At this point, the credentials can be stored in the credential cache of the submission computer. During transmission, credentials are encrypted using the Microsoft .NET Remoting channel; upon receipt at the scheduler, credentials are encrypted for storage with the Windows Data Protection application programming interface (DPAPI) and stored in the job database. When a job runs, the head node decrypts the credentials and uses another .NET Remoting channel to pass them along to compute nodes, where they are used to create a token, and then erased. All tasks are performed using this token, which does not include the explicit credentials. When jobs are complete, the head node deletes the credentials from the job database.
When the same user submits the same job for execution again, no credentials are requested because they are already cached on the local computer that is running the Job Manager. This feature simplifies resubmission and provides 256-bit AES (Rijndael) credential encryption throughout the job life cycle (see Figure 11).
Figure 11. The end-to-end Windows Compute Cluster Server 2003 security mode
Conclusion
Windows Compute Cluster Server 2003 makes HPC much more affordable. The system includes powerful job-control features that users access through the familiar Windows graphical environment, including flexible admission controls, simple and effective scheduling policies, and a reliable execution mechanism that incorporates standard Windows security features. The result is a workload management system which delivers resources to the end users in the way that maximizes the corporation’s business and productivity goals. ()
Deploying and Managing Microsoft Windows Compute Cluster Server 2003 ()
Using Microsoft Message Passing Interface
()
Migrating Parallel Applications
()
Debugging Parallel Applications with Visual Studio 2005 ()
For the latest information about Windows Compute Cluster Server 2003, see the Microsoft High-Performance Computing Web site () | http://technet.microsoft.com/en-us/library/cc720125(v=ws.10).aspx | CC-MAIN-2014-41 | en | refinedweb |
600+ question and answer eBook which covers .NET, ASP.NET, SQL Server, WCF, WPF, WWF, Silverlight, Azure click here.
We will create a simple customer entity with customer code and customer name and add the same to Azure tables and display the same on a web role application.
In case you are a complete fresher to Azure, please ensure you have all the prerequisites the namespaces in our code as shown below. Currently we will store a customer record with customer code and customer name in tables. So for that, we need to define a simple customer class with ‘clsCustomer’. This class needs to inherit from ‘TableS ‘clsCustomerDataContext’.
clsCustomer
TableServiceEntity.
Customers
The next step is to create your data context class which will insert the customer entity into Azure table storage. Below is the code snippet of the data context class.The first noticeable thing is the constructor which takes in location of the credentials. The second is the ‘Iqueryable’ interface which is used by the cloud service to create tables in Azure cloud service. next step is to create the table on the ‘onstart’ of the web role.
So open ‘webrole.cs’ file and put the below code on the ‘onstart’ event. The last code enclosed in curly brackets gets the configuration and creates the table's structure.
onstart
CloudStorageAccount.SetConfigurationSettingPublisher(
(configName, configSettingPublisher) =>
{
var connectionString =
RoleEnvironment.GetConfigurationSettingValue(configName);
configSettingPublisher(connectionString);
}
);
General News Suggestion Question Bug Answer Joke Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages. | http://www.codeproject.com/Articles/51535/Simple-Steps-to-Run-Your-First-Azure-Table-Pro?msg=4382388 | CC-MAIN-2014-41 | en | refinedweb |
Caching is an essential mechanism in providing efficient usage of resources in many systems. Data management using JDO is no different and provides a definition of caching at 2 levels. Caching allows objects to be retained and returned rapidly without having to make an extra call to the datastore. The 2 levels of caching available with DataNucleus are
You can think of a cache as a Map, with values referred to by keys. In the case of JDO, the key is the object identity (identity is unique in JDO).
By default the Level 2 Cache is enabled. The user can configure the Level 2 Cache if they so wish. This is controlled by use of the persistence property datanucleus.cache.level2.type. You set this to "type" of cache required. With the Level 2 Persistence.
Note that you can have a PMF with L2 caching enabled yet have a PM with it disabled. This is achieved by creating the PM as you would normally, and then call
pm.setProperty("datanucleus.cache.level2.type", "none");
The majority of times when using a JDO-enabled system you will not have to take control over any aspect of the caching other than specification of whether to use a Level 2 Cache or not. With JDO and DataNucleus you have the ability to control which objects remain in the cache. This is available via a method on the PersistenceManagerFactory.
PersistenceManagerFactory pmf = JDOHelper.getPersistenceManagerFactory(props); DataStoreCache cache = pmf.getDataStoreCache();
The DataStoreCache interface
provides methods to control the retention of objects in the cache. You have 3 groups of methods
These methods can be called to pin objects into the cache that will be much used. Clearly this will be very much application dependent, but it provides a mechanism for users to exploit the caching features of JDO. If an object is not "pinned" into the L2 cache then it can typically be garbage collected at any time, so you should utilise the pinning capability for objects that you wish to retain access to during your application lifetime. For example, if you have an object that you want to be found from the cache you can do
PersistenceManagerFactory pmf = JDOHelper.getPersistenceManagerFactory(props); DataStoreCache cache = pmf.getDataStoreCache(); cache.pinAll(MyClass.class, false); // Pin all objects of type MyClass from now on PersistenceManager pm = pmf.getPersistenceManager(); Transaction tx = pm.currentTransaction(); try { tx.begin(); pm.makePersistent(myObject); // "myObject" will now be pinned since we are pinning all objects of type MyClass. tx.commit(); } finally { if (tx.isActive()) { tx.close(); } }
Thereafter, whenever something refers to myObject, it will find it in the Level 2 cache. To turn this behaviour off, the user can either unpin it or evict it.
JDO allows control over which classes are put into a Level 2 cache. You do this by specifying the cacheable attribute to false (defaults to true). So with the following specification, no objects of type MyClass will be put in the L2 cache.
Using XML: <class name="MyClass" cacheable="false"> ... </class> Using Annotations: @Cacheable("false") public class MyClass { ... }
JDO allows you control over which fields of an object are put in the Level 2 cache. You do this by specifying the cacheable attribute to false (defaults to true). This setting is only required for fields that are relationships to other persistable objects. Like this
Using XML: <class name="MyClass"> <field name="values"/> <field name="elements" cacheable="false"/> ... </class> Using Annotations: public class MyClass { ... Collection values; @Cacheable( has an extension in metadata allowing the user to define that all instances of a class are automatically pinned in the Level 2 cache.
@PersistenceCapable @Extension(vendorName="datanucleus", key="cache-pin", value="true") public class MyClass { ... }). As mentioned earlier, this cache does not support the pin/unpin operations found in the standard JDO interface. However you do have the benefits of Oracle's distributed/serialized caching. If you require more control over the Coherence cache whilst using it with DataNucleus, you can just access the cache directly via
DataStoreCache cache = pmf.getDataStoreCache(); NamedCache coherenceCache = ((CoherenceLevel2Cache)cache).getCoherence> | http://www.datanucleus.org/products/accessplatform_3_3/jdo/cache.html | CC-MAIN-2014-41 | en | refinedweb |
The Sisu project provides a dynamic dependency injection framework that can interact with OSGi services and Eclipse extensions.
The project is currently divided into three codebases:
Sisu artifacts will be made available in the Eclipse download area and published to the Central Repository. The project also plans to provide a P2 update site using Eclipse/Tycho.
Milestones will be published as committer time allows.
Sisu consists of pure Java code and is expected to run on any JVM that supports Java SE 5 or newer.
None of the Sisu deliverables are internationalized, any log and exception messages use English.
API Contract Compatibility:
To comply with Eclipse Foundation requirements, all Sisu Java types/packages will be moved into the
org.eclipse.sisu namespace. An external compatibility wrapper is available for users of Sonatype Sisu.
Source Compatibility:
To comply with Eclipse Foundation requirements, all Sisu Java types/packages will be moved into the
org.eclipse.sisu namespace. While the API will undergo some refactoring and cleanup during the move to Eclipse, only clients that use the Sisu extensions to JSR330 will need to update their imports and make minor code changes to successfully build against the new API.
The refactoring of code into the
org.eclipse.sisu namespace provides an opportunity to remove deprecated code and clean up the API to ease future evolution and to improve usability.
Back to the top | http://www.eclipse.org/projects/project-plan.php?planurl=/sisu/project-info/plan.xml | CC-MAIN-2014-41 | en | refinedweb |
This chapter describes the benefits and use of Statement caching, an Oracle Java Database Connectivity (JDBC) extension.
This chapter contains the following sections:
Reusing Statements Objects.. reinitialized and reset to default values, while metadata is saved. Statements are removed from the cache to conform to the maximum size using
There are two ways to enable implicit Statement caching. The first method enables Statement caching on a nonpooled physical connection, where you need to explicitly specify the Statement size for every connection, using the
setStatementCacheSize method. The second method enables Statement caching on a pooled logical connection. Each connection in the pool has its own Statement cache with the same maximum size that can be specified by setting the
MaxStatementsLimit property.
Method 1
Perform the following steps:
Call the
OracleDataSource.setImplicitCachingEnabled(true) method on the connection to set the
OracleDataSource property
implicitCachingEnabled to
true. For example:
OracleDataSource ods = new OracleDataSource(); ... ods.setImplicitCachingEnabled(true); ...
Call the
OracleConnection.setStatementCacheSize method on the physical connection. The argument you supply is the maximum number of statements in the cache. For example, the following code specifies a cache size of ten statements:
((OracleConnection)conn).setStatementCacheSize(10);
Method 2
Perform the following steps:
Set the
OracleDataSource properties
implicitCachingEnabled and
connectionCachingEnabled to
true. For example:
OracleDataSource ods = new OracleDataSource(); ... ods.setConnectionCachingEnabled( true ); ods.setImplicitCachingEnabled( true ); ...
Set the
MaxStatementsLimit property to a positive integer on the connection cache, when using the connection cache. For example:
Properties cacheProps = new Properties(); ... cacheProps.put( "MaxStatementsLimit", "50" ); Statement and assure that it is not returned to the cache:
In J2SE 5.0
Disable caching for that statement
stmt.setDisableStatementCaching(true);
Call the
close method of the statement object
stmt.close();
In JSE 6.0
stmt.setPoolable(false); stmt.close();
Physically Closing a Cached Statement algorithm typically. ();
Note:If you are using JSE 6, then you can disable Statement caching by using the standard JDBC 4.0 method
setPoolable:
PreparedStatement.setPoolable(false);
Use the following to check whether the
Statement object is poolable:
Statement.isPoolable(); 20-2 describes the methods used to allocate statements and retrieve implicitly cached statements.
Example 20-1 provides a sample code that shows how to enable implicit statement caching.
Example 20-1 Using Implicit Statement Cache
import java.sql.PreparedStatement; import java.sql.ResultSet; import java.sql.SQLException; import java.util.Properties; import javax.sql.DataSource; import oracle.jdbc.OracleConnection; import oracle.jdbc.pool.OracleDataSource; public class TestJdbc { /** * Get a Connection, prepare a statement, execute a query, fetch the results, close the connection. * @param ods the DataSource used to get the connection. */ private static void doSQL( DataSource ods ) throws SQLException { final String SQL = "select username from all_users"; OracleConnection conn = null; PreparedStatement ps = null; ResultSet rs = null; try { conn = (OracleConnection) ods.getConnection(); System.out.println( "Connection:" + conn ); System.out.println( "Connection getImplicitCachingEnabled:" + conn.getImplicitCachingEnabled() ); System.out.println( "Connection getStatementCacheSize:" + conn.getStatementCacheSize() ); ps = conn.prepareStatement( SQL ); System.out.println( "PreparedStatement:" + ps ); rs = ps.executeQuery(); while ( rs.next() ) { String owner = rs.getString( 1 ); System.out.println( owner ); } } finally { if ( rs != null ) { rs.close(); } if ( ps != null ) { ps.close(); conn.close(); } } } public static void main( String[] args ) { try { OracleDataSource ods = new OracleDataSource(); ods.setDriverType( "thin" ); ods.setServerName( "localhost" ); ods.setPortNumber( 1521 ); ods.setServiceName( "orcl" ); ods.setUser( "scott" ); ods.setPassword( "tiger" ); ods.setConnectionCachingEnabled( true ); ods.setImplicitCachingEnabled( true ); Properties cacheProps = new Properties(); cacheProps.put( "InitialLimit", "1" ); cacheProps.put( "MinLimit", "1" ); cacheProps.put( "MaxLimit", "5" ); cacheProps.put( "MaxStatementsLimit", "50" ); ods.setConnectionCacheProperties( cacheProps ); System.out.println( "DataSource getImplicitCachingEnabled: " + ods.getImplicitCachingEnabled() ); for ( int i = 0; i < 5; i++ ) { doSQL( ods ); } } catch ( Exception ex ) { ex.printStackTrace(); } } } typically.
Note:The Oracle JDBC Drivers use implicit statement caching to support statement pooling.:
Note:
The server-side and client result set caches are most useful for read-only or read-mostly data. They may reduce performance for queries with highly dynamic results.
Both server-side and client result set caches use memory. So, caching very large result sets can cause performance problems.
Support for server-side Result Set caching has been introduced for both JDBC Thin and JDBC Oracle Call Interface (OCI) drivers since Oracle Database 11g Release 1 (11.1).
Since Oracle Database 11g Release 1 (11.1), support for client result cache has been introduced for JDBC OCI driver. The client result cache improves performance of applications by caching query result sets in a way that subsequent query executions can access the cached result set without fetching rows from the server. This eliminates many round-trips to the server for cached results and reduces CPU usage on the server. The client cache transparently keeps the result set consistent with any session state or database changes that can affect its cached result sets. This allows significant improvements in response time for frequent client SQL query executions and for fetching rows. The scalability on the server is increased since it expends less CPU time.
See Also:Client Result Cache | http://docs.oracle.com/cd/E18283_01/java.112/e16548/stmtcach.htm | CC-MAIN-2014-41 | en | refinedweb |
Skip navigation links
java.lang.Object
oracle.irm.engine.content.crypto.CryptoSchemaOperationsInstance
public final class CryptoSchemaOperationsInstance
Operations for obtaining cryptography schemas.
This class provides static methods for a set of procedural style methods. The methods can be made to appear as global methods by using import static. e.g.
import static oracle.irm.engine.content.crypto.CryptoSchemaOperationsInstance.*;
public static Collection<CryptoSchema> getCryptoSchemas()
public static.
public static CryptoSchema getCryptoSchema(String id) throws UnknownCryptoSchemaException
id- the schema identity.
UnknownCryptoSchemaException- if the cryptography schema with the supplied id is not known.
public static | http://docs.oracle.com/cd/E28280_01/apirefs.1111/e12907/oracle/irm/engine/content/crypto/CryptoSchemaOperationsInstance.html | CC-MAIN-2014-41 | en | refinedweb |
I need to make a program that calculates the area of a circle but I have to use an int, char, and float. No matter what I do though I can't get the second and third variables to output meaningful values, only the first. This is what I have written down so far.
I'm not sure what to do and I've been working on this for hours:( Thanks for any help or advice!I'm not sure what to do and I've been working on this for hours:( Thanks for any help or advice!Code:
#include <stdio.h>
#include "conio.h"
#define PI 3.14159
int main()
{
/* The following is a program to demonstrate several different c numerical input funtions:interger, float, character.*/
float radius1, area_float; //declares the radius that will be used for float
int radius, area_int;
char radius3, area_char;
printf("Input radius for float, please? "); // Promts the user to enter a value for float
scanf("%f", &radius1 ); // obtains the value from the float promt
area_float = PI * radius1 * radius1; //Equation for area of a cricle
printf("area1 = %f\n", area_float); //Prints the answer for float
//Forces the command console to pause because c is used.
printf("Input radius for int value, please? ");
area_int = PI * radius * radius;
scanf("%d", &radius );
printf("area2 =%d\n", area_int);
_getch();
printf("Input radius for char value, please? ");
scanf("%c", &radius3);
area_char= PI * radius3*radius3;
printf("area3 =%c2\n", area_char);
_getch();
return 0;
} | http://cboard.cprogramming.com/c-programming/138558-multivariable-problem-seems-so-simple-but-i-cant-figure-out-printable-thread.html | CC-MAIN-2014-41 | en | refinedweb |
ok, i'm new to c++ and trying to learn it on my own. i've been doing not that bad...until i reached the point of classes. i copied the code from "teach yourself c++ in 21 days" and decided to augment it to ask the user for a certain piece of input. here is the augmented code:
i hope i did that right. any road....when i run it, well here is a sample output:i hope i did that right. any road....when i run it, well here is a sample output:Code:#include <cstdlib> #include <iostream> class cat { public: int itsAge; int itsWeight; }; using namespace std; int main(int argc, char *argv[]) { int x; cat frisky; frisky.itsAge = x; cout << "How old is frisky?\n"; cin >> x; cout << "Frisky is a cat who is "; cout << frisky.itsAge << " years old.\n"; system("PAUSE"); return EXIT_SUCCESS; }
How old is frisky?
5
Frisky is a cat who is 2 years old
press any key to continue....
ok, where the duce is the 2 coming from. i never declared x to = 2, it was supposed to be user defined. help please? thank you in advance.... | http://cboard.cprogramming.com/cplusplus-programming/110324-problems-variable-class-decleration-prog.html | CC-MAIN-2014-41 | en | refinedweb |
Search:
Forum
Lounge
Promoting C++11 and the STL
Page 2
Promoting C++11 and the STL
Pages:
1
2
3
Dec 20, 2012 at 8:44pm UTC
chrisname
(7279)
Well, think of it this way. No, you don't go into calculus and study pre-calc, because it's presumed that when you study calculus, you've already studied pre-calc. But you don't go into calculus and immediately learn multivariate transformations. You start with simple functions.
Of course they shouldn't only teach you C, but IMO they should teach the fundamentals first. That's why it's called "fundamentals". It comes from the Latin word for foundation.
Dec 20, 2012 at 8:50pm UTC
DesiredNote
(238)
What was the point of this? Nobody said "hey let's just have beginners on a random forum teach C++ classes".
You had said the worst teachers of C++ are college professors. I had replied with a teacher worse than the average college professor - a beginner. Those learning from this teacher clearly do not know they are learning from someone ignorant of the language. But you are right, I doubt anyone has every said that.
Last edited on
Dec 20, 2012 at 8:50pm UTC
Dec 20, 2012 at 9:08pm UTC
htirwin
(957)
I actually have not taken a course in C++. My intro class started in C and then went to Java.
Still, my observation has been that instructors purposefully try to limit how much information they give per lecture. It is for a good reason they do this. It comes down to not overwhelming the core of the class with information. They have a strategy to teach you the most important stuff as effectively as possible in a given time frame.
This is why the best Universities (like MIT) use languages like python for intro classes. They don't want to need to worry about going over specific stuff like what is size_t. They want to make you into a good programmer who can learn, and be proficient in, any language.
Last edited on
Dec 20, 2012 at 9:55pm UTC
Dec 21, 2012 at 12:52am UTC
masterofpuppets690
(48)
I don't think C is a prerequisite to C++, learning Java is as helpful or any other C based language. Even memory management is different in C as it is in C++ so why learn mallocs when you're just gonna use new anyway? sure its good to know these things but C++ can be used in such a high level way that knowledge of C is not needed. Now thats not to say learning C isn't good. It is and I have the basics of C that if I ever need to do a C program I can transfer my C++ skills pretty easily. But when I see professors teaching students char* instead of std::string it just angers me because they are learning C++ not C for godsake. Use what the language gives you to your advantage. Good habits in C are not necessarily good habits in C++ and vice versa so why teach them? thats just my opinion though.
Dec 21, 2012 at 1:00am UTC
Grey Wolf
(4144)
I
wrote:
And the C v C++ style is BS as well.
Catfish3
wrote:
What do you mean, is BS?
Okay what I mean is, Procedural C++ is just that...Procedural C++. It is not C. If you want to learn C++, learn it how it relates to the paradigms that it supports. Don't throw out a large portion of what the language can do because you are misguided enough to thing that C++ should only be Object oriented or that skills in procedural programming are old school.
C is C and C++ isn't. Learn C is
not
a good way to learn procedural C++.
Dec 21, 2012 at 1:28am UTC
Luc Lieber
(1221)
When I hear "C vs C++ vs C++11", the first thing that comes to mind is "malloc vs new/delete vs smart pointers" and "calloc vs new[] /delete[] vs std::array".
Language paradigms hold no bearing on "deprecated" technologies...
"deprecated" isn't quite the correct term in hindsight...
Last edited on
Dec 21, 2012 at 1:30am UTC
Dec 21, 2012 at 1:31am UTC
masterofpuppets690
(48)
@Luc Lieber I'm not suggesting you are, I'm just curious but are you saying that C is a deprecated technology?
Dec 21, 2012 at 1:37am UTC
Luc Lieber
(1221)
Deprecated really isn't the right term as I've stated...it's quite a touchy subject, but in my
opinion
, when one is writing
modern C++
, certain artifacts of C should not remain, calloc / malloc / c-strings being the most obvious exclusions, except when optimization is both required and proven after the fact.
In other words, if someone wants to use C, use C. If someone wants to use C++11, why mix in dangerous C artifacts for no apparent reason?
Damn it, now I'm calling C dangerous...I'm starting to sound like a java programmer.
Last edited on
Dec 21, 2012 at 1:40am UTC
Dec 21, 2012 at 1:38am UTC
Grey Wolf
(4144)
"deprecated" technologies...
0_o
missed an update
Last edited on
Dec 21, 2012 at 1:41am UTC
Dec 21, 2012 at 1:41am UTC
masterofpuppets690
(48)
Oh I agree when it comes to C++ using cstrings and malloc and such is not useful but C the language is not and never will be deprecated. Keep C specific stuff for C is my opinion and in certain circumstances its good to use C specific stuff for C++ but I agree with you more or less that stuff like that shouldn't be really used in C++.
Dec 21, 2012 at 7:20am UTC
Stewbond
(2763)
Don't teach the C subset of the language.
For many students, they are learning how to
program
. C++ is just a tool. It's more important for them to learn the ideas behind programming first before they can be taught the intricacies of a language.
Don't worry about the return type, 'size_t' -- you can think of it as 'int'.
We can't just go off on tangents everytime we introduce a new function. It's more important for the student to know what functions are out than it is to than to discuss the specifics of typedef'd return types.
These courses are introductions only. Overloading beginners with the nitty gritty language intricacies unnecessarily distracts them from learning the core concepts of generic programming which is the aim of these beginner courses.
Last edited on
Dec 21, 2012 at 7:20am UTC
Dec 21, 2012 at 7:22am UTC
Stewbond
(2763)
^^^
That being said, I like the idea of promoting the C++ standard library. At work, we use a C/C++ mix and I know surprisingly few people that have ever pulled anything out of std:: or any other namespace. char* seems to be king which makes me shudder.
Dec 21, 2012 at 7:16pm UTC
closed account (
iw0XoG1T
)
I don't really understand some of the points being made.
oop is a paradigm, and procedural programming is also a paradigm; they are both just abstract ideas used to model a program.
So how is using an array instead of a vector, char* instead of string make one program oop and another procedural? My point is that the tools you use do not make your program oop or procedural it is how you solve the problem that makes it oop or procedural.
If I am wrong please correct me-- I would appreciate it ( This is a sincere statement).
Last edited on
Dec 21, 2012 at 7:17pm UTC
Dec 21, 2012 at 7:22pm UTC
masterofpuppets690
(48)
Well I think its more to do with how you use the language and its tools, string is a class of its own so if you're doing C++ why not use the string class instead of using char*, I see it loads where people are being taught C++ and the lecturer makes them use char* instead of string, for what reason I do not know.
Dec 21, 2012 at 7:49pm UTC
closed account (
iw0XoG1T
)
I see it loads where people are being taught C++ and the lecturer makes them use char* instead of string
I imagine that the reason is the lecturer is not a C++ programmer and they want to limit the students to what they are familiar with. I have seen this discussed before and the response is "we are teaching how to program, not how to program in C++".
Last edited on
Dec 21, 2012 at 7:50pm UTC
Dec 21, 2012 at 7:53pm UTC
masterofpuppets690
(48)
Its not a good enough excuse though, you can teach someone how to program with std::string too, why not teach both and let the programmer decide which s/he prefers. There's no reason you should limit someone to just char* and not std::string, now if someone can tell me if there is a good reason I'd listen but for me for what I know there is no good reason.
Dec 21, 2012 at 8:15pm UTC
closed account (
o1vk4iN6
)
let the programmer decide which s/he prefers
It's not about preference, it's about functionality. What do you think you are using when you use a string literal ?
Dec 21, 2012 at 8:19pm UTC
masterofpuppets690
(48)
I'm using a dynamically allocated string object as opposed to a stack initialised scalar char array, now I could be wrong and please tell me if I am.
Dec 21, 2012 at 8:19pm UTC
cire
(5609)
What do you think you are using when you use a string literal ?
An argument to a string constructor?
Dec 22, 2012 at 3:16am UTC
Luc Lieber
(1221)
We can't just go off on tangents everytime we introduce a new function.
No need for a tangent, just tell it as it is. Comparing apples to apples is a one-time, fits-all lesson.
College Professor
wrote:
Don't worry about the return type, 'size_t' -- you can think of it as 'int'.
I
wrote:
'size_t' is an unsigned integral data type that is defined in the <cstring> header. It may vary between platforms, and because of that, it's important
not to
compare 'size_t' to 'int', as there is no hard guarantee that the two data types are the same.
If you don't know what 'unsigned', 'integral', or 'platform' means, or are confused about the term 'data type', then I urge you to look over your notes from past courses dealing with the fundamentals of computer programming.
I can't think of a platform where 'size_t' can be safely compared to 'int', perhaps 'unsigned int', but even then there is no guarantee.
1
2
3
4
// my platform
int
x = -1 size_t y = 4294967295; x == y
I probably get worked up over this too much, but it's just so aggravating that this stuff is blatantly wrong and is still being casually taught to new students...
Last edited on
Dec 22, 2012 at 3:58am UTC
Pages:
1
2
3
C++
Information
Tutorials
Reference
Articles
Forum
Forum
Beginners
Windows Programming
UNIX/Linux Programming
General C++ Programming
Lounge
Jobs
|
v3.1
Spotted an error? contact us | http://www.cplusplus.com/forum/lounge/88608/2/ | CC-MAIN-2014-41 | en | refinedweb |
To all who have been waiting for me to develop my sound engine: I am sorry for the wait, but it is getting much closer to completion.
The main problem I was having was that the sound effects were not mixing, so only one sound could be played at a time. I have fixed this, and it was pure oversight on my part that caused the problem.
Shakti, you will probably want to read this closely, because the code you have so far only needs to be altered a little bit to play multiple sounds. Since you are not using 3D audiopaths, all you really need to do is set up a sound emitter class with a vector of sounds relating to that emitter. Here is how to fix the code:
In the call to IDirectMusicPerformance8::PlaySegmentEx() you must specify the segment as being a SECONDARY segment. I was specifying NULL, which defaulted to a PRIMARY segment. There can only be one primary segment playing at any one time; if another primary segment needs to play, the currently playing one is stopped and the new one is started. This is rather ugly. Primary segments are really intended for music use only, as they do not have much use for sound effects. When you declare the segment as a secondary segment, each instance of the playing sound is auto-mixed into the IDirectSoundBuffer8 relating to that sound and then mixed down into the primary buffer to be played. This is what you want to do.
Here are updated versions of the CDXAudio, CDXSegment, and CDXSoundEmitter classes.
Usage instructions
Instantiation/Creation of CDXAudio
First you must instantiate the CDXAudio class. Its constructor takes no parameters, so the class can simply be instantiated globally like this:
Initialization of CDXAudioInitialization of CDXAudioCode:
#include "CDXAudio.h"
CDXAudio SoundEngine;
Second, you must call CDXAudio::Create(). This will automatically set up the IDirectMusicPerformance8 and IDirectMusicLoader8 interfaces. The create function also does all of the necessary COM initialization so that you do not have to mess with it.
This is where it gets a little fuzzy. The way that DirectMusic works is the Loader loads the sample into the Segment and then the Segment downloads the data into the Performance. Then the Performance can play/stop and do other things with the sound. This makes for a very strange class setup. Here is what I opted to do.
Object-based sound engine
My sound engine is object-based. In other words, every sound played comes from some type of object in the world. So if you have a tank in your game that has engine sounds, bullet sounds, and firing sounds, it needs to load all of these sounds at initialization. This is quite simple really. Each object is a CDXSoundEmitter object. The CDXSoundEmitter object holds pointers to the IDirectMusicPerformance8 and IDirectMusicLoader8 interfaces, and they are declared as public for quick access. I saw no benefit in creating accessor functions just to return a pointer for the sake of simply hiding data; data hiding just adds unnecessary stack frame overhead in this case. If you want to implement ambient sound effects, simply create a CDXSoundEmitter and load all the ambient sounds into it. You may wish to derive from it in order to add your own functionality, such as adding randomness to when/how ambient sounds are played.
Initialization of CDXSoundEmitter
Ok so you have your CDXSoundEmitter. The class constructor does not take any parameters.
To init the class you MUST pass the valid IDirectMusicPerformance8 and IDirectMusicLoader8 interface pointers to the CDXSoundEmitter::Create() function. These were created when you called CDXAudio::Create(). To retrieve the pointers from CDXAudio use the following functions.
CDXAudio::GetPerformance()
CDXAudio::GetLoader()
Code example
Loading sounds into the CDXSoundEmitter class
Code:
#include "CDXAudio.h"
CDXAudio SoundEngine;
CDXSoundEmitter TestEmitter;
void Setup(void)
{
//Create the audio object and init
SoundEngine.Create();
TestEmitter.Create(SoundEngine.GetLoader(),SoundEngine.GetPerformance());
}
Ok now you have a valid CDXAudio object and a valid CDXSoundEmitter object. All that is left to do is load the sounds into the object.
This is done by calling CDXSoundEmitter::LoadSound(WCHAR *_Filename)
This next part is very important. Since any one object can emit more than one sound simply keeping track of one Segment to play is not sufficient. CDXSoundEmitter holds a vector of CDXSegment objects. Note that you do not and should not directly instantiate CDXSegment, this is already done for you. Here is what happens inside of CDXSoundEmitter::LoadSound()
- A temporary CDXSegment object is created.
- The file provided is opened and the data is loaded into the IDirectMusicSegment8 pointer (CDXSegment::Segment).
- The object is then added to the CDXSegment vector inside of CDXSoundEmitter.
- The temporary object is deleted and the size of the vector is returned to the caller. This value is the sound ID number and is extremely important. All future calls to play, stop, change volume, pan, frequency, etc., for this sound segment will be accessed via this ID number.
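The ID scheme these steps describe (LoadSound pushes a segment and returns the new vector size, so IDs start at 1 and map back to vector index ID - 1) is language-agnostic. Here is a small JavaScript sketch of the idea; it is an illustration only, not part of the CDX classes:

```javascript
// Sketch of the sound-ID scheme: loadSound returns the vector size
// after the push, so the first sound gets ID 1 and lookups use ID - 1.
class SoundEmitterSketch {
  constructor() { this.segments = []; }
  loadSound(filename) {
    this.segments.push({ filename: filename });
    return this.segments.length; // this is the sound ID
  }
  segmentFor(id) { return this.segments[id - 1]; }
}

const emitter = new SoundEmitterSketch();
const testSound = emitter.loadSound('test.wav'); // testSound === 1
```

Every later call (play, stop, volume) then only needs the returned ID, which is exactly why the caller must save it.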
NOTE: All access to the sounds for your sound emitter are provided through the CDXSoundEmitter class interface. Directly accessing CDXSegment class members should be avoided. In future releases I will ensure that CDXSegment cannot be misused in this way.
What this means for the programmer is that he/she does not have to mess with sound segments at all. All that needs to be done to operate on a certain sound is to save the ID number returned from CDXSoundEmitter::LoadSound() and then pass that ID number to the desired CDXSoundEmitter function. For instance here is code that will play a sound.
Code:
unsigned int test_sound=TestEmitter.LoadSound(L"test.wav");
TestEmitter.Play(test_sound);
To test to see if the sound in question is already playing simply do this:
Checking to see if the sound is currently stopped/playing
I have not provided a function to see if the sound has stopped because this can be deduced from IsPlaying().
Code:
...
if (TestEmitter.IsPlaying(sound_ID))
{
//Sound is currently playing
}
...
Setting the volume for the sound
To set the volume for the sound (only prototyped in the class - not functional yet):
Code:
...
TestEmitter.SetVolume(sound_ID,.5f);
...
For a complete description of all the classes and functions please consult CDXAudio.h
AudioPaths and 3D AudioPaths
This sound system does not implement 3D audiopaths or audiopaths as of yet and as such no spatially oriented sounds are supported. It is in my code base but has been disabled. It really isn't that much more code and simply means creating a class to encapsulate DirectMusic audio paths and then including a pointer to that class inside of CDXSoundEmitter. I'm tweaking that system right now. Also scripting is coded but not currently supported, but will be available soon. This will really aid in creating cool sound effects.
I also hope to include occlusion sound effects and other effects that can easily be done inside of DirectMusic.
Problems/Bug reports
Should you have any trouble using this module and/or have found a bug I've overlooked or would like to add some functionality, please post in this thread.
SFX Latency
I'm also working on some code to interface with the driver to dial down the latency for sound effects to insane levels. This is only supported under DirectX9 and older sound cards might play some sound glitches if you attempt to use this on them. Music as of yet is also not completed. In my engine the music and sound effects are SEPARATE Performances. This will eliminate pchannel, groove level, chord progression, and tempo change interference between music and sfx. Thanks to Scott Selfon and Todor J. Fay for writing DirectX9 Audio Exposed. Much of my sound code and sound system structure was influenced by their suggestions.
Included in the ZIP
Here are the files in the zip:
CDXAudio.h - header for CDXAudio.cpp
CDXAudio.cpp - sound engine module
CQuickCom.h - macros for working with COM
The pitch of the audio source.
Pitch is a quality that makes a melody go higher or lower. As an example, imagine playing an audio clip with the pitch set to one. Increasing the pitch while the clip plays makes the clip sound higher; decreasing the pitch below one makes the clip sound lower.
//Attach this script to a GameObject.
//Attach an AudioSource to your GameObject (Click Add Component and go to Audio>Audio Source). Choose an audio clip in the AudioClip field.
//This script sets the pitch of the audio at the start, and then gradually turns it down to 0 as time passes.
using UnityEngine;
//Make sure there is an Audio Source component on the GameObject
[RequireComponent(typeof(AudioSource))]
public class ExampleScript : MonoBehaviour
{
    public int startingPitch = 4;
    public int timeToDecrease = 5;
    AudioSource audioSource;

    void Start()
    {
        //Fetch the AudioSource from the GameObject
        audioSource = GetComponent<AudioSource>();

        //Initialize the pitch
        audioSource.pitch = startingPitch;
    }

    void Update()
    {
        //While the pitch is over 0, decrease it as time passes.
        if (audioSource.pitch > 0)
        {
            audioSource.pitch -= Time.deltaTime * startingPitch / timeToDecrease;
        }
    }
}
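The Update() logic above is a linear ramp: every frame it subtracts Time.deltaTime * startingPitch / timeToDecrease, so the pitch reaches 0 after roughly timeToDecrease seconds. Here is a framework-free JavaScript sketch of that arithmetic (the stepPitch helper is made up for the illustration):

```javascript
// Simulates the Update() ramp with a fixed frame time of 0.1 s.
function stepPitch(pitch, dt, startingPitch, timeToDecrease) {
  return Math.max(0, pitch - dt * startingPitch / timeToDecrease);
}

let pitch = 4;      // startingPitch
let elapsed = 0;
const dt = 0.1;     // simulated frame time in seconds
while (pitch > 0) {
  pitch = stepPitch(pitch, dt, 4, 5); // startingPitch 4, timeToDecrease 5
  elapsed += dt;
}
// pitch hits 0 after about 5 seconds of simulated time
```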
Another example:
using UnityEngine;
// A script that plays your chosen song. The pitch starts at 1.0.
// You can increase and decrease the pitch and hear the change
// that is made.
public class AudioExample : MonoBehaviour
{
    public float pitchValue = 1.0f;
    public AudioClip mySong;

    private AudioSource audioSource;
    private float low = 0.75f;
    private float high = 1.25f;

    void Awake()
    {
        audioSource = GetComponent<AudioSource>();
        audioSource.clip = mySong;
        audioSource.loop = true;
    }

    void OnGUI()
    {
        pitchValue = GUI.HorizontalSlider(new Rect(25, 75, 100, 30), pitchValue, low, high);
        audioSource.pitch = pitchValue;
    }
}
Ticket #18737 (closed defect: fixed)
Linux guests: open(filename, O_CREAT|..., mode) fails inside shared folder => fixed in SVN/6.0.x x>10
Description
The following C code produces a simple executable which will attempt to create and open a read only test file in a single operation.
Expected behaviour when run:
- A newly created file called test.txt is created on disk with read only permissions
- A read-write file-handle for the file is returned by the open call
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <errno.h>
#include <stdio.h>

int main()
{
    int fd = open("test.txt", O_CREAT | O_EXCL | O_RDWR, 0444);
    if (fd < 0) {
        perror("Could not create file: ");
        return -fd;
    }
    return 0;
}
When run on other file systems this works correctly but when run on a virtualbox shared folder the file is created correctly but the program exits with the error message:
Could not create file: : Permission denied
This can be bypassed by running the command with sudo, in which case it succeeds normally.
This is not a purely hypothetical problem, since the same error occurs when attempting to use any programme that makes this system call, such as a git clone.
Attachments
Change History
comment:2 Changed 15 months ago by justinsteven
Can confirm this was an issue for me on 6.0.8 and is still an issue on 6.0.10.
- Host: Debian Linux
- Guest: Debian Linux (running Guest Additions 6.0.10)
- Host filesystem: ext4 (mounted rw,relatime,data=ordered)
- Guest: vboxsf volume mounted rw,nodev,relatime,iocharset=utf8,uid=<my_uid>,gid=<my_gid>
OP's reproducer reproduces the issue for me.
I'm also having issues with git repos on a vboxsf volume.
comment:3 Changed 15 months ago by justinsteven
Changed 15 months ago by justinsteven
- attachment poc_18737.py
added
comment:4 Changed 15 months ago by justinsteven
I've written a reproducer in Python and tried all file modes from 0o000 to 0o777 using flags O_CREAT | O_EXCL | O_RDWR.
If I map /home/justin/shared to my VirtualBox guest (mounted as /home/justin/shared within the guest) and place the reproducer in shared/test_file_creation, then the poc runs fine from my host but exhibits failures when run from the guest.
In summary, the following file modes are OK in the guest but everything else fails:
0o0   ---------  PASS
0o2   -------w-  PASS
0o20  ----w----  PASS
0o22  ----w--w-  PASS
0o6** rw-******  PASS
0o7** rwx******  PASS
(Where "*" is of course anything)
Curiously, mode 0o222 fails:
% python3
Python 3.5.3 (default, Sep 27 2018, 17:25:39)
[GCC 6.3.0 20170516] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import os
>>> os.open("testfile", os.O_CREAT | os.O_EXCL | os.O_RDWR, 0o222)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
PermissionError: [Errno 13] Permission denied: 'testfile'
I've attached poc_18737.py from which I got the above results.
comment:5 Changed 14 months ago by justinsteven
This still happens with VBoxGuestAdditions_6.0.12-132672.iso
comment:6 Changed 14 months ago by paulson
- Owner set to paulson
- Status changed from new to accepted
- Summary changed from Open system call to create and open read only file fails on shared folder to Linux guests: open(filename, O_CREAT|..., mode) fails inside shared folder
comment:7 follow-up: ↓ 8 Changed 14 months ago by paulson
- Status changed from accepted to closed
- Resolution set to fixed
- Summary changed from Linux guests: open(filename, O_CREAT|..., mode) fails inside shared folder to Linux guests: open(filename, O_CREAT|..., mode) fails inside shared folder => fixed in SVN/6.0.x x>10
This is a regression which was introduced in VirtualBox 6.0.6 and affects Linux guests which have a kernel > 3.16.0 and utilize the atomic_open() filesystem interface. The file creation succeeds but the follow-up call to the Linux kernel routine finish_open() happened without FMODE_CREATED set in the file->f_mode structure element which then erroneously returns EACCES for unprivileged users.
This has been fixed in trunk and the fix has also been backported to VirtualBox 6.0.x (x > 10) and any 6.0.x Testbuilds with a revision >= r132861. This issue doesn't apply to VirtualBox 5.2.x.
comment:8 in reply to: ↑ 7 Changed 14 months ago by socratis
This is a regression which was introduced in VirtualBox 6.0.6 ... This has been fixed ... with a revision >= r132861.
I added the ticket and the fixed revision in the Discuss the 6.0.6 release thread in the forums, thanks @paulson.
comment:9 Changed 14 months ago by justinsteven
The issue is fixed for me using VBoxGuestAdditions_6.0.11-132973.iso on VirtualBox 6.0.10 (Both the Python reproducer, as well as "git clone")
Thanks @paulson
Can confirm that this problem is not present on v6.0.4.
[Java] Sending email using Amazon SES
Introduction
Since I recently had the opportunity to use AWS SES (Simple Email Service), I would like to summarize, partly as a review, how to send email from a Java program. There are two approaches: using SMTP and using the AWS SDK. For now, I will describe the SMTP method; I will add the AWS SDK method later.
About Amazon SES
- Amazon SES (Simple Email Service) is an email delivery service provided by Amazon.
- Low setup cost; small-scale, inexpensive operation is possible
- Send the first 62,000 emails every month free of charge (SES fee)
- Mail logs can be saved and analyzed using other AWS services
Preparation (SES console)
Before creating a program, you need to:
Sender (From) address authentication, Destination (To) address authentication
The account is created as a new user in the test environment called sandbox, so you can only send and receive emails with the email address you have verified. To send emails to unverified email addresses, increase the number of emails you can send per day, and send emails fast, you need to move your account out of the sandbox.
Procedure
- Log in to AWS and open the SES console
- Click “Email Addresses”
- Click the “Verify a New Email Address” button
- Enter the email address to be verified and click the “Verify This Email Address” button
- Click the link in the received email
- When you reopen the screen from step 2, the verification status of the specified email address shows as verified.
Your email address is now verified. You can click the “Send a Test Email” button to see if you can send and receive email.
Create an SMTP user and get credentials (username and password)
Procedure
- Log in to AWS and open the SES console
- Click “SMTP Settings”
- Click the “Create My SMTP Credentials” button
- Enter the SMTP user name and click the “Create” button (the name can be the default value)
- Click View user’s SMTP security credentials
- Copy the displayed authentication information and save it in a safe place, or download csv with “Download authentication information”
You can see the SMTP user created in IAM Console> Access Management> Users.
Operating environment
- eclipse Version: 2020-06 (4.16.0)
- java version
$ java -version
java version "1.8.0_231"
Java(TM) SE Runtime Environment (build 1.8.0_231-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.231-b11, mixed mode)
Java program
- Create a Maven project in eclipse
- Add the JavaMail dependency to pom.xml (search for JavaMail on MvnRepository and add the latest version of the jar)
- Creating an email sending program
pom.xml
<!-- Dependencies -->
<dependencies>
    <dependency>
        <groupId>com.sun.mail</groupId>
        <artifactId>javax.mail</artifactId>
        <version>1.6.2</version>
    </dependency>
</dependencies>
SesSmtpSample.java
import java.util.Properties;

import javax.mail.Message;
import javax.mail.Session;
import javax.mail.Transport;
import javax.mail.internet.InternetAddress;
import javax.mail.internet.MimeMessage;

public class SesSmtpSample {

    // Sender address and sender name (email address verified in the SES console)
    static final String FROM = "[email protected]";
    static final String FROMNAME = "Sender Name";

    // Destination address (email address verified in the SES console)
    static final String TO = "[email protected]";

    // SMTP username and password created in the SES console
    static final String SMTP_USERNAME = "smtp_username";
    static final String SMTP_PASSWORD = "smtp_password";

    // Specify the Config Set created on the SES console. It is used when saving the mail log.
    // It is unnecessary this time, so comment it out.
    // static final String CONFIGSET = "ConfigSet";

    // Amazon SES SMTP endpoint (us-west-2 if the region is Oregon)
    static final String HOST = "email-smtp.us-west-2.amazonaws.com";

    // Amazon SES SMTP endpoint port number
    static final int PORT = 587;

    // Email subject
    static final String SUBJECT = "Amazon SES test (SMTP interface accessed using Java)";

    // Body
    static final String BODY = String.join(
        System.getProperty("line.separator"),
        "<h1>Amazon SES SMTP Email Test</h1>",
        "<p>This email was sent with Amazon SES using the ",
        "<a href=''>Javamail Package</a>",
        "for <a href=''>Java</a>."
    );

    public static void main(String[] args) throws Exception {

        // define the SMTP server
        Properties props = System.getProperties();
        props.put("mail.transport.protocol", "smtp");
        props.put("mail.smtp.port", PORT);
        props.put("mail.smtp.starttls.enable", "true");
        props.put("mail.smtp.auth", "true");

        // establish a mail session
        Session session = Session.getDefaultInstance(props);

        // compose the email
        MimeMessage msg = new MimeMessage(session);
        msg.setFrom(new InternetAddress(FROM, FROMNAME));
        msg.setRecipient(Message.RecipientType.TO, new InternetAddress(TO));
        msg.setSubject(SUBJECT);
        msg.setContent(BODY, "text/html");

        // Set the Configuration Set. It is not used this time, so comment it out.
        // msg.setHeader("X-SES-CONFIGURATION-SET", CONFIGSET);

        Transport transport = session.getTransport();

        // send the e-mail
        try {
            System.out.println("Sending...");

            // connect to the SMTP server
            transport.connect(HOST, SMTP_USERNAME, SMTP_PASSWORD);

            // send the e-mail
            transport.sendMessage(msg, msg.getAllRecipients());
            System.out.println("Email sent!");
        } catch (Exception ex) {
            System.out.println("The email was not sent.");
            System.out.println("Error message: " + ex.getMessage());
        } finally {
            // close the connection
            transport.close();
        }
    }
}
SMTP endpoints vary by region. You can find them in the AWS reference documentation.
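As the HOST constant in the sample suggests, SES SMTP endpoints follow the pattern email-smtp.<region>.amazonaws.com. A tiny helper to build the host name for a region (hypothetical, not part of JavaMail or the AWS SDK):

```javascript
// Builds the SES SMTP host name for a region, e.g. us-west-2 (Oregon).
function sesSmtpEndpoint(region) {
  return 'email-smtp.' + region + '.amazonaws.com';
}

const host = sesSmtpEndpoint('us-west-2'); // 'email-smtp.us-west-2.amazonaws.com'
```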
in conclusion
Amazon SES has Japanese documentation, so if you follow it you should basically be able to send email without problems.
Maybe the Zope-Dev guys have comments on this.... > >From: Jim Fulton <[EMAIL PROTECTED]> > > > >Michel, > > > >You have advocated that methods should always be bound to the objects they > >are accessed in. You argue that there should be no choice in the matter. > > > >I have to disagree strongly. I'll try to explain why. > > > >In Python, methods are bound to instances. Methods are part > >of an instance's core behavior. They are specific to the kind > >of thing the instance is. In my words, methods are part of > >the genetic makeup of an object. > > > >In Zope, we allow some methods to be bound to their context. > >This is done in a number of ways and is sometimes very useful. > >We have methods, like standard_html_header, which are designed > >to be used in different contexts. > > > >We have other methods, like manage_edit that are designed to > >work on specific instances. It would be an egregious error > >if this method was acquired and applied to it's context. > > > >We have some methods that are designed to bound to an instance > >(container, in your terminology) but that, because they are written in > >DTML, can be bound to other objects. This can cause significant problems. > >For example, methods defined in ZClasses almost always want to be > >bound to ZClass instances, not to other arbitrary objects. > > > ><aside>There's a bonus problem with DTML Methods. When > >a DTML Method is invoked from another DTML Method, it > >is bound to neither the object it was accessed in or > >to the object it came from. It is bound to the calling > >namespace. It turns out that this is a useful behavior > >if the DTML Method is designed to be used as a "subtemplate". > ></aside> > > > >There is no one "right" way to bind a method. There are good > >reasons to sometimes bind a method to it's context and > >sometimes bind a method to it's container (ie instance). > >There are even sometimes reasons to bind a method to a > >calling namespace. 
> > > >The principle of least surprise doesn't help here, because > >methods defined in Python classes don't behave the way > >methods defined through the web do currently. > > > >We *need* control over binding, as well as reasonable defaults. > > > >If we can agree that we need binding control, the question > >arises as to some details and default names. > > > >Should it be possible to do more than one binding at a time, > >using multiple names? If not, then I'd agree that the name > >'self' should be used for either the context or container binding. > >If both bindings are allowed at the same time, then 'self' should > >refer to container binding to be consistent with standard Python > >usage and some name like 'context' should be used (by default) > >for contextual binding. > > > >Jim _______________________________________________ Zope-Dev maillist - [EMAIL PROTECTED] ** No cross posts or HTML encoding! ** (Related lists - )
This page discusses - Introduction to Dojo and Tips
Dojo is an open-source JavaScript toolkit that provides a simple Application Programming Interface (API) for building serious applications in less time, including functionality for making HTTP requests and building dynamic interfaces. Compared to most other frameworks, Dojo has a few architectural differences. Dojo scripts always include the package name in an object reference: the toolkit has a very nice for-each loop function, for instance, but you refer to it as dojo.lang.forEach(listOfThings, myFunc); instead of just forEach(listOfThings, myFunc);. This increases readability when debugging or refactoring things later. Likewise, if you want to refer to a DOM element the "Dojo way", you write dojo.byId("someId");. Another big philosophical difference between Dojo and Prototype is that Prototype has a long and glorious history of changing built-in JavaScript objects, such as adding useful new functions to the String object. This can result in collisions or other erratic behavior if you use other JavaScript libraries that change, or assume, the functionality of the very same function names. By using namespaces, Dojo ensures that no collisions occur between itself and any other JavaScript libraries on the same page. In this article I'm going to use the Dojo API.
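The namespacing idea can be illustrated without Dojo at all. In plain JavaScript, libraries that keep their functions inside their own namespace objects cannot clobber each other or the built-ins (dojoLike and otherLib below are made-up names for the illustration):

```javascript
// Two "libraries" each define forEach inside their own namespace object,
// so neither overwrites the other or any built-in object.
var dojoLike = {
  lang: {
    forEach: function (list, fn) {
      for (var i = 0; i < list.length; i++) fn(list[i]);
    }
  }
};
var otherLib = {
  forEach: function (list, fn) {
    // a different implementation: iterates in reverse
    for (var i = list.length - 1; i >= 0; i--) fn(list[i]);
  }
};

var out = [];
dojoLike.lang.forEach([1, 2, 3], function (x) { out.push(x); });
// out is [1, 2, 3]; otherLib.forEach remains untouched and distinct
```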
Posted on: April 18, 2011
Name | Synopsis | Description | Return Values | VALID STATES | Errors | TLI COMPATIBILITY | Attributes | See Also
#include <xti.h>

int t_close(int fd);
Upon successful completion, a value of 0 is returned. Otherwise, a value of –1 is returned and t_errno is set to indicate an error.
T_UNBND
On failure, t_errno is set to the following:
TBADF
    The specified file descriptor does not refer to a transport endpoint.

An error return value that can be set by the XTI interface and cannot be set by the TLI interface is:
See attributes(5) for descriptions of the following attributes:
close(2), t_getstate(3NSL), t_open(3NSL), t_unbind(3NSL), attributes(5)
Aurelia element animation with custom attribute
I've been exploring Aurelia javascript UI framework recently to get some experience needed for our next big project. One thing that I couldn't implement out of the box was a kind of animation.
I have a grid of values bound to View Model. View Model communicates to server, receives any updates of data and the grid got immediately updated, all that works great with Aurelia. Now I want to highlight the cell which has just received an updated value with a small background animation, like this:
Aurelia has a library called aurelia-animator-css with a helper class to run CSS animation. If you use it directly in your View Model, you will end up with the code like
this.newMessageReceived = msg => {
  this.data.filter(i => i.id === msg.id).forEach(i => {
    let editedItemIdx = this.data.indexOf(i);
    var elem = this.element.querySelectorAll('tbody tr')[editedItemIdx + 1]
                           .querySelectorAll('td')[3];
    this.animator.addClass(elem, 'background-animation').then(() => {
      this.animator.removeClass(elem, 'background-animation');
    });
  });
};
So we get a new message, find the related item in our data, then find the index of that data. Then we use this index in query selector to get the exact row that needs animation, find the cell by hard coded index, and finally use animator to highlight the background.
Ouch... That smells. We spoiled our view model with view details, and all this code is very ugly and fragile.
Good news: we can improve the solution with the Aurelia's feature called Custom Attributes. Let's create a new
javascript file and call it
animateonchange.js:
import {customAttribute} from 'aurelia-framework';

@customAttribute('animateonchange')
export class AnimateOnChangeCustomAttribute {
}
I declared a class for our new attribute, so far it's empty. I imported customAttribute decorator from
Aurelia framework: that the way we can define a name for our custom attribute. This can be avoided: if I
change the name to
AnimateonchangeCustomAttribute, Aurelia will infer the name from class name, but I want
to stay explicit and keep the class name readable. Note that capital letters are not allowed in attribute name.
Now, let's declare the constructor of the new class and inject all the dependencies:

import {customAttribute, inject} from 'aurelia-framework';
import {CssAnimator} from 'aurelia-animator-css';

@customAttribute('animateonchange')
@inject(Element, CssAnimator)
export class AnimateOnChangeCustomAttribute {
  constructor(element, animator) {
    this.element = element;
    this.animator = animator;
  }
}
I used dependency injection to get the attribute's element and the CSS animator and save them into class fields. Here's how to use them:

export class AnimateOnChangeCustomAttribute {
  constructor(element, animator) {
    this.element = element;
    this.animator = animator;
    this.initialValueSet = false;
  }

  valueChanged(newValue) {
    if (this.initialValueSet) {
      this.animator.addClass(this.element, 'background-animation').then(() => {
        this.animator.removeClass(this.element, 'background-animation');
      });
    }
    this.initialValueSet = true;
  }
}
The new method
valueChanged will be called every time the bound value changes. I want to ignore the
first value (it's not an update yet), so I did that with
initialValueSet flag. Then I just run CSS
animator. No DOM-related queries!
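The skip-the-first-value pattern used by valueChanged() is framework-independent. Here is a plain JavaScript sketch of the same idea (the makeChangeReactor helper is hypothetical, made up for the illustration):

```javascript
// The first call records the initial value; later calls trigger the
// side effect (here, recording an animation request).
function makeChangeReactor(onChange) {
  let initialValueSet = false;
  return function valueChanged(newValue) {
    if (initialValueSet) onChange(newValue);
    initialValueSet = true;
  };
}

const animated = [];
const react = makeChangeReactor(v => animated.push(v));
react(10);   // initial binding: no animation
react(11);   // update: animates
react(12);   // update: animates
// animated is [11, 12]
```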
Here is how we use the custom attribute from a view:
<template>
  <require from="./animateonchange"></require>
  <table class="table">
    <tr repeat.for="item of data">
      <td>${item.value1}</td>
      <td>${item.value2}</td>
      <td animateonchange.bind="item.value3ToUpdate">${item.value3ToUpdate}</td>
      <td>${item.value4}</td>
    </tr>
  </table>
</template>
First, we use
require element to import custom attribute definition (make sure the path is correct
and no
.js extension is present).
Second, we use
animateonchange.bind to bind the value to the custom attributes. And it works!
Of course, you need to define the CSS class, e.g.
.background-animation-add {
  -webkit-animation: changeBack 0.5s;
  animation: changeBack 0.5s;
}

.background-animation-remove {
  -webkit-animation: fadeIn 0.5s;
  animation: fadeIn 0.5s;
}

@-webkit-keyframes changeBack {
  0%   { background-color: white; }
  50%  { background-color: lightgreen; }
  100% { background-color: white; }
}

@keyframes changeBack {
  0%   { background-color: white; }
  50%  { background-color: lightgreen; }
  100% { background-color: white; }
}
Here is a plunkr link to a complete example
Happy coding!
Useful links:
Getting started with Azure Application Insights in Aurelia
Azure Application Insights is an analytics service to monitor live web applications, diagnose performance issues, and understand what users actually do with the app. Aurelia is a modern and slick single-page application framework. Unfortunately, there's not much guidance on the web about how to use AppInsights and Aurelia together in a proper manner. The task gets even more challenging in case you are using TypeScript and want to stay in type-safe land. This post will set you up and running in no time.
Get Your AppInsights Instrumentation Key
If not done yet, go register in Azure Application Insights portal. To start sending telemetry data from your application you would need a unique identifier of your web application, which is called an Instrumentation Key (it's just a guid). See Application Insights for web pages walk-through.
Install a JSPM Package
I'm using JSPM as a front-end package manager for Aurelia applications. If you use it as well, run the following command to install AppInsights package:
jspm install github:Microsoft/ApplicationInsights-js
it will add a line to
config.js file:
map: {
  "Microsoft/ApplicationInsights-js": "github:Microsoft/ApplicationInsights-js@<version>",
  ...
To keep the names simple, change the line to
"ApplicationInsights": "github:Microsoft/[email protected]",
Do exactly the same change in
project.json file,
jspm ->
dependencies section.
Create an Aurelia Plugin
In order to track Aurelia page views, we are going to plug into the routing pipeline with a custom plugin. Here is how my plugin looks like in JavaScript (see TypeScript version below):
// app-insights.js
export class AppInsights {
  client;

  constructor() {
    let snippet = {
      config: {
        instrumentationKey: 'YOUR INSTRUMENTATION KEY GUID'
      }
    };
    let init = new Microsoft.ApplicationInsights.Initialization(snippet);
    this.client = init.loadAppInsights();
  }

  run(routingContext, next) {
    this.client.trackPageView(routingContext.fragment, window.location.href);
    return next();
  }
}
The constructor instantiates an AppInsights client. It is used inside a
run method,
which would be called by Aurelia pipeline during page navigation.
Add the Plugin to Aurelia Pipeline
Go the the
App class of your Aurelia application. Import the new plugin
// app.js
import {AppInsights} from './app-insights';
and change the
configureRouter method to register a new pipeline step:
configureRouter(config, router): void {
  config.addPipelineStep('modelbind', AppInsights);
  config.map(/* routes are initialized here */);
}
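The pipeline-step contract used here (a run(routingContext, next) method that must return next()) can be sketched without Aurelia. The runPipeline function below is an illustration of the idea, not Aurelia's actual implementation:

```javascript
// Each step receives the routing context and a next() continuation,
// and returns next() to keep the pipeline moving.
function runPipeline(steps, context) {
  let i = 0;
  function next() {
    const step = steps[i++];
    return step ? step.run(context, next) : Promise.resolve('done');
  }
  return next();
}

const tracked = [];
const trackingStep = {
  run(ctx, next) {
    tracked.push(ctx.fragment); // e.g. send a page view to telemetry
    return next();
  }
};
runPipeline([trackingStep], { fragment: '/home' });
// tracked now contains '/home'
```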
After re-building the application, you should be all set to go. Navigate several pages and wait for events to appear in Application Insights portal.
TypeScript: Obtain the Definition File
If you are using TypeScript, you are not done yet. In order to compile the
AppInsights
plugin you need the type definitions for
ApplicationInsights package. Unfortunately,
at the time of writing there is no canonical definition in
typings registry, so
you will have to provide a custom
.d.ts file. You can download mine from
my github.
I created it based on a file from
this NuGet repository.
I've put it into the
custom_typings folder and then made the following adjustment
to
build/paths.js file of Aurelia setup:
dtsSrc: [
  'typings/**/*.d.ts',
  'custom_typings/**/*.d.ts'
],
For the reference, here is my TypeScript version of the
AppInsights plugin:
import {NavigationInstruction, Next} from 'aurelia-router';
import {Microsoft} from 'ApplicationInsights';

export class AppInsights {
  private client: Microsoft.ApplicationInsights.AppInsights;

  constructor() {
    let snippet = {
      config: {
        instrumentationKey: 'YOUR INSTRUMENTATION KEY GUID'
      },
      queue: []
    };
    let init = new Microsoft.ApplicationInsights.Initialization(snippet);
    this.client = init.loadAppInsights();
  }

  run(routingContext: NavigationInstruction, next: Next): Promise<any> {
    this.client.trackPageView(routingContext.fragment, window.location.href);
    return next();
  }
}
Conclusion
This walk-through should get you started with Azure Application Insights in your Aurelia application. Once you have page view metrics coming into the dashboard, spend more time to discover all the exciting ways to improve your application with Application Insights.
This is not the current version. View the latest documentation
Realm React Native enables you to efficiently write your app’s model layer in a safe, persisted and fast way. Here’s what it looks like:
// Define your models and their properties
class Car {}
Car.schema = {
  name: 'Car',
  properties: {
    make: 'string',
    model: 'string',
    miles: 'int',
  }
};

class Person {}
Person.schema = {
  name: 'Person',
  properties: {
    name: {type: 'string'},
    cars: {type: 'list', objectType: 'Car'},
    picture: {type: 'data', optional: true}, // optional property
  }
};

// Get the default Realm with support for our objects
let realm = new Realm({schema: [Car, Person]});

// Create Realm objects and write to local storage
realm.write(() => {
  let myCar = realm.create('Car', {
    make: 'Honda',
    model: 'Civic',
    miles: 1000,
  });
  myCar.miles += 20; // Update a property value
});

// Query Realm for all cars with a high mileage
let cars = realm.objects('Car').filtered('miles > 1000');

// Will return a Results object with our 1 car
cars.length // => 1

// Add another car
realm.write(() => {
  let myCar = realm.create('Car', {
    make: 'Ford',
    model: 'Focus',
    miles: 2000,
  });
});

// Query results are updated in realtime
cars.length // => 2
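The "query results are updated in realtime" behavior at the end of the example can be mimicked in plain JavaScript to see the idea: a Results-like handle re-runs its query on access, so objects added later show up without re-querying. TinyStore below is an illustration only, not Realm's implementation:

```javascript
// A Results-like handle whose length getter re-evaluates the filter
// against the store on every access.
class TinyStore {
  constructor() { this.objects = []; }
  create(obj) { this.objects.push(obj); return obj; }
  filtered(predicate) {
    const store = this;
    return {
      get length() { return store.objects.filter(predicate).length; }
    };
  }
}

const store = new TinyStore();
store.create({ make: 'Honda', miles: 1020 });
const cars = store.filtered(c => c.miles > 1000);
// cars.length === 1 here
store.create({ make: 'Ford', miles: 2000 });
// cars.length === 2: the existing handle sees the new object
```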
Getting Started
Follow the installation instructions below to use Realm React Native via npm, or see the source on GitHub.
Prerequisites
- Make sure your environment is set up to run React Native applications. Follow the React Native instructions for getting started.
- Apps using Realm can target both iOS and Android.
- React Native 0.20.0 and later is supported.
Installation
Create a new React Native project:
react-native init <project-name>
Change directories into the new project (cd <project-name>) and add the realm dependency:
npm install --save realm
Next use react-native to link your project to the realm native module.
react-native link realm
You’re now ready to go. To see Realm in action, add the following as the definition for your
class <project-name> in
index.ios.js or
index.android.js:
const Realm = require('realm');

class <project-name> extends Component {
  render() {
    let realm = new Realm({
      schema: [{name: 'Dog', properties: {name: 'string'}}]
    });

    realm.write(() => {
      realm.create('Dog', {name: 'Rex'});
    });

    return (
      <View style={styles.container}>
        <Text style={styles.welcome}>
          Count of Dogs in Realm: {realm.objects('Dog').length}
        </Text>
      </View>
    );
  }
}
You can then run your app on a device and in a simulator!

Models

let realm = new Realm({schema: [CarSchema, PersonSchema]});
If you’d prefer your objects inherit from an existing class, you just need to define the schema on the object constructor and pass in the constructor when creating a realm:
class Person {
    get ageSeconds() {
        return Math.floor((Date.now() - this.birthday.getTime()));
    }
    get age() {
        return this.ageSeconds / 31557600000;
    }
}
Person.schema = PersonSchema;

// Note here we are passing in the `Person` constructor
let realm = new Realm({schema: [CarSchema, Person]});
Once you have defined your object models you can create and fetch objects from the realm:
realm.write(() => {
    let car = realm.create('Car', {
        make: 'Honda',
        model: 'Civic',
        miles: 750,
    });

    // you can access and set all properties defined in your model
    console.log('Car type is ' + car.make + ' ' + car.model);
    car.miles = 1500;
});
Basic Property Types
Object Properties
For object types you specify the name property of the object schema you are referencing:

let realm = new Realm({schema: [CarSchema, PersonSchema]});
When accessing object properties, you can access nested properties using normal property syntax:
realm.write(() => {
    var nameString = person.car.name;
    person.car.miles = 1100;

    // create a new Car by setting the property to valid JSON
    person.van = {make: 'Ford', model: 'Transit'};

    // set both properties to the same car instance
    person.car = person.van;
});
List Properties
For list properties, you specify type: 'list' together with the objectType of the objects the list contains (as in the cars property of the PersonSchema at the top of this page).
All changes to an object (addition, modification and deletion) must be done within a write transaction.
Write transactions incur non-negligible overhead - you should architect your code to minimize the number of write transactions.
Creating Objects
As shown above, objects are created using the create method:
let realm = new Realm({schema: [CarSchema]});

realm.write(() => {
    realm.create('Car', {make: 'Honda', model: 'Accord', drive: 'awd'});
});
Nested Objects
If an object has object properties, values for those properties can be created recursively by specifying JSON values for each child property:
let realm = new Realm({schema: [PersonSchema, CarSchema]});
Multiple Realms

let realmAtAnotherPath = new Realm({
    path: 'anotherRealm.realm',
    schema: [CarSchema]
});
Default Realm Path
You may have noticed in all previous examples that the path argument has been omitted. In this case the default Realm path is used. You can access and change the default Realm path using the Realm.defaultPath global property.
Schema Version
The last option to consider is schemaVersion, which must be incremented whenever the schema of a Realm is changed:

let realm = new Realm({schema: [PersonSchema]});

const UpdatedPersonSchema = {
    // The schema name is the same, so previous `Person` object
    // in the Realm will be updated
    name: 'Person',
    properties: {
        name: 'string',
        dog: 'Dog' // new property
    }
};

// this will throw because the schema has changed
// and `schemaVersion` is not specified
let realm = new Realm({schema: [UpdatedPersonSchema]});

// this will succeed and update the Realm to the new schema
let realm = new Realm({schema: [UpdatedPersonSchema], schemaVersion: 1});
If you wish to retrieve the current schema version of a Realm, you may do so with the Realm.schemaVersion method.
let currentVersion = Realm.schemaVersion(Realm.defaultPath);
Change Events
Change events are sent out when write transactions are completed. To register for change events:
// Observe Realm Change Events
realm.addListener('change', () => {
    // Update UI
    ...
});

// Unregister all listeners
realm.removeAllListeners();
React Native ListView
If you’d like to use
List or
Results instances as data for a
ListView, it is highly recommended that you use the
ListView and
ListView.DataSource provided by the
realm/react-native module:
import { ListView } from 'realm/react-native';
The API is exactly the same as React.ListView, so you can refer to the ListView documentation for usage.

Encryption

var realm = new Realm({schema: [CarObject], encryptionKey: key});
The Realm Mobile Platform
The Realm Mobile Platform extends the Realm Mobile Database across the network, enabling automatic synchronization of data across devices.
Enabling this for React Native. | https://realm.io/docs/javascript/0.14.0/ | CC-MAIN-2018-09 | en | refinedweb |
Creating a custom EPiServer paging control

The standard paging control uses JavaScript postbacks (which means for search engines the links won't work). I therefore decided to create my own.
To start with I went to the EPiServer SDK to have a look at the PagingControl class. Here I was able to see the namespace and what .dll file it lives in (EPiServer.dll). I then opened up Reflector, to inspect EPiServer.dll and have a closer look at the code.
Description of Reflector by Red Gate.
.NET Reflector enables you to easily view, navigate, and search through, the class hierarchies of .NET assemblies, even if you don’t have the code for them. With it, you can decompile and analyze .NET assemblies in C#, Visual Basic, and IL.
To find the PagingControl class you can either use the search feature or navigate the namespaces.
The great thing is that all the methods are virtual which means that we can override them :).
Start by creating a new class and inherit from PagingControl (remember to add using EPiServer.Web.WebControls;).
Then create a new page template with a PageList control.
Notice that we set the PageList’s PagingControl property to a new instance of our CustomPager web control.
If you open up a browser and run the code, you’ll just get a standard PagingControl with the JavaScript and bad markup (since we haven’t overridden anything yet).
Lets start by adding an ordered list for the paging items.
To do this we have to override the CreatePagingItems, AddSelectedPagingLink and AddUnselectedPagingLink methods.
I added a private property to hold my ordered list (Container), so that I can easily add child controls to it. I also removed the calls to the AddLinkSpacing method (since we don't need it).
If you browse the page you’ll see that everything is inside an ordered list now.
To fix the JavaScript links I created a new method CreatePagingHyperLink that will create the HyperLink control with the correct url. I then added code in the OnInit method to get the query string and set the PageList's CurrentPagingItemIndex property.
I also added some CSS code to give you an idea of how easy it is to style and change the look.
The result
Adam Blomberg says: October 8, 2009 at 14:34
Thanks for an excellent tip which many of our SiteSeeker and EPiServer customers will appreciate! A minor tweaking would be to render the current page in the list not as a link but as pure text (as recommended here:). That would require changing the row:
var child = this.CreatePagingHyperLink(pagingIndex, text, altText);
to instead render text, and slightly modifying the CSS (“.PagingContainer a”).
Frederik Vig says: October 12, 2009 at 00:26
Thanks for that Adam! I’ve updated the AddSelectedPagingLink method and the CSS code.
Frederik
Peter says: December 9, 2009 at 17:00
I’ve created the CustomPager-class and assigned it to be used in a pagelist and allthough it renders correctly I get some navigation problems. When pressing the “2” to go to the second page in the listing I get a odd-looking version of the start page and the url is “…/Templates/Public/Pages/Articles.aspx?id=27&epslanguage=sv&p=1” instead of “…/sv/Artiklar/?p=2” that I assumed it would be.
I’ve fiddled around in the CreateUrl-function and the url-variable inside it and added the “.PathAndQuery” to the “HttpContext.Current.Request.Url”. I don’t know excactly what this does but it gives me the disered look of the url but all the links in the pagination ends with “?p=1”.
Any idea of what might be the problem?
Nulled Scripts says: December 23, 2009 at 00:17
Nice post..Keep them coming 🙂 Thanks for sharing.
Andrey Lazarev says: January 14, 2010 at 11:18
Hello, Frederik!
I’ve used your approach to implement the custom paging and it works OK, but…
Paging works correctly *ONLY* for logged-in users. For anynomous visitor – when you click “2” in paging the same first page will be just reloaded.
Any ideas why this could be?
Frederik Vig says: January 14, 2010 at 13:47
@Peter – Sorry for not getting back to you until now.. I’ve updated the code a little now to support both friendly and regular urls (just updated the CreateUrl method)
@Andrey – Sounds like a databinding problem.. have you tried using EPiServer paging control? I’m guessing you’ll get the same result with it. This is only guessing, I’ll need to take a look at your code to give a proper answer.
Andrey Lazarev says: January 14, 2010 at 15:21
Frederik, you mean – using not custom but ‘standard’ paging control? Yes, I tried this too – it is working.
The only difference from your code is that I didn’t placed the EPiServer.PageList control inside the page template but in the WebControl instead:
pageTemplate[ WebControl[PageList] ]
And now I’m trying to apply custom paging to this PageList.
Also I’ve selected to use simple links instead of …:
protected override LinkButton AddSelectedPagingLink(int pagingIndex, string text, string altText)
{
HtmlGenericControl cntrl = new HtmlGenericControl();
cntrl.Attributes.Add("class", this.CssClassSelected);
cntrl.InnerText = text;
HtmlGenericControl cntrlSeparator = new HtmlGenericControl();
cntrlSeparator.InnerHtml = " ";
this.HtmlContainer.Controls.Add(cntrl);
this.HtmlContainer.Controls.Add(cntrlSeparator);
return null;
}
protected override LinkButton AddUnselectedPagingLink(int pagingIndex, string text, string altText, bool visible)
{
HtmlGenericControl cntrl = new HtmlGenericControl();
HyperLink hlChild = this.CreatePagingHyperLink(pagingIndex, text, altText);
hlChild.CssClass = this.CssClassUnselected;
hlChild.Visible = visible;
HtmlGenericControl hlChildSeparator = new HtmlGenericControl();
hlChildSeparator.InnerHtml = " ";
cntrl.Controls.Add(hlChild);
cntrl.Controls.Add(hlChildSeparator);
this.HtmlContainer.Controls.Add(cntrl);
return null;
}
// Code for this:
private static string CreateUrl(int count)
{
UrlBuilder url = new UrlBuilder(HttpContext.Current.Request.Url.PathAndQuery);
Global.UrlRewriteProvider.ConvertToExternal(url, null, UTF8Encoding.UTF8);
return UriSupport.AddQueryString(url.ToString(), "p", count.ToString());
}
protected HyperLink CreatePagingHyperLink(int pagingIndex, string text, string altText)
{
HyperLink link = new HyperLink();
this.LinkCounter++;
link.ID = "PagingID" + this.LinkCounter;
link.NavigateUrl = CreateUrl(pagingIndex);
link.Text = text;
link.ToolTip = altText;
return link;
}
And – removed first-last, prev-next links
I have a public property in my webcontrol named ‘EnablePaging’ and I’m using it to select how PageList should be rendered:
if (EnablePaging == true)
{
this.epiNewsListSimple.Paging = true;
this.epiNewsListSimple.PageLink = newssource;
this.epiNewsListSimple.PagingControl = new {%namespace here%}.CustomPager();
this.epiNewsListSimple.PagesPerPagingItem = 3;
this.epiNewsListSimple.MaxCount = -1;
this.epiNewsListSimple.EnableViewState = true;
}
else
{
this.epiNewsListSimple.Paging = false;
this.epiNewsListSimple.PageLink = newssource;
this.epiNewsListSimple.MaxCount = 3;
}
this.epiNewsListSimple.DataBind();
Andrey Lazarev says: January 14, 2010 at 15:28
Re-tested once again – still same strange behavior. Standard paging is working for PageList and the custom one – isn’t. 🙁
Frederik Vig says: January 14, 2010 at 15:31
If you remove all the code inside the CustomPager class, so you only have something like this left.
The paging should work. After making sure it works try adding a few more methods and check that it still works, then some more etc, until you find the source of the problem.
Andrey Lazarev says: January 14, 2010 at 16:27
Frederik, it looks like a dumb joke, but I got it working making this change:
if (EnablePaging == true)
{
this.epiNewsListSimple.Paging = true;
this.epiNewsListSimple.PageLink = newssource;
this.epiNewsListSimple.PagingControl = new CustomPager();
this.epiNewsListSimple.PagesPerPagingItem = 3;
if (Request.QueryString["p"] != null)
{
this.epiNewsListSimple.PagingControl.CurrentPagingItemIndex = int.Parse(Request.QueryString["p"].ToString().Trim());
}
}
else
{
this.epiNewsListSimple.Paging = false;
this.epiNewsListSimple.PageLink = newssource;
this.epiNewsListSimple.MaxCount = 3;
}
this.epiNewsListSimple.DataBind();
So, for me it looks like changes to the paging control made from the custom class didn’t change anything in fact 🙁
Very strange…
Anyway – the control is working as expected now.
Øyvind says: January 29, 2010 at 13:34
Thanx for sharing. If the datasource of your news list is a pagedatacollection, you need to set the lstNewsList.PagingControl.CurrentPagingItemIndex before the Init in the custom pager. But else it works out of the box.
Peter says: February 2, 2010 at 11:33
I totally forgot to follow up on this page so no harm done. I’ve got it working now thanks to your friendly url-support. To start out I did have the same problem as Andrey had but it was easily fixed. Thanks!
Peter says: February 10, 2010 at 08:33
How would you go about to make the pager only show 10 pages at a time? E.g. “1 2 3 4 5 6 7 8 9 10 >”, “”. At the moment my pager shows 27 pages with 10 pages on each one of them and my guess is that when these get to about 45 the page listing will continue on outside the graphics and will be unreachable.
Vidar says: February 18, 2011 at 10:37
Thanks for this great article.
I’ve two questions.
1. I don’t get friendly URLs. I’ve tried removing if (UrlRewriteProvider.IsFurlEnabled)
and also changed
return UriSupport.AddQueryString(url.ToString(), “p”, count.ToString());
to
return UriSupport.AddQueryString(url.Uri.AbsoluteUri, “p”, count.ToString());
as in the friendly url article just to see if it’s friendly. But I still doesn’t get it.
2. How do I remove the div container?
Again, thank you for you support!
Vidar says: February 18, 2011 at 12:13
Hey! Problem 1. solved. That was only the case on my dev-domputer, once i put it in the test environment, the urls became friendly. | https://www.frederikvig.com/2009/09/creating-a-custom-episerver-paging-control/ | CC-MAIN-2018-09 | en | refinedweb |
Quite a few people seem to think that "trading the equity curve" is the answer. The basic idea is that when you are doing badly, you reduce your exposure (or remove it completely) whilst still tracking your 'virtual' p&l (what you would have made without any interference). Once your virtual p&l has recovered you pile back into your system. The idea is that you'll make fewer losses whilst your system is turned off. It sounds too good to be true... so is it? The aim of this post is to try and answer that question.
This is something that I have looked at in the past, as have others, with mixed results. However all the analysis I've seen or done myself has involved looking at backtests of systems based on actual financial data. I believe that to properly evaluate this technique we need to use large amounts of random data, which won't be influenced by the fluke of how a few back tests come out. This will also allow us to find which conditions will help equity curve trading work, or not.
This is the second post in a series on using random data. The first post is here.
How do we trade the equity curve?
I'm going to assume you're reasonably familiar with the basic idea of equity curve trading. If not, then its probably worth perusing this excellent article from futures magazine.
An equity curve trading overlay will comprise the following components:
- A way of identifying that the system is 'doing badly', and quantifying by how much.
- A rule for degearing the trading system given how badly it is doing
- A second rule for regearing the system once the 'virtual' account curve is 'doing better'
I've seen two main ways for identifying that the system is doing badly. The first is to use a straightforward drawdown figure. So, for example, if your drawdown exceeds 10% then you might take action.
(Sometimes rather than the 'absolute' drawdown, the drawdown since the high in some recent period is considered)
The second variation is to use a moving average (or some other similar filter) of the account curve. If your account curve falls below the moving average, then you take action.
(There are other variations out there, in particular I noticed that the excellent Jon Kinlay blog had a more complex variation)
As for degearing your system, broadly speaking you can either degear it all in one go, or gradually. Usually if we dip below the moving average of an equity curve then it is suggested that you cut your position entirely.

Whilst if you are using the current drawdown as your indicator, then you might degear gradually: for example cutting your risk by 20% when you hit a 10% drawdown; then by another 20% for a total of 40% when you hit a 20% drawdown, and so on.
Note that this degearing will be in addition to the normal derisking you should always do when you lose money; if you lose 10% then you should derisk your system by 10% regardless of whether you are using an equity curve trading overlay.
Finally the 'regearing' rule is normally the reverse of the degearing rule and process.
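To make the gradual version concrete, here is a minimal sketch of such a de-gearing schedule (the 20%-of-risk-per-10%-of-drawdown numbers are purely illustrative, not a recommendation):

```python
import math

def degearing_multiplier(drawdown, dd_step=0.10, cut_per_step=0.20):
    # Cut a further `cut_per_step` fraction of normal risk for every full
    # `dd_step` of drawdown; e.g. 20% off at a 10% drawdown, 40% off at a
    # 20% drawdown. Floored at zero exposure.
    steps = math.floor(drawdown / dd_step)
    return max(1.0 - cut_per_step * steps, 0.0)
```

So at a 5% drawdown you would still trade at full size, at a 12% drawdown at 80% of normal size, and at a 50% drawdown or worse you would be flat.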
Prior research
The idea of trading the equity curve is something that seems to have bypassed academic researchers (unless they are calling it something else - please write in if you know about any good research) so rather than any formal literature review I had a quick look at the first page of google.
Positive
adaptrade
crazy ivan (whether he's a reliable source I don't know...)
Jon Kinlay (though not the normal system)
Negative (or at least no clear benefit)
Futures magazine (and also here)
Anthony Garnett
futures.io
r-bloggers
Why random data?
I personally find the above research interesting, but not definitive one way or another. My main issue is that it was all done on different financial instruments and different kinds of trading systems, which unsuprisingly gave different results. This might be because there is something 'special' about those instruments where equity curve trading worked, but it's more likely to be just dumb luck. Note: It is slightly more plausible that different kinds of trading rules will give different results; and we'll explore this below.
I personally think that we can't properly evaluate this kind of overlay without using random data. By generating returns for different arbitrary trading strategies we can then judge whether on average equity curve trading will be better.
Another advantage of using random data to evaluate an equity curve overlay system is that we avoid potentially overfitting. If we run one version of the overlay on our system, and it doesn't work, then it is very tempting to try a different variation until it does work. Of course we could 'fit' the overlay 'parameters' on an out of sample basis. But this is quite a bit of work; and we don't really know if we'd have the right parameters for that strategy going forward or if they just happened to be the best for the backtest we ran.
Finally using random data means we can discover what the key characteristic of a trading system is that will allow trading the equity curve to work, or not.
Designing the test
Which overlay method?
There are probably an infinite variety of methods of doing an equity curve trading overlay (of which I touched on just a few above). However to avoid making this already lengthy post the size of an encylopedia I am going to limit myself to testing just one method. In any case I don't believe that the results for other methods will be substantially different.
I'm going to focus on the most popular, moving average, method:
"When the equity curve falls below it's N day moving average, turn off the system. Keep calculating the 'virtual' curve, and it's moving average during this period. When the 'virtual' curve goes back above it's moving average then turn the system back on"
That just leaves us with the question of N. Futures magazine uses 10, 25 and 40 days and to make life simple I'll do the same. However to me at least these seem incredibly short periods of time. For these faster N trading costs may well overwhelm any advantage we get (and I'll explore this later).
Also wouldn't it be nice to avoid the 3 year drawdown in trend following that happened between 2011 and 2013? Because we're using random data (which we can make as long as we like) we can use longer moving averages which wouldn't give us meaningful results if we tested just a few account curves which were 'only' 20 years long.
So I'll use N=10, 25, 40, 64, 128, 256, 512
(In business days; 2 weeks, 5 weeks, 8 weeks, 3 months, 6 months, 1 year, 2 years)
import pandas as pd

NO_OVERLAY = 1000 ## sentinel meaning 'no filter applied'

def isbelow(cum_x, mav_x, idx):
    ## returns 1 if cum_x>=mav_x at idx, 0 otherwise
    if cum_x.iloc[idx] >= mav_x.iloc[idx]:
        return 1.0
    return 0.0

def apply_overlay(x, N_length):
    """
    apply an equity curve filter overlay

    x is a pd.Series of returns
    N_length is the mav to apply

    Returns a new x with 'flat spots'
    """
    if N_length == NO_OVERLAY:
        return x

    cum_x = x.cumsum()
    mav_x = cum_x.rolling(N_length).mean()

    filter_x = pd.Series([isbelow(cum_x, mav_x, idx) for idx in range(len(x))], x.index)

    ## can only apply with a lag (!)
    filtered_x = x * filter_x.shift(1)

    return filtered_x
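For reference, the same filter can be written in a vectorised style against the modern pandas API (a sketch; it assumes x is a pd.Series of daily returns):

```python
import pandas as pd

def apply_overlay_vectorised(x, N_length):
    # Trade only when yesterday's cumulative curve was at or above its
    # N day moving average; otherwise zero out the day's return.
    cum_x = x.cumsum()
    mav_x = cum_x.rolling(N_length).mean()
    switched_on = (cum_x >= mav_x).astype(float)
    return x * switched_on.shift(1)
```

As with the loop version, the filter starts switched off for the first N_length days, while the moving average is still undefined.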
Which criteria?
A nice way of thinking about equity curve trading is that it is a bit like buying insurance, or in financial terms a put option on your system's performance. If your system does 'badly' then the insurance policy prevents you from losing too much.
One of my favourite acronyms is TINSTAAFL. If we're buying insurance, or an option, then there ought to be a cost to it. Since we aren't paying any kind of explicit premium, the cost must come in the form of losing something in an implicit way. This could be a lower average return, or something else that is more subtle. This doesn't mean that equity curve trading is automatically a bad thing - it depends on whether you value the lower maximum drawdown* more than the implicit premium you are giving up.
* This assumes we're getting a lower maximum drawdown - as we'll see later this isn't always the case.
I can think of a number of ways of evaluating performance which try and balance risk and reward. Sadly the most common, Sharpe Ratio, isn't appropriate here. The volatility of the equity curve with the overlay on will be, to use a technical term, weird - especially for large N. Long periods without returns will be combined with periods when return standard deviation is normal. So the volatility of the curve with an overlay will always be lower; but it won't be a well defined statistic. Higher statistical moments will also suffer.
Instead I'm going to use the metric return / drawdown. To be precise I'm going to see what effect adding the overlay has on the following account curve statistics:
- Average annual return
- Average drawdown
- Maximum drawdown
- Average annual return / average drawdown
- Average annual return / maximum drawdown
My plan is to generate a random account curve, and then measure all the above. Then I'll pass it through the equity overlay, and remeasure the statistics.
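As a sketch, the statistics can be computed from a daily return series like this (assuming 256 business days per year, with drawdowns measured off the additive cumulative curve in return units):

```python
import numpy as np
import pandas as pd

def curve_statistics(returns, days_per_year=256):
    # `returns` is a pd.Series of daily returns
    cum = returns.cumsum()
    drawdown = cum.cummax() - cum
    ann_return = returns.mean() * days_per_year
    avg_dd = drawdown.mean()
    max_dd = drawdown.max()
    return dict(
        ann_return=ann_return,
        avg_drawdown=avg_dd,
        max_drawdown=max_dd,
        ret_over_avg_dd=ann_return / avg_dd if avg_dd > 0 else np.nan,
        ret_over_max_dd=ann_return / max_dd if max_dd > 0 else np.nan,
    )
```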
Finally just to note that for most of this post I won't be considering costs. For small N, given we are closing our entire strategy down and then restarting it potentially every week, these could be enormous. Towards the end I will give you an idea of how sensitive the findings are to the costs of different trading instruments.
Which equity curve characteristics?
Broadly speaking the process for using random data is:
- Identify the important characteristics of the real data you need to model
- Calibrate against some real data
- Create a process which produces random data with the neccessary characteristics
- Produce the random data, and then do whatever it is you need to do
Notice that an obvious danger of this process is making random data that is 'too good'. In an extreme case with enough degrees of freedom you could end up producing 'random' data which looks exactly like the data you calibrated it against! There is a balance between having random data that is realistic enough for the tests you are running, and 'over calibrated'.
Identification
What characteristics of a trading system returns will affect how well an equity curve overlay will work? As in my previous post I'll be producing returns that have a particular volatility target - that won't affect the results.
They will also have a given expected Sharpe Ratio. With a negative Sharpe Ratio equity curve overlay should be fantastic - it will turn off the bad system. With a high positive Sharpe Ratio they will probably have no effect (at least for large enough N). It's in the middle that things will get more interesting. I'll test Sharpe Ratios from -2 to +2.
My intuition is that skew is important here. Negative skew strategies could see their short, sharp, losses reduced. Positive skew such as trend following strategies which tend to see slow 'bleeds' in capital might be improved by an overlay (and this seems to be a common opinion amongst those who like these kind of systems). I'll test skew from -2 (average for a short volatility or arbitrage system) to +1 (typical of a fast trend following system).
Finally I think that autocorrelation of returns could be key. If we tend to get losses one after the other then equity curve trading could help turn off your system before the losses get too bad.
Calibration
First then for the calibration stage we need some actual returns of real trading systems.
The systems I am interested in are the trading rules described in my book, and in this post: a set of trend following rule variations (exponentially weighted moving average crossover, or EWMAC for short) and a carry rule.
First skew. The stylised fact is that trend following, especially fast trend following, is positive skew. However we wouldn't expect this effect to occur at frequencies much faster than the typical holding period. Unsurprisingly daily returns show no significant skew even for the very fastest rules. At a weekly frequency the very fastest variations (2,8 and 4,16) of EWMAC have a skew of around 1.0. At a monthly frequency the variations (8,32 and 16,64) join the positive skew party; with the two slowest variations having perhaps half that.
Carry doesn't see any of the negative skew you might expect from say just fx carry; although it certainly isn't a positive skew strategy.
There is plenty of research showing that trend following rules produce returns that are typically negatively autocorrelated. One such paper suggests that equities have a monthly autocorrelation of around +0.2, whilst trend following autocorrelations come in around -0.3. Carry doesn't seem to have a significant autocorrelation.
My conclusion is that for realistic equity curves it isn't enough just to generate daily returns with some standard deviation and skew. We need to generate something that has certain properties at an appropriate time scale; and we also need to generate autocorrelated returns.
I'll show the effect of varying skew between -2 and +1, autocorrelation between -0.3 and +0.3, and Sharpe Ratio between -1 and +2.
How do we model?

In the previous post I showed how to generate skewed random data. Now we have to do something a bit fancier. This section is slightly technical and you might want to skip it if you don't care where the random data comes from, as long as it's got the right properties.
The classic way of modelling an autocorrelated process is to create an autoregressive AR1 model* (note I'm ignoring higher-order autoregression to avoid over-calibrating the model).
* This assumes that the second, third, ... order autocorrelations follow the same pattern as they would in an AR1 model.
So our model is:
r_t = Rho * r(t-1) + e_t
Where Rho is the desired autocorrelation and e_t is our error process: here it's skewed gaussian noise*.
* Introducing autocorrelation biases the other moments of the distribution. I've included corrections for this which works for reasonable levels of abs(rho)<0.8. You're unlikely to see anything like this level in a real life trading system.
This python code shows how the random data is produced, and checks that it has the right properties.
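For a flavour of what that involves, here is a minimal self-contained sketch of such a generator (numpy only; it gets skew from standardised gamma draws, and omits the moment corrections mentioned in the footnote above, so the realised skew is slightly attenuated for non-zero Rho):

```python
import numpy as np

def skewed_noise(n, skew, rng):
    # Standardised gamma draws have skew 2/sqrt(shape); pick the shape to
    # hit the target skew, flipping the sign of the draws for negative skew.
    if abs(skew) < 1e-6:
        return rng.standard_normal(n)
    shape = (2.0 / abs(skew)) ** 2
    e = (rng.gamma(shape, 1.0, n) - shape) / np.sqrt(shape)  # mean 0, sd 1
    return e if skew > 0 else -e

def ar1_returns(n, rho, skew, ann_sharpe, ann_vol=0.2, rng=None):
    # r_t = rho * r_{t-1} + e_t, plus a drift chosen to hit the target
    # Sharpe ratio; innovations are scaled so the unconditional standard
    # deviation of the process matches the daily vol target.
    rng = rng or np.random.default_rng(0)
    daily_vol = ann_vol / 16.0           # sqrt(256) business days a year
    daily_mean = ann_sharpe * ann_vol / 256.0
    e = skewed_noise(n, skew, rng) * daily_vol * np.sqrt(1.0 - rho ** 2)
    r = np.empty(n)
    r[0] = e[0]
    for t in range(1, n):
        r[t] = rho * r[t - 1] + e[t]
    return r + daily_mean
```

The unconditional daily standard deviation comes out at ann_vol/16 regardless of Rho, because the innovations are scaled down by sqrt(1 - Rho^2).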
Now, how do we deal with different behaviour at different frequencies? There are very complicated ways of dealing with this (like using a Brownian bridge), but the simplest is to generate returns at a time scale appropriate to the speed of the indicator. This implies weekly returns for carry and fast EWMAC rules (2,4 and 4,8); and monthly for slower EWMAC rules. If you're trading a very fast trading rule then you should generate daily data, if you see a clear return pattern at that frequency.
I'll use daily returns for the rest of this post, but I've checked that they still hold true at weekly and monthly frequencies (using equity curve filter lookbacks at least 3 times longer than the frequency of returns we're generating).
Results
To recap I am going to be TESTING different lookbacks for the equity curve filter; and I am going to be GENERATING returns with skew between -2 and +1, autocorrelation between -0.4 and +0.4, and Sharpe Ratio between -1 and +2.
I'll keep standard deviation of returns constant, since that will just change the overall scale of each process and not affect the results. I'll look at the results with daily returns. The results won't be significantly different with other periods.
All the code you need is here.
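To show how the pieces fit together, here is a compressed, self-contained version of a single cell of the test grid (gaussian returns only; skew and autocorrelation are handled by the full code linked above):

```python
import numpy as np
import pandas as pd

def return_over_avg_drawdown(x):
    # Annualised return divided by average drawdown, off the additive curve
    cum = x.cumsum()
    dd = cum.cummax() - cum
    return x.mean() * 256 / dd.mean()

def one_grid_cell(n_curves=100, n_days=2560, ann_sharpe=1.0, N_length=64, seed=0):
    # Average the statistic over many random curves, with and without the
    # moving average overlay, for one (Sharpe, N_length) combination.
    rng = np.random.default_rng(seed)
    raw, filtered = [], []
    for _ in range(n_curves):
        x = pd.Series(rng.normal(ann_sharpe * 0.2 / 256, 0.2 / 16, n_days))
        cum = x.cumsum()
        on = (cum >= cum.rolling(N_length).mean()).astype(float).shift(1)
        raw.append(return_over_avg_drawdown(x))
        filtered.append(return_over_avg_drawdown((x * on).fillna(0.0)))
    return np.mean(raw), np.mean(filtered)
```

Running it for each (Sharpe Ratio, N_length) combination, and averaging over many random curves, gives one point on each of the lines in the plots below.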
Sharpe Ratio
In this section I'll be varying the Sharpe Ratio whilst keeping the skew and autocorrelation fixed (at zero, and zero, respectively).
scenario_type="VaryingSharpeRatio"
period_length=1 ### daily returns
Average annual return
All of the plots that follow have the same format. Each line shows a different level of return characteristic (Sharpe Ratio in this case). The x axis shows the equity curve filter N day count that we're using for the moving average. Note that N=1000, which is always on the right hand side, means we aren't using a filter at all. The y axis shows the average value of the statistic of interest (in this case average annual return) across all the random equity curves that we generate and filter.
The good news is if you know your trading system is rubbish, then applying an equity curve system, preferably with large N, improves the performance. If you knew your system was rubbish then of course rather than use a complicated filter to turn it off you wouldn't bother turning it on at all! However for all profitable equity curves equity curve trading reduces, rather than increases, your returns.
Average drawdown
Maximum drawdown
For profitable systems there might perhaps be a modest reduction in maximum drawdown for small values of N. For loss making systems the biggest reduction in drawdown is for large values of N; although all filters are better than none.
Average annual return / average drawdown
Let's now try and put together the return and drawdown into a simple statistic. For unprofitable systems the overlay makes no difference. For profitable systems it reduces the drawdown adjusted return (the 'hump' at the right hand side of the SR=2.0 line is an artifact caused by the fact we can't calculate this statistic when the average drawdown is zero).
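For concreteness, the return and drawdown statistics being combined here can be computed from a cumulative equity curve as follows (my sketch, assuming additive daily returns and a 256-day year, not the author's code):

```python
import numpy as np

def drawdown_stats(returns):
    """Average annual return, average drawdown and maximum drawdown
    of a cumulative (additive) curve of daily returns."""
    equity = np.cumsum(returns)
    high_water = np.maximum.accumulate(equity)
    drawdown = high_water - equity        # always >= 0
    ann_return = returns.mean() * 256     # assuming 256 trading days a year
    return ann_return, drawdown.mean(), drawdown.max()
```

The two ratio plots are then `ann_return / avg_drawdown` and `ann_return / max_drawdown`, which is why they blow up when the drawdown is zero.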
Average annual return / maximum drawdown
Skew
In this section I'll be varying the Skew whilst keeping the Sharpe Ratio and autocorrelation fixed (at one, and zero, respectively).
scenario_type="VaryingSkew"
period_length=1 ## daily returns
Average annual return / average drawdown
Average annual return / maximum drawdown
Autocorrelation
In this section I'll be varying the Autocorrelation whilst keeping the skew and Sharpe Ratio fixed (at zero, and one, respectively).
scenario_type="VaryingAuto"
period_length=1
Average annual return
However, as we've already discussed, trend following systems seem to have negative autocorrelation. So it looks like equity curve overlays are a no-no for a trend following system.
Average drawdown
Again drawdowns are improved only if the autocorrelation is positive. They are much worse if it is negative (note that average drawdowns are smaller for negatively autocorrelated systems anyway).
Maximum drawdown
There is a similar picture for maximum drawdown.
Average annual return / average drawdown
Remember that I haven't included costs in any of these calculations. For information the annualised turnover added to each system by the equity curve filter ranges from around 23 for N_length 10 to 1.2 for N_length 512. With the cheapest futures I trade (standardised cost of 0.001 SR units per year, for something like NASDAQ) this is not a major problem, reducing average return with N length of 10 by around 1%.
* See chapter 12 of my book for details of how I calculate turnover and standardised costs
However let's see the results using a more expensive future, the Australian interest rate future, with a cost of around 0.03 SR units per year.
scenario_type="VaryingAutoMore"
period_length=1
annualised_costs_SR=0.03
Average annual return / maximum drawdown
Conclusion
The idea that you can easily improve a profitable equity curve by adding a simple moving average filter is, probably, wrong. This result is robust across different positive Sharpe Ratios and levels of skew. Using a shorter, faster moving average for the filter is worse than using a longer, slower one, even if we ignore costs.
There is one exception. If your trading strategy returns show positive autocorrelation then applying a filter with a relatively short moving average will probably improve your returns, but only if your trading costs are sufficiently low.
However if your strategy is a trend following strategy, then it probably has negative autocorrelation, and applying the filter will be an unmitigated disaster.
This is the second post in a series on using random data. The first post is here. The next post on portfolio optimisation is here.
Another solid post, Rob. Why do you prefer SR over the return/drawdown measure that was used in this post? Throughout the book, you use the SR to standardize risk, but isn't the ret/dd ratio a more specific measure of that?
I can think of a few reasons:
Interpretability
I have been using Sharpe Ratios a long time, and I don't really have a feel for other performance statistics. If you told me something had a Calmar ratio of 0.5 I wouldn't have a clue what you were talking about, and even if you explained that meant the average annual return to max drawdown was 0.5 I still wouldn't know if that was good or not.
Consistency
I create forecasts as proportional to expected returns scaled for standard deviation, which is basically a Sharpe Ratio. I also calculate risk for position scaling by using the standard deviation. Consistently using standard deviation as a measure of risk has its flaws, but at least I'm using the same measure throughout my system.
Symmetry:
Although the downfall of SR is that it assumes symmetry, I actually think it is a good thing to be as scared of an expected rise as a fall.
Statistical robustness:
When we calculate the standard deviation we use the whole distribution. With average drawdown we don't, and with maximum drawdown we use only a single point. Return / max drawdown in particular is a terrible statistic to use with real data, since what it comes out at is very much random.
I've tried using a simple MA on win rates for several mean-reversion systems, and the results have always been dismal. Except for that one time I forgot to check for a future leak, and that one worked like a charm! 😃 The problem, as I saw it, is that these systems all had a very high win rate, 60-75%, and so losing trades were few and far between. There was no clustering to speak of, so the switching on and off merely removed good trades.
-Matt
Interesting. I guess a system where you cut after the drawdown reached X% might protect you from a drawdown larger than you ever saw in simulation; i.e. the situation when the MR relationship breaks down.
Thanks Rob, I'll have to look into that. Haven't tried anything using the drawdown as a filter. I suspect that it might not work, but my intuition is usually wrong about trading. Which is why I prefer robots and computers. These sorts of shortish MR trades tend to have many smaller wins, with the occasional invigoratingly large loss. A system that shuts off on drawdown thresholds might simply reduce the number of little wins needed to scrape back into the black. But until I've tried it, I won't know.
Thanks for your comments and your blog. I've added it to my regular reading list.
ASP.NET MVC # 7 – Call ASP.NET MVC Controller method from a JavaScript function
Hi Friends,
As we know, ASP.NET MVC divides the web application into three different parts [Model, View, Controller], as we saw in the Model-View-Controller post.
ASP.NET MVC doesn't provide server-side events as in ASP.NET Web Forms, so we usually come across situations where we need to call a server-side method from the client side while using ASP.NET MVC.
For Example :
1) Call some server side method after selected index change event of a dropdown list.
2) Call Server side method on Change of a radio button.
To call a server-side method from JavaScript we use the $.ajax(url[, options]) method; for this you need references to jquery-1.4.1.min.js, MicrosoftAjax.js and MicrosoftMvcAjax.js in the view, like the following:
<script src="../../Scripts/jquery-1.4.1.min.js" type="text/javascript"></script>
<script src="../../Scripts/MicrosoftAjax.js" type="text/javascript"></script>
<script src="../../Scripts/MicrosoftMvcAjax.js" type="text/javascript"></script>
For example, we have a dropdown list ddlDept, and on its selected index change event we are calling the JavaScript method onDropdownChange.
<%=Html.DropDownList("ddlDept", new SelectList(Model.lstEmployee, "Emp_Number", "First_Name", 0), "Select", new { @onchange = "onDropdownChange(this);" })%>
// JavaScript method as follows (a minimal handler; the original body was lost in formatting):
function onDropdownChange(ddl) {
    $.ajax({ url: "Employee/ServerMethodName" });
}
The $.ajax function accepts its first parameter as the URL; here we have supplied the value “Employee/ServerMethodName”,
which means this call will invoke the server-side method ServerMethodName in the Employee controller, as follows.
public class EmployeeController : Controller
{
    public void ServerMethodName()
    {
        // Put your logic here
    }
}
For More on Microsoft technologies visit our site Dactolonomy of WebResource
Thanks.
Odoo Help
Odoo is the world's easiest all-in-one management software. It includes hundreds of business apps:
CRM | e-Commerce | Accounting | Inventory | PoS | Project management | MRP | etc.
Python Errors when Creating Warehouse
Hi Community,
I recently believe I may have found a bug in the warehouse system, wondering if anyone can help me confirm.
I recently installed a fresh copy of OpenERP 7 on Ubuntu server, and everything has been working great. However, today, I went into my warehouse settings, and clicked the option to manage multiple warehouses. Now, when I go to warehouses ---> configuration ---> warehouses, when I click on create warehouse I receive the following error popup:
OpenERP Server Error:

  File ..., line 1120, in _call_kw
    return getattr(req.session.model(model), method)
  File ..., line 1583, in default_get
    defaults[f] = self._defaults[f]
  File "/opt/openerp/v7/server/openerp/addons/stock/stock.py", line 2963, in _default_lot_output_id
    return lot_output_id
UnboundLocalError: local variable 'lot_output_id' referenced before assignment
I have made no module changes, and this is why this error is so surprising to me. Everything else has been working great. Can anyone else confirm this bug?
Thank you in advance for any help.
Sincerely,
Tim
That is because buggy code was committed to the stock module to get the default Output Location:
def _default_lot_output_id(self, cr, uid, context=None):
    try:
        lot_input_stock_model, lot_input_stock_id = self.pool.get('ir.model.data').get_object_reference(cr, uid, 'stock', 'stock_location_output')
        self.pool.get('stock.location').check_access_rule(cr, uid, [lot_input_stock_id], 'read', context=context)
    except (ValueError, orm.except_orm):
        # the user does not have read access on the location or it does not exists
        lot_output_id = False
    return lot_output_id
It should be like this:
def _default_lot_output_id(self, cr, uid, context=None):
    try:
        lot_output_stock_model, lot_output_stock_id = self.pool.get('ir.model.data').get_object_reference(cr, uid, 'stock', 'stock_location_output')
        self.pool.get('stock.location').check_access_rule(cr, uid, [lot_output_stock_id], 'read', context=context)
    except (ValueError, orm.except_orm):
        # the user does not have read access on the location or it does not exists
        lot_output_stock_id = False
    return lot_output_stock_id
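Stripped of the Odoo specifics, the failure mode is the classic try/except name mismatch; a minimal illustration (hypothetical names, not Odoo code):

```python
def buggy_default(fail=False):
    try:
        lot_input_id = 42              # note: *input*, not output
        if fail:
            raise ValueError("no read access")
    except ValueError:
        lot_output_id = False          # only bound on the error path
    return lot_output_id               # UnboundLocalError when try succeeds

def fixed_default(fail=False):
    try:
        lot_output_id = 42             # same name on both paths
        if fail:
            raise ValueError("no read access")
    except ValueError:
        lot_output_id = False
    return lot_output_id
```

Note the error only appears when the lookup *succeeds*, because only the except branch ever binds the name being returned; that is why the traceback points at the `return` line rather than at the lookup.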
I'm updating this with the bug that Tim filed. We arrived at the same solution as Dharmesh Patel.
Hello Tim,
That's not a bug, it's just that the system can't find the default location data.
As you can see on
Addons>Stock>stock_data.xml
There are two records that are going to be inserted on the db:
<record id="stock_location_output" model="stock.location">
    <field name="name">Output</field>
    <field name="location_id" ref="stock_location_company"/>
    <field name="usage">internal</field>
    <field name="chained_location_type">customer</field>
    <field name="chained_auto_packing">transparent</field>
    <field name="chained_picking_type">out</field>
    <field name="chained_journal_id" ref="journal_delivery"/>
</record>
<record id="stock_location_stock" model="stock.location">
    <field name="name">Stock</field>
    <field name="location_id" ref="stock_location_company"/>
</record>
And the error that is showing up for you is because this function can't find it.
def _default_lot_input_stock_id(self, cr, uid, context=None):
    lot_input_stock = self.pool.get('ir.model.data').get_object(cr, uid, 'stock', 'stock_location_stock')
    return lot_input_stock.id

def _default_lot_output_id(self, cr, uid, context=None):
    lot_output = self.pool.get('ir.model.data').get_object(cr, uid, 'stock', 'stock_location_output')
    return lot_output.id

_defaults = {
    'company_id': lambda self, cr, uid, c: self.pool.get('res.company')._company_default_get(cr, uid, 'stock.inventory', context=c),
    'lot_input_id': _default_lot_input_stock_id,
    'lot_stock_id': _default_lot_input_stock_id,
    'lot_output_id': _default_lot_output_id,
}
So my advice is: go to
Configuration>Installed Modules and look for
stock module... Update it and it's going to work. If that doesn't work maybe there is a problem with stock module and you have to download it again.
Thank you for the advice. However, as noted in a comment above by Ray Carnes, this is an issue with openERP. For anyone else looking at the post, the issue is not fixed by the above answer.
About This Community
Odoo Training Center
Access to our E-learning platform and experience all Odoo Apps through learning videos, exercises and Quizz.Test it now
When did you install? The build from today is broken - trunk addons is downloaded when you try to download 7.0 addons.
I updated through bzr today. It looks like my server refuses to download the new version of the warehouse software. Its stuck in an old version it looks like... What should I do?
Whats the difference between trunk addons and 7.0 addons?
You need to wait until OpenERP fixes it. The bzr revisions it SHOULD be letting you download are shown at http://runbot.openerp.com - trunk is the version AFTER version 7.0 (i.e. an alpha/beta release of 7.1 or 8.0 or whatever they call the next version).
Is something like this an urgent issue for the team? My current software is now unusable... Also, is this affecting all modules, or only the warehouse module?
Anything is an urgent issue for customers with an OpenERP Warranty (aka OpenERP Enterprise Pricing). If you have such a warranty or contract, contact OpenERP.
Try again today, there have been two new builds since the one that didn't work.
How deep a recurse?
By clive on Aug 01, 2009
Chris has been exploring various limits of a lab M8000. Inspired by this (well, umm, also maybe bored on a conf call) and prompted by a Twitter update on Google and recursion from Alec (don't recall if I read it first on his blog or Twitter), I got thinking about how deep you can recurse on a modern system. So I wrote some code. The marginally tricky bit was setting up, with sigaltstack, the alternate stack to handle the signal on.
#include <unistd.h>
#include <stdio.h>
#include <signal.h>
#include <stdlib.h>
#include <sys/resource.h>
#include <sys/mman.h>

static void handler();
static void recurse(void);

static int depth = 0;

int main()
{
        struct sigaction act;
        struct rlimit rlp;
        stack_t ss;

        getrlimit(RLIMIT_STACK, &rlp);
        printf("RLIMIT_STACK = %u:%u\n", rlp.rlim_cur, rlp.rlim_max);

        act.sa_handler = handler;
        sigemptyset(&act.sa_mask);
        act.sa_flags = 0;
        act.sa_flags |= SA_RESETHAND|SA_SIGINFO|SA_ONSTACK;
        if (sigaction(SIGSEGV, &act, NULL) < 0) {
                perror("sigaction failed");
                exit(1);
        }
        if ((ss.ss_sp = mmap(NULL, SIGSTKSZ, PROT_READ | PROT_WRITE,
            MAP_PRIVATE | MAP_ANON, -1, 0)) == MAP_FAILED) {
                perror("mmap failed");
                exit(1);
        }
        ss.ss_size = SIGSTKSZ;
        ss.ss_flags = 0;
        if (sigaltstack(&ss, NULL) < 0) {
                perror("sigaltstack failed");
                exit(1);
        }
        recurse();
}

static void recurse(void)
{
        depth++;
        recurse();
}

void handler(void)
{
        printf("depth = %u\n", depth);
        exit(0);
}
A first attempt on a MacBook with OS X gave a number of 524030. We then moved to Solaris Nevada 110 running on one of our x86 lab systems. Also tried the S10 Sparc stable server. The Sparc numbers are a lot smaller than the x86 numbers. The Sparc numbers are similar on Solaris 10 or Nevada. What a great microbenchmark this would make to base purchases on: how deep can a system recurse with no function arguments passed. Many purchasing decisions have been made on the results of benchmarks of similar relevance to the business problem in hand, so let's not dismiss it totally. Anyway, back to reality.
On a Solaris 10 Sparc box we get
ebusy(5.10)$ cc -o recurse recurse.c
ebusy(5.10)$ ./recurse
RLIMIT_STACK = 1000000000:1000000000
depth = 10416627
ebusy(5.10)$ cc -m64 -o recurse recurse.c
ebusy(5.10)$ ./recurse
RLIMIT_STACK = 1000000000:1000000000
depth = 5681792
ebusy(5.10)$ uname -a
SunOS ebusy 5.10 Generic_137137-09 sun4u sparc SUNW,Sun-Fire
and on the Nevada x86 lab system we get
exdev(5.11)$ cc -o recurse recurse.c
exdev(5.11)$ ./recurse
RLIMIT_STACK = 2147483647:2147483647
depth = 16812966
exdev(5.11)$ cc -m64 -o recurse recurse.c
exdev(5.11)$ ./recurse
RLIMIT_STACK = 1000000000:1000000000
depth = 62499512
exdev(5.11)$ uname -a
SunOS exdev 5.11 snv_110 i86pc i386 i86pc
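A back-of-the-envelope check on why the Sparc numbers are so much smaller: dividing RLIMIT_STACK by the depth reached gives the implied stack cost per frame, and on SPARC that lands (by my reading of the ABI) exactly on the architecture's minimum register-window save area, 96 bytes for 32-bit V8 and 176 bytes for 64-bit V9, while the x86-64 frame for this empty function is a tiny 16 bytes:

```python
# Implied per-frame stack cost = stack limit / recursion depth reached.
results = {
    "SPARC 32-bit": (1000000000, 10416627),
    "SPARC 64-bit": (1000000000, 5681792),
    "x86 32-bit":   (2147483647, 16812966),
    "x86 64-bit":   (1000000000, 62499512),
}
for name, (limit, depth) in results.items():
    print(f"{name}: ~{limit / depth:.0f} bytes per frame")
# SPARC 32-bit: ~96 bytes per frame
# SPARC 64-bit: ~176 bytes per frame
# x86 32-bit: ~128 bytes per frame
# x86 64-bit: ~16 bytes per frame
```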
I am sure there are some games to play with increasing the hard stack limit or allocating the alternate stack a huge segment of memory and recursing through that as well. However, over 62 million stack frames is adequate for most recursive situations which will complete.
Interesting that compilation with -xO2 or higher leads to assembler that does nothing: the code just sits in a loop without ever calling the function below.
exdev(5.11)$ pstack `pgrep recur`
17659:  ./recurse
 0000000000400f80 recurse ()
exdev(5.11)$
So a few interesting questions that will have to wait for the next conf call where I don't need to pay too much attention.
All the includes have gone. If you build it 64-bit then you will be limited just by the amount of memory in the system.
Posted by Chris Gerhard on August 01, 2009 at 11:18 AM BST #
Thanks Chris, includes fixed. Even a 64 bit build still has an hard rlimit for the stack, though bigger. The second set of results were built 64 bit.
Posted by Clive King on August 01, 2009 at 11:38 AM BST #
Just don't ustack() it :-) . Actually, we'd stop at 2048 frames by default, so a long way from where you got to...
Posted by Jon Haslam on August 01, 2009 at 03:08 PM BST #
Posted by Clive King on August 03, 2009 at 05:13 AM BST #
Microsoft Ponders Shared-Sourcing SQL Server 194
i_frame writes "C|net is reporting in an interview with Tom Rizo, director of product management in Microsoft's SQL server unit, that 'the company is thinking about including the forthcoming SQL Server 2005 in Microsoft's shared-source program for disclosing product source to customers'. Is Microsoft reinventing themselves, and are they ready to learn the benefits of open source?" From the article: "It's not finalized. It's not anything there, but if a lot of customers demand it, we'll definitely look at doing shared source with SQL Server..."
Share Source is not shared (Score:5, Informative)
Re:Share Source is not shared (Score:5, Interesting)
Re:Share Source is not shared (Score:1)
Bloody whiners.
Re:Share Source is not shared (Score:5, Insightful)
Being able to look at select chunks of code but not being able to modify anything or recompile is of nominal value. I'm really not sure why anyone would want to do that. It sounds more like a PR initiative, so that MS can technically say that they've embraced "open source".
Re:Share Source is not shared (Score:3, Insightful)
There are plenty of SHARED SOURCE licenses out there, like HydraIRC's. Do you bitch and moan about that? NO.
How often do you MODIFY code? I am a software design engineer and I rarely modify the code of projects available on Sourceforge.net.
A lot of the time the amount of effort required to maintain it is a bitch unless you are over 80% confident in the code; otherwise you are just plain and simply hacking and poking and prodding the product hoping the fix works.
Frequency of modification (Score:3, Insightful)
I usually do so unhappily, bitching and moaning the whole time, as I'd prefer not to have to - but if I need a customisation for my site that's not configurable, I'll still modify the product if necessary.
I also fix the odd problematic bug and provide a patch with my bug report. As someone who does OSS development work, I *know* how happy that makes the developers.
That said, I'm working under different constraints than those that apply to a company buying MS software.
Re:Share Source is not shared (Score:1, Informative)
WTL [sourceforge.net]
They donated this to sourceforge.
Quit yer whining. zealot.
Re:Share Source is not shared (Score:2)
Re:Share Source is not shared (Score:3, Interesting)
To make it easier for them to find security exploits?
No, I'm serious, and I'm talking about security-conscious users as well as people attempting to break into computers. If you can't modify or reuse the code, isn't security auditing the only other reason to want to look at it?
Perhaps that's why Microsoft only wants to release code a ch
Re:Share Source is not shared (Score:5, Informative)
You do not get a complete copy of the source. You get large chunks... enough to examine the code, but not enough to compile a working product.
Modification is a no-no. Even sending code modifications to Microsoft is against the license. You may NOT modify code or write patches against the code.
You absolutely may NOT incorporate shared code into anything. If you've seen MS source code, you must wash your eyes and cleanse your brain so as not to inadvertently introduce MS code into other projects. Some would say it goes as far as not participating in GPL projects.
Shared source is to appease the customer who wants the ability to evaluate the code and audit its safety. It goes something like "purchase XXX licenses, and we'll show you the source code. Of course, if you don't like the poor quality of the code, you don't get a refund, just that sinking feeling that you're screwed."
Re:Share Source is not shared (Score:1, Insightful)
Shared source is to appease the customer who wants the ability to evaluate the code and audit its safety.
Why do customers think this works? If you have a partial source tree and you cannot compile it to the binaries that you run on your servers, then no matter how much source the company gives you it is still not the binaries you are running.
Is this trustworthy?
Re:Share Source is not shared (Score:3, Insightful)
You've got it. That's Trusted Computing in a nutshell. Trusted isn't about a warm fuzzy feeling, it's a statement of what you've done. You run the stuff, you're trusting Microsoft.
Re:Share Source is not shared (Score:5, Interesting)
It can't be for the curious either, as many curious hackers would then be 'tainted' as people have said, and unable to continue with their own projects in case they get sued for copying Microsoft's code.
'Shared Source' must be doing something correct, otherwise it wouldn't still be here. What is it doing right?
Re:Share Source is not shared (Score:4, Interesting)
Govt OSS Advocate says "But OSS software is better because everyone can see and review the source code".
MS says: "You can see ours as well".
It's certainly answering some of the criticisms against closed source, but it's still 100% missing the point of OSS.
Re:Share Source is not shared (Score:4, Insightful)
The Govt OSS Advocate should have said "But OSS software is better because everyone can see and adapt the source code". MS just says "You can see ours as well, but don't you dare try to accomplish anything with it."
Re:Share Source is not shared (Score:2, Interesting)
"Shared Source" has value in and of itself. Just because it is not the same value as open source is no reason to dismiss it. If you don't want it, don't use it.
Re:Share Source is not shared (Score:4, Insightful)
You already answered it:
You cannot put bits of it into your own projects...and if you do, Microsoft will move to shut you down. Such a threat is real enough for the Samba team:
In order to avoid any potential licensing issues we also ask that anyone who has signed the Microsoft CIFS Royalty Free Agreement not submit patches to Samba, nor base patches on the referenced specification.
The conspiracy theorist in me says Microsoft hopes (L)GPL projects will be contaminated by exposure to their code. The more cross-pollination, the more Open Source they can shut down and bully.
Re:Share Source is not shared (Score:3, Interesting)
I mean um... tehrs no point, M$ is eevil OMFG!
Re:Share Source is not shared (Score:5, Insightful)
Shared Source has a purpose that is not yet fully revealed. Until then, we won't really know if it is doing something right or not.
The purpose of Shared Source is to poison open source projects. It is hoped that one day, some non-trivial bit of Shared Source will "somehow" find its way into a major open source project. Then the lawsuits and injunctions can begin.
Despite how badly the fiaSCO is going, it has demonstrated two things very clearly.
You do understand how this works don't you?
Traditionally, developers treat source with great secrecy. You don't want your competitors to gain advantage by studying your work. The above two scenarios are the ONLY reason that the "gain unfair advantage" would not be a consideration. Microsoft would have to be hoping for this to happen. At the same time, Microsoft has no real commercial competitors who could secretly make use of shared source. It is only against Open Source that Microsoft could consider Shared Source to be a weapon -- because they can study our source.
What would traditionally be a drawback of letting your competitors see your secrets becomes an advantage to Microsoft because: (1) they have no real commercial competitors, and (2) when some real or alleged infringement takes place, they can prove it, unlike with a closed source competitor.
Ergo, Shared Source is only a weapon against open source. It has never been about any other purpose. Microsoft is not in the business of "sharing", they are out to make money. They expect the "sharing" to have an eventual return -- and a huge one. The "risk" that Microsoft is taking is something that they want us to perceive to be real.
Re:Share Source is not shared (Score:2)
How much useful stuff goes undone as a result?
Re:Share Source is not shared (Score:2)
Ultimately I think I agree with many others here, in that the main point is probably political -- so that Microsoft can look as if it's doing something good without it really being very useful.
That said, I think it is still slightly better than having no source at all. For one thing, it's possible to examine the APIs more closely and get a better idea of what's going on behind them. Sometimes this can be very useful, especially if the documentation's missing something im
Re:Share Source is not shared (Score:3, Interesting)
(Hold on while I get my tin-foil hat on)
Since the money they put into SCO is fizzling, maybe this is their next attempt. Release code into the open (not "open source" open, just that some non-MS people have access to it), wait a few years
Re:Share Source is not shared (Score:2)
1. Your computer (CPU, RAM, etc...) is the wood, nails and screws
2. The GNU tool chain (gcc, ld, make, etc...) are the hammer, saw and screwdrivers
3. YOUR OWN IDEAS and the resultant code are the blueprints
I don't see anyone hauling off DIY folks to court.
You forgot (Score:4, Insightful)
Because you can't compile the code, you have no way to verify that it is even the right source code.
The only thing you will get is *some* source code. It might be from a 5-year-old version of the product, it might even be from another product.
Re:Share Source is not shared (Score:3, Interesting)
I used to have a really good contact with Microsoft, we were running FreeBSD at the time (Linux nowadays) and were quite happy, they suggested we port our stuff to NT so we could 'evaluate' the whole windows thing, they'd pay our way.
So, free NT licenses and MSDN subscriptions and all the other goodies we're slaving away to make this thing work, just to give them the benefit of the doubt (I'm all for looking at the evidence) and guess what ? YOU CAN'T DO IT
Re:Share Source is not shared (Score:3, Insightful)
"I just want the best environment for my application to be built on"
The build environment is not the same as the deployment environment.
If your Web application is so tied to one Unix environment that it is impossible to move then I suggest you have pro
Re:Share Source is not shared (Score:4, Insightful)
Transplanting from PHP to ASP is more than a little bit of work, Apache leaves IIS in the dust, so you need more hardware.
If I can't even get decent performance in the 'lab', and the tools don't let me tune the server to perform at least as good under BSD (or linux for that matter) then why bother throwing it in front of the lions ?
What happens in the open source world is something like this: Developer X is working on some project, needs a feature (say server-status in apache), adds it to the source, compiles and tests it until it works for him, submits the DIFF to the apache crew so he won't need to do it again next time he rebuilds the latest sources, it gets accepted, he feels good, they feel good, the product just got better. You try to get MS to include one of your 'improvements' or even a suggestion of one into IIS. Good luck
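The "submits the DIFF" step above is worth making concrete: a unified diff is just text describing the change, which upstream can review, apply or reject. A sketch with made-up file contents and hypothetical paths:

```python
# Illustrative only: building the kind of patch a contributor would submit.
import difflib

original = ["line one\n", "line two\n"]
patched  = ["line one\n", "line two\n", "server-status feature\n"]

patch = "".join(difflib.unified_diff(
    original, patched,
    fromfile="a/modules/status/mod_status.c",   # hypothetical paths
    tofile="b/modules/status/mod_status.c",
))
print(patch)
```

Nothing comparable exists for shared source: the license forbids producing or submitting exactly this artifact.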
If you throw enough hardware at the problem it will eventually go away, I don't doubt that (and besides eBay there are quite a few other large companies that can 'afford' to run Windows as their server platform).
It's just that *I* can't afford that strategy and for a small operation like the one we are running (but with a significant web presence) windows is simply not an option due to the above concerns.
Big companies have less of a problem with wasting some money, some are actually quite good at it !
And I really gave it a good try, came away quite disappointed.
FWIW I'm handling some 2000 database-driven page hits per SECOND.
I'm sure EBAY does a lot more than that but not on a puny little farm like mine.
Re:Share Source is not shared (Score:2)
Their dynamic content is mostly through a non-MSFT application server.
First, they can go away any day with minimum effort.
Second, they can use higher level abstractions compared to ASP/PHP as a result of that. This is the other approach to the problem which is more typical of people with the size of eBay. Once your business is that big you are working from a different set of premises:
Re:Share Source is not shared (Score:2)
Re:Share Source is not shared (Score:2)
Re:Share Source is not shared (Score:2, Informative)
Re:Share Source is not shared (Score:5, Interesting)
If Microsoft are serious here, they've got a couple of different options:-
1) Use a license like the APSL or Mozilla License, which from memory does have a few commercial stipulations.
2) Come up with their own version of something like the LGPL, in the sense that there are terms with regards to specifically where the source can and can't be used.
3) Use the loss leader approach. Find something they don't really care about losing too much, (most likely something in their dev department, since that's not their primary bread and butter) and put it under the BSD license. Bill has already been quoted at one of his keynotes as saying that he likes the BSD license, or at least prefers it to the GPL, and he could earn himself some major PR points if he decides to prove it in practical terms...and good PR is something that Microsoft needs as much of as it can get these days. This would also help a few other people. It could score some free PR for FreeBSD, and if Bill was really smart he could even ally with the FreeBSD Foundation and Apple with the goal of driving back the GPL somewhat...Something which I for one wouldn't necessarily see as a bad thing. Stallman gives himself far too much credit for FOSS in general...the man is in dire need of being put squarely back in his box, in my opinion. More promotion of the BSD and other licenses could go a long way towards demonstrating to him that the world does not in fact need him anywhere near as much as he likes to think. I'm aware the GPL zealots will now materialise howling out of the woodwork and mod me a troll, as they generally do when I express this kind of opinion...but they are welcome to mod me a troll as much as they like...it won't silence me.
Re:Share Source is not shared (Score:3, Interesting)
If they tried that with the Linux IP stack, they would have to put the rest of the nT kernel under GPL - that's what's wrong with GPL, he can't make money off other people's work without giving something back in return.
Re:Share Source is not shared (Score:2)
A few of the minor userland Windows tools were also BSD tools. And as an aside, a large portion of the Services for Unix tools are actually GNU tools - not just BSD (I've been told the percen
Re:Share Source is not shared (Score:3, Informative)
That's what patents are for, and MS has been known to file quite a few lately. Also, they have the option of isolating what they consider their 'most innovative' pieces in libraries still hidden from view. Finally, if you are good enough to get ideas from them without incurring copyright infringement by inadvertently doing derivative work from unconscious memory afterwards, you are probably worth your weight i
Re:Share Source is not shared (Score:2)
>their supporters against each other for no good
>reason.
Er...I think that fight's already been started [gnu.org]...and not by me. Stallman thinks everyone is entitled to his opinion, and his opinion only. That to me, by definition, is not freedom.
This [63.249.85.132] might interest you as well...it talks about some of the other, more practical problems associated with the GPL...Stallman's megalomaniacal egotism notwithstanding.
Before you also accuse me of doing exactly the
Re:Share Source is not shared (Score:1)
Re:Share Source is not shared (Score:2)
Open source, but not free to use... (Score:4, Insightful)
Re:Open source, but not free to use... (Score:1)
Avoid shared source (Score:1, Interesting)
Still interesting to see how Linux/Apache/MySQL/PostgreSQL is shadowing Microsoft - they are giving IIS away free, they have to sell WS 2003 Web Edition cheaper than XP Home, and now they have to give SQL Server away for free... MS users should be happy about the competition.
But Shared Source is a hideous "Have a look, don't touch, and definitely don't touch any competing product after looking at this". Nice if you are a researcher, but it escapes me why do r
Re:Open source, but not free to use... (Score:2)
It is my guess that they'll open source the whole of windows long before they'll 'shared source' the office file formats. The lock in of the market is based on this file compatibility and you'll never have 100% as long as those formats are not public.
Myself, I'm for forced legislation that states that as soon as a certain file format gains
Whatever (Score:5, Insightful)
Re:Whatever (Score:5, Funny)
Geez, you anti-american zealots...
Tom
Re:Whatever (Score:5, Interesting)
On the other hand, if Microsoft "embraced" enough of the open-source philosophy that it placated corporate customers, won't that be a significant blow to the rise of linux?
I doubt those corporate customers are interested in all the feel-good benefits of open source. The feel-good benefits are probably the most difficult for Microsoft to adopt. If I had to guess on what "shared-source" really means, I would guess "Beating linux and open source at its own game in order to solidify the corporate market."
Microsoft are not pondering anything (Score:4, Insightful)
Re:Microsoft are not pondering anything (Score:2)
Re:Microsoft are not pondering anything (Score:5, Funny)
You can consider this a request.
Dear valued Microsoft customer (Score:5, Funny)
As part of our Shared-Source[tm] initiative, you have requested to see the main SQL server[tm] source code.
We at Microsoft[tm] strive to meet customer demands. As part of the Shared-Source[tm] initiative, we are happy to disclose parts of our source code, in stages, after approval of our Customer's requests.
Your request has been approved. Please find attached to this email the main SQL server[tm] source code.
We hope this source code disclosure meets your requirements. The next scheduled disclosure will happen in 450 days.
Regards,
Joe Blow, Customers Satisfaction Manager, Microsoft Corp.
/*
PROJECT: SQL_SERVER
FILE: main.c
*/
#include <common.h>
main(int argc, char **argv)
{
start_sqlserver(argc,argv);
}
Re:Dear valued Microsoft customer (Score:4, Funny)
Re:Dear valued Microsoft customer (Score:5, Funny)
Re:Dear valued Microsoft customer (Score:1)
Re:Dear valued Microsoft customer (Score:1)
int wmain(int argc,wchar_t **argv)
UTF-16 is Windows native, UTF-8/ANSI is slower.
Re:Dear valued Microsoft customer (Score:2)
shared source (Score:3, Interesting)
Re:shared source (Score:1)
Anything else I can help you with?
sybase (Score:3, Interesting)
Re:sybase (Score:3, Funny)
Underpant gnome problem solved (Score:5, Insightful)
2) Let customers spot and fix all bugs, but don't give them the right to use the code they write.
3) Charge same customers again for new and improved product.
4) Profit!
At least until they find out what Free software is really all about... at which point the game is up.
Re:Underpant gnome problem solved (Score:1)
Re:Underpant gnome problem solved (Score:2)
I can't argue with that.
Re:Do you know MS SQL server at all? (Score:2)
is it one time look (Score:2, Interesting)
Dont hold your breath. (Score:3, Informative)
Too much risk for them. Just imagine the next 'slammer worm'...
Honestly Great News (Score:5, Insightful)
This means that open source is really and truly getting a serious chunk of the market.
Personally, I've been using PostgreSQL in situations where I'd otherwise be using SQL Server if PostgreSQL did not exist. PostgreSQL is phenomenally powerful and robust. And, for those who want to go the Windows route, its new Windows installer is so user-friendly that it approaches SQL Server in that department.
Gift of polution (Score:4, Informative)
But the bigger concern is that by opening their source code, every open source database is now subject to a lawsuit from MS, claiming that it misappropriated some for-loop or comment line that appeared in SQL Server.
IMHO, the open-source DBs are catching up to SQL Server just fine, and would be far better off without the lawsuit risks associated with MS exposing its source code.
Re:Gift of polution (Score:2)
Is this useful? (Score:2)
When they did this to ATL 7, that seemed useful since that is a lightweight library that developers commonly call into. A C++ developer could trace into it and it would help them figure out a crash in their app, or contribute bug fixes/improvements to ATL7.
I want access to the source for libraries that I call into directly such as MFC. That would help me debug MFC applications better. Shared source of IE would help me figure out why
Re:Is this useful? (Score:2)
that would be against the license of shared source, you can't really do anything with the source.
the real purpose of it is just another checkmark on the evaluation paper when considering them against an open source rival.
Re:Is this useful? (Score:2)
You do know that the MFC source is available, right? Comes with the compiler. Back when I worked a straight job a co-worker of mine actually found a nasty bug in it that was causing us all sorts of problems. He ended up building a patched version and we shipped that with the product until MS fixed it (he reported the bug, supplied the fix).
NO (Score:2)
NO. Messages like the above only serve to confuse and distract. Microsoft's shared-source scheme is nothing like open-source.
PostgreSQL 8.0 for Win32 (Score:1)
Another step in the right direction!! (Score:2)
shared source is a trap (Score:5, Informative)
The only way to guard against those claims is not to look at other people's source code unless the license not only permits you to look but explicitly permits you to reuse. Open source licenses do that, shared source licenses don't.
Shared source isn't new. AT&T UNIX and DEC VMS were "shared source", for example. Companies hand out shared source licenses because they are too cheap to fix their own bugs and want to get bug reports with fixes from customers, because they want customers to be tied more closely to their product (making it harder to switch), because they want others to do their porting work for them, and/or because they actually want to lay traps for open source developers.
If you have looked at any shared source source code under a non-open source license, do not work on any related open source or proprietary project; you would be putting those projects in jeopardy. Do not be fooled by "shared source" that's downloadable with a click-through: it may look like open source at first glance, but whether it's downloadable or whether you have to go into a room with five lawyers and sign an elaborate agreement may make some difference if it came to a court case, but it doesn't change the principle. Furthermore, most of those cases won't get to court: your future employer or open source project will probably unceremoniously dump you if there is even a hint that you have looked at shared source.
In other words, before you look at some company's proprietary source code, think carefully whether you want that company to own a piece of your brain for the rest of your life, because that's what it comes down to.
Re:shared source is a trap (Score:1)
Re:shared source is a trap (Score:3, Insightful)
Many other open source projects and many companies have similar rules. If the issue arises in a company, they may try to find another internal position where your previous exposure to such source code doesn't create a legal liability for them; of course, that position may be less interesting and
Re:shared source is a trap (Score:2)
Re:shared source is a trap (Score:2)
Not legally; the open source licenses are pretty clear on that point, and open source is, by its very nature, not trade secret. The legal problems that can result from looking at non-open source code are real.
I imagine MS wouldn't hire any Linux kernel developers.
MS hired some of the people who developed the original (open source) Mach kernel. Microsoft has also hired other open source developers (e.g. the develop
Re:shared source is a trap (Score:2)
There's nothing unique to proprietary source about that, though. I could just as easily release some code under the GPL, wait, then go after anyone releasing code that does something similar too.
True, most lone coders and independent projects wouldn't have the money to sue, but what of larger companies, such as IBM or Apple? Just because they're playing nice now doesn't mean th
Re:shared source is a trap (Score:2)
No, the two situations differ.
Open source licenses satisfy explicit requirements (see) that protect you from such claims; the nature and aims of open source software almost make that necessary.
Shared source licenses, on the other hand, usually impose restrictions that cause legal problems if y
What about Timeline? (Score:2)
transparency, not openness. (Score:3, Insightful)
Open source is, in contrast, a democratic government, run by the people. Open source isn't about "opening" your source. Open source projects are community driven, designed for and by the people.
If Microsoft wants to share its SQL server source, they must ensure:
a) That the whole thing is released so people can compile it at home,
b) Support the community requests to change this or that part of the code
and most important, c),
NOT use this as a weapon to end the competition. How do we know that they won't sue open source projects because one of their developers has even glimpsed at Microsoft code?
Call it FUD if you like, but as much as Bill says the GPL can infect projects, I fear that the "Microsoft shared code" will "infect" open source projects so that Bill can sue them all and vanquish the competition.
If you view "shared" source you're forever tainted (Score:2, Informative)
Have a nice career - my company won't even interview anyone who's signed one of those "agreements" that allow folks to see M$ code. You have to sign an affidavit that you've never done such a thing to work with us.
Lets just look at why they are doing it: (Score:3, Interesting)
2) Open source is a big buzz word, something each IT manager is worrying his job over.
3) Open source is seen as growing competition against M$, they want to remove any unique selling points
4) pressure from gov's looking to switch to open source
IBM have opensourced a DB, sun have/are about to.
So Microsoft invent shared source... I think they were forced to do this... so they went along... it is pathetic at least.
Now they are trying to use their 'shared source' to confuse the unwashed masses that Microsoft has the benefits of open source... the best of both worlds... pathetic shit like that.
still, doesn't work on me.
Re:Lets just look at why they are doing it: (Score:2)
In summary: when IBM and Sun open source something, they do it for real, and when Microsoft does it, they would just as well have not done it.
It took Sun years to produce OpenSolaris. They had their team of lawyers on it, studied the problem, went with the CDDL for many reasons, and, finally, after five years, will release a full open source operating system. And the fruit of their efforts is an OS that should basically be immune from patent lawsuits--this is a good thing.
IBM most definitely went throug
Re:Lets just look at why they are doing it: (Score:2)
And, if you read later in groklaw, they actually
Cold War Victory (Score:2)
It's going to be hard for Microsoft to talk out of both sides of its propaganda mouth on "Shared Source". They've got 3 points they hammer Open Source on:
1> No corporate accountability
But there are big, sueable companies which specialize in open source support contracts: IBM, Novell, RedHat. Their bizmodel is exactly consistent with Microsoft's whining that SW TCO comes from the support costs, not the purchase. While Microsoft's model treats support as
The real reasons? (Score:2)
Many above have mentioned that Shared Source is a one way system. It only benefits the owner (Microsoft), by having lots of eyes (and brains) on their code.
Ingres source was also opened recently. It did not do them much good. Hope that Microsoft learns the lesson there.
This is mainly a PR ploy: they want to say that they are "open" too, and they are putting out the source like others do, so they are like Linux et. al.
You know..... (Score:2)
SQL Server is the only product to my knowledge that performs reasonably well, is incredibly stable and is probably the least affected by malicious attacks. (yes I know that's still a lot of attacks, just less than windows/iis/ie)
It's so touchy opening a product up that's in use already in the market. At least in opensource, there's a public alpha and beta and people have a chance to work out some of the bugs/exploits
Microsoft showing crackers its code... (Score:2)
A: Of course not.
Yet this is what Open Source software has been doing for years.
The Shared Source way of allowing select users to check code for flaws is fine; but, surely one of the greatest benefits of Open Source is that anybody can see it?
Secure coding is mandatory for popular Open Source software - it's a prime target!
Open Source software can stand up to being thrown to the masses, yet Microsoft prefers security through obscurity [wikipedia.org].
Sha
ohhhh (Score:2)
It's already shared source - with Sybase (Score:2)
Re:It's already shared source - with Sybase (Score:2)
Since the vast majority of the SQL Server codebase was straight from Sybase...
Version 4.2 of SQL Server, which ran originally on OS/2, was a joint Sybase/MS product. MS then made the decision that OS/2 was not the platform of the future and ported 4.2 to NT (this is in 1992). Indeed ported might not be the best way to describe it, because it involved a huge amount of re-writing of code, including the kernel. O
Don't bother! (Score:2)
There is an argument for security by obscurity. I am completely unconvinced by it, but it's there. So now you take a product that is highly dependent upon obscurity for its security and you let (world - dog) check it out. Now the set of people who can audit for vulnerabilities is larger. Oooh - I'm sure there's no economic espionage coming from China! I'm sure there's no maladjusted contract programmer at THIS Fortune 1000 company going to share the shared source on IRC. But
Shared Source - The Microsoft Definition... (Score:2)
and what's yours is ours too...
Implications of Shared Source MSSQL (Score:2)
Re:Implications of Shared Source MSSQL (Score:2)
What would happen i MS really did embrace FLOSS? (Score:2)
Of course they wouldn't make everything open source. What impacts would a REAL change in strategy mean for the community?
GJC
Shared Source has *NOTHING* to do with Open Source (Score:2, Informative)
Postgres (Score:2)
Microsoft is just finally doing something to fight against PostgreSQL, which finally has a fast and easy install for Windows machines.
iofunc_attr_t
I/O attribute structure
Synopsis:
#include <sys/iofunc.h>

typedef struct _iofunc_attr {
    IOFUNC_MOUNT_T           *mount;
    uint32_t                  flags;
    int32_t                   lock_tid;
    uint16_t                  lock_count;
    uint16_t                  count;
    uint16_t                  rcount;
    uint16_t                  wcount;
    uint16_t                  rlocks;
    uint16_t                  wlocks;
    struct _iofunc_mmap_list *mmap_list;
    struct _iofunc_lock_list *lock_list;
    void                     *list;
    uint32_t                  list_size;
#if !defined(_IOFUNC_OFFSET_BITS) || _IOFUNC_OFFSET_BITS == 64
 #if _FILE_OFFSET_BITS - 0 == 64
    off_t                     nbytes;
    ino_t                     inode;
 #else
    off64_t                   nbytes;
    ino64_t                   inode;
 #endif
#elif _IOFUNC_OFFSET_BITS - 0 == 32
 #if !defined(_FILE_OFFSET_BITS) || _FILE_OFFSET_BITS == 32
  #if defined(__LITTLEENDIAN__)
    off_t                     nbytes;
    off_t                     nbytes_hi;
    ino_t                     inode;
    ino_t                     inode_hi;
  #elif defined(__BIGENDIAN__)
    off_t                     nbytes_hi;
    off_t                     nbytes;
    ino_t                     inode_hi;
    ino_t                     inode;
  #else
   #error endian not configured for system
  #endif
 #else
  #if defined(__LITTLEENDIAN__)
    int32_t                   nbytes;
    int32_t                   nbytes_hi;
    int32_t                   inode;
    int32_t                   inode_hi;
  #elif defined(__BIGENDIAN__)
    int32_t                   nbytes_hi;
    int32_t                   nbytes;
    int32_t                   inode_hi;
    int32_t                   inode;
  #else
   #error endian not configured for system
  #endif
 #endif
#else
 #error _IOFUNC_OFFSET_BITS value is unsupported
#endif
    uid_t                     uid;
    gid_t                     gid;
    time_t                    mtime;
    time_t                    atime;
    time_t                    ctime;
    mode_t                    mode;
    nlink_t                   nlink;
    dev_t                     rdev;
} iofunc_attr_t;
Since:
BlackBerry 10.0.0
Description:
The iofunc_attr_t structure describes the attributes of the device that's associated with a resource manager. The members include the following:
- mount
- A pointer to a structure containing information about the mountpoint. By default, this structure is of type iofunc_mount_t, but you can specify your own structure by changing the IOFUNC_MOUNT_T manifest.
- flags
- Flags that your resource manager can set to indicate the state of the device. This member is a combination of the following flags:
- IOFUNC_ATTR_ATIME
- The access time is no longer valid. Typically set on a read from the resource.
- IOFUNC_ATTR_CTIME
- The change of status time is no longer valid. Typically set on a file info change.
- IOFUNC_ATTR_DIRTY_NLINK
- The number of links has changed.
- IOFUNC_ATTR_DIRTY_MODE
- The mode has changed.
- IOFUNC_ATTR_DIRTY_OWNER
- The uid or the gid has changed.
- IOFUNC_ATTR_DIRTY_RDEV
- The rdev member has changed, e.g. mknod().
- IOFUNC_ATTR_DIRTY_SIZE
- The size has changed.
- IOFUNC_ATTR_DIRTY_TIME
- One or more of mtime, atime, or ctime has changed.
- IOFUNC_ATTR_MTIME
- The modification time is no longer valid. Typically set on a write to the resource.
In addition to the above, your resource manager can use in any way the bits in the range defined by IOFUNC_ATTR_PRIVATE (see <sys/iofunc.h>).
- lock_tid
- The ID of the thread that has locked the attributes. To support multiple threads in your resource manager, you'll need to lock the attribute structure so that only one thread at a time is allowed to change it.
The resource manager layer automatically locks the attribute (using iofunc_attr_lock()) for you when certain handler functions are called (i.e. IO_*).
- lock_count
- The number of times the thread has locked the attribute structure. You can lock the attributes by calling iofunc_attr_lock() or iofunc_attr_trylock(); unlock them by calling iofunc_attr_unlock()
A thread must unlock the attributes as many times as it locked them.
- count
- The number of OCBs using this attribute in any manner. When this count is zero, no one is using this attribute.
- rcount
- The number of OCBs using this attribute for reading.
- wcount
- The number of OCBs using this attribute for writing.
- rlocks
- The number of read locks currently registered on the attribute.
- wlocks
- The number of write locks currently registered on the attribute.
- mmap_list and lock_list
- To manage their particular functionality on the resource, the mmap_list member is used by iofunc_mmap() and iofunc_mmap_default(); the lock_list member is used by iofunc_lock_default(). Generally, you shouldn't need to modify or examine these members.
- list
- Reserved for future use.
- list_size
- Size of reserved area; reserved for future use.
- nbytes
- The number of bytes in the resource; your resource manager can change this value.
For a file, this would contain the file's size. For special devices (e.g. /dev/null) that don't support lseek() or have a radically different interpretation for lseek(), this field isn't used (because you wouldn't use any of the helper functions, but would supply your own instead.) In these cases, we recommend that you set this field to zero, unless there's a meaningful interpretation that you care to put to it.
- inode
- This is a mountpoint-specific inode that must be unique per mountpoint. You can specify your own value, or 0 to have the Process manager fill it in for you. For filesystem type of applications, this may correspond to some on-disk structure. In any case, the interpretation of this field is up to you.
- uid and gid
- The user ID and group ID of the owner of this resource. These fields are updated automatically by the chown() helper functions (e.g. iofunc_chown_default()) and are referenced in conjunction with the mode member for access-granting purposes by the open() helper functions (e.g. iofunc_open_default()).
- mtime, atime, and ctime
- POSIX time members:
- mtime — modification time (write() updates this).
- atime — access time (read() updates this).
- ctime — change of status time (write(), chmod() and chown() update this).
One or more of the three time members may be invalidated as a result of calling an iofunc-layer function. To see if a time member is invalid, check the flags member. This is to avoid having each and every I/O message handler go to the kernel and request the current time of day, just to fill in the attribute structure's time member(s).
To fill the members with the correct time, call iofunc_time_update().
- mode
- The resource's mode (e.g. type, permissions). Valid modes may be selected from the S_* series of constants in <sys/stat.h>; see "Access permissions" in the documentation for stat().
- nlink
- The number of links to this particular name; your resource manager can modify this member. For names that represent a directory, this value must be at least 2 (one for the directory itself, one for the ./ entry in it).
- rdev
- The device number for a character special device and the rdev number for a named special device.
Classification:
Last modified: 2014-11-17
NAME
getdirentries, getdents - get directory entries in a file system independent format
LIBRARY
Standard C Library (libc, -lc)
SYNOPSIS
#include <sys/types.h>
#include <dirent.h>

int getdirentries(int fd, char *buf, int nbytes, long *basep);

int getdents(int fd, char *buf, int nbytes);
DESCRIPTION
The getdirentries() and getdents() system calls read directory entries from the directory referenced by the file descriptor fd into the buffer pointed to by buf, in a file system independent format. Up to nbytes of data will be transferred, as a series of dirent structures. The getdirentries() system call also writes the position of the block read into the location pointed to by basep.
If successful, the number of bytes actually transferred is returned. Otherwise, -1 is returned and the global variable errno is set to indicate the error.
ERRORS

The getdirentries() system call will fail if:

[EBADF] The fd argument is not a valid file descriptor open for reading.

[EFAULT] Either buf or basep point outside the allocated address space.

[EINVAL] The file referenced by fd is not a directory, or nbytes is too small for returning a directory entry or block of entries, or the current position pointer is invalid.

[EIO] An I/O error occurred while reading from or writing to the file system.
SEE ALSO
lseek(2), open(2)
HISTORY
The getdirentries() system call first appeared in 4.4BSD. The getdents() system call first appeared in FreeBSD 3.0. | http://manpages.ubuntu.com/manpages/maverick/man2/getdirentries.2freebsd.html | CC-MAIN-2015-40 | en | refinedweb |
Type: Posts; User: Fides Facit Fortis
not sure how to check if it's "bug-free", but I've never had any problems with it.
Here's more info:
gcc (tdm-1) 4.7.1 (year 2012)
GNU GCC compiler
Well, I did it. I debugged the nickname part too because sometimes it takes away the first 1 or 2 characters from the nickname as well. Here's what I've got:
The nickname part:
...
Sorry for not responding but I was away from home yesterday.
Following your advice I changed this part of code:
DWORD WINAPI ReceiverThread(LPVOID ClientSocketDeskryptor) //Thread used for...
if you mean the only buffer I'm using
...
DWORD WINAPI ReceiverThread(LPVOID ClientSocketDeskryptor) //Thread used for handling server response
{
char message[220]; //the only buffer in...
Actually I am not even using char arrays, I'm using strings which I then pass to the send() function using string::c_str() which returns the C string version(which is a null-terminated char array)
...
Pretty much. I placed a breakpoint in every single line and added every variable to the watch list. At the end shortly after sending the 1st message(nickname) the client crashes but only in debug...
Heh, thank you but I already know what a Segmentation fault is. In fact I've been using this exact site before. The question is how do I fix it. I see no buffer overflows, null-ptr dereferences or...
In case you wanted the entire source code:
#include <winsock2.h>
#include <cstdio>
#include <iostream>
#include <windef.h>
#include <windows.h>
#include <cstring>
using namespace std;
Hello wise people!
Got a problem with my winsock client program. It loses parts of input. Here's the important (IMO) part of the code:
...
string nickname;
...
Uhm sorry for the trouble, I've managed to find the solution myself.
Here are some screenshots
In the form's designer
32551
In the control's designer
32553
Hello.
The project I'm currently working on REQUIRES me to create a UserControl. I won't explain why I have to use it, but believe me, I have a good reason to do so.
Anyway, I created one and...
THX!
whoa thx a lot. Wondering why my book doesn't contain this information. It's so simple and useful!
I made controls in my form scale with the form's size upon load( so for example a button has 10% in form's width and 5% in form's height etc) so it should always fit into the form no matter what the...
Well, I'm not very experienced in C++/CLI programming and I have no idea how to make scrollbars, so I thought that the easiest way to deal with the problem of resizing my form would be to make it...
So,as the title says, I need my form to be either constantly maximized or constantly having it's maximized size, but the first option is prefered. What I've been trying to do is:
1. I've set form...
OK so I've moved my code from InitializeComponent to form's constructor,but that didn't change anything.
Hello.
I'm new to C++/CLI programming and I recently encountered a problem. It's probably so basic that I've failed to find it on the web...
int ScreenX,ScreenY;
void...
Thx,you are my Guru :thumb:
You see, the problem is that the bar I'm trying to implement needs to have custom properties.I want it to have similiar functionality to overlap buttons in browsers. Each one is responsible for one...
Wow,thx for a quick response.
Yeah, this is propably due to the fact that my book describes both .NET and WinApi programming so I sometimes get lost.
I'll check this out,thanks.
Hello.
I'm new to winapi but I've read a book about it already. The problem is that my Visual Studio 2012 considers some of the stuff mentioned in my book as an error. For example:
Array^ arr =... | http://forums.codeguru.com/search.php?s=0824e9ef79dbcc90d5620bdf4f1d14c6&searchid=7941513 | CC-MAIN-2015-40 | en | refinedweb |
Details
Description
I received a request to remove JCC testing from the derby suite. The user had a very old jcc version in their classpath 2.4 and 10.5 tests were failing with:
com.ibm.db2.jcc.c.SqlException: DB2 SQL error: SQLCODE: -1, SQLSTATE: XJ040, SQLERRMC: Failed to start database '/results/axxon/58712/laka10a-derby-m101-20100830-003810/derbyall/derbynetmats/DerbyNet/derbynetmats/dblook_test_net_territory//wombat', see the next exception for details.::SQLSTATE: XJ001Java exception: 'Access denied (java.util.PropertyPermission com.ibm.crypto.provider.FIPSMODE read): java.security.AccessControlException'.
at com.ibm.db2.jcc.c.o.a(o.java:3219)
at com.ibm.db2.jcc.a.cb.q(cb.java:653)
at com.ibm.db2.jcc.a.cb.p(cb.java:541)
at com.ibm.db2.jcc.a.cb.l(cb.java:363)
at com.ibm.db2.jcc.a.cb.d(cb.java:145)
at com.ibm.db2.jcc.a.b.Sb(b.java:1274)
at com.ibm.db2.jcc.a.b.a(b.java:1166)
at com.ibm.db2.jcc.a.b.q(b.java:934)
at com.ibm.db2.jcc.a.b.a(b.java:702)
at com.ibm.db2.jcc.a.b.<init>(b.java:305)
at com.ibm.db2.jcc.DB2Driver.connect(DB2Driver.java:162)
at java.sql.DriverManager.getConnection(DriverManager.java:322)
at java.sql.DriverManager.getConnection(DriverManager.java:273)
at org.apache.derby.tools.dblook.go(Unknown Source)
at org.apache.derby.tools.dblook.<init>(Unknown Source)
at org.apache.derbyTesting.functionTests.tests.tools.dblook_test.lookThree(dblook_test.java:417)
at org.apache.derbyTesting.functionTests.tests.tools.dblook_test.runTest(dblook_test.java:283)
at org.apache.derbyTesting.functionTests.tests.derbynet.dblook_test_net_territory.doTest(dblook_test_net_territory.java:65)
at org.apache.derbyTesting.functionTests.tests.derbynet.dblook_test_net_territory.main(dblook_test_net_territory.java:41)
Now that I look at it more closely, their actual problem might be on the server side and JCC just reporting it but good to get the JCC tests out of the mix when people accidentally have it in their classpath anyway.
Activity
For the removal from the JUnit tests, I would suggest that you go through the callers of JDBCClient.isDB2Client() and BaseJDBCTestCase.usingDB2Client(), and finally remove those two methods themselves. I think this will remove most of the JCC-specific code in the JUnit tests.
There's also a less frequently used method called DerbyJUnitTest.usingDB2Client().
re removal of derbynetmats suites...You can't just remove those files; you need to add the contents of derbynetmats.runall to derbynetclientmats.runall, and replace the 'derbynetmats' suite reference in derbynetclientmats.properties with 'jdbcapi jdbc20'.
Good point. Thanks Myrna I didn't notice those were linked up.
Assigning to myself
Attaching the patch and stat files for JCC Removal.
checking the patch available box so committers get automatically reminded of a patch waiting for review/commit.
Thanks Jayaram for the patch! I committed it to trunk with revision 1054146. Next I suggest at removing JCC from the JUnit tests per Knut's earlier comment.
Happy New Year!
Patch submitted after removal of references to "usingDB2Client" in Junit test classes.
Hi Jayaram,
On the JCCRemoval_Jan112011.txt patch, my only comment would be that although I am usually a big fan of comments, I think in this case, the comments like
//Derby 4785 removed usingDB2Client as a part of JCC removal
and previously existing comments about JCC and DB2 can just be removed.
The svn history will show clearly that JCC was removed and hopefully at the end of this effort we just won't find DB2 and JCC anywhere.
I don't see anything in the change that should cause your tests to stop abruptly. Were there any javacore files left or anything like that. If not I would suggest, make the comment changes and rerun suites.All. If it still won't run on your machine I will give the new patch a spin on mine.
Inline comments for jcc usingdb2client removal have been updated as per review .
Hi Jayaram,
I was reviewing your latest patch and it doesn't seem like it compiles. Could you check what's wrong with it? I'll commit it after it's fixed.
Thanks
This is the error I got by the way:
junitcomponents:
[javac] Compiling 1 source file to /Users/tiago/Desktop/Derby/cleanTrunk/classes
[javac] /Users/tiago/Desktop/Derby/cleanTrunk/java/testing/org/apache/derbyTesting/junit/TestConfiguration.java:1241: cannot find symbol
[javac] symbol : variable DB2CLIENT
[javac] location: class org.apache.derbyTesting.junit.JDBCClient
[javac] jdbcClient = JDBCClient.DB2CLIENT;
[javac] ^
[javac] 1 error
BUILD FAILED
/Users/tiago/Desktop/Derby/cleanTrunk/build.xml:584: The following error occurred while executing this line:
/Users/tiago/Desktop/Derby/cleanTrunk/java/testing/build.xml:59: The following error occurred while executing this line:
/Users/tiago/Desktop/Derby/cleanTrunk/java/testing/org/apache/derbyTesting/junit/build.xml:74: Compile failed; see the compiler error output for details.
Total time: 8 seconds
Patch for removal of JCC references. Also the issue which occurred in previous patch build has been addressed.
I'm attaching a patch that removes handling of JCC and Java 1.3 in the compatibility test. JCC was only needed when using 10.0.2.1 on the client side in client/server compatibility testing, since 10.0.2.1 didn't have its own client driver. The patch makes the compatibility test skip combinations with 10.0.2.1 on the client side. The 10.0.2.1 server is however still tested.
I've tested the patch by running the compatibility test both from the Ant script and from the JUnit suite, and it passed in both configurations.
Committed d4785-compat-1a.diff to trunk with revision 1161422.
As an initial patch for removing all JCC references, removed all the references to the IsDB2Client method in test classes.
Thank you for your continued interest in Derby!
I looked visually at the .diff file from the patch and I have the following comments:
- a number of the tests have a lot of whitespace differences. In particular is the case for the tests jdbcapi/HoldabilityTest.java, jdbcapi/ConcurrencyTest.java, jdbcapi/DriverTest.java, jdbcapi/BlobClob4BlobTest.java, jdbcapi/ProcedureTest.java.
It seems to me that you've reformatted those tests with a tab as the indent, instead of the 4 spaces as per our formatting convention.
Typically, we do not touch code that we don't have to - it makes the diff huge, and comparing the actual changes impractical.
- some of the tests that have only minimal changes still have unnecessary whitespace changes, e.g.
jdbcapi/XATransactionTest.java. In fact, I don't see why that test needed to be changed at all?
From the .diff file:
--------------------------
@@ -25,6 +25,7 @@
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
+
import java.sql.Statement;
import javax.sql.XAConnection;
import javax.sql.XADataSource;
@@ -426,6 +427,7 @@
assertEquals(XAException.XAER_RMFAIL, xae.errorCode);
}
}
+
/**
----------------------------
If there's no real reference to DB2Client or JCC here this file should get svn revert-ed.
- A number of the changes remove the if clauses relating to DB2Client, but leave the comments in place. The comments should go also. This is for instance the case with lang/TableFunctionTest.java. From the .diff:
--------------------
{
// skip this test if using the DB2 client, which does not support the
// JDBC4 metadata calls.
- if ( usingDB2Client() ) { return; }
-
println( "\nExpecting correct function metadata from " + functionName );
ResultSet rs = getFunctions( null, "APP", functionName );
JDBC.assertFullResultSet( rs, expectedGetFunctionsResult, false );
---------------------
The comment lines '//skip this test if using the DB2 client, which does not support the" and "// JDBC4 metadata calls." should also be removed.
Similar comments are in many of the other tests.
Attaching the updated patch as per the previous comments
Thank you Jayaram for this patch.
There were still a few minor nits which I fixed up; a few references to failing with DB2Client, BlobClob4BlobTest had been modified since your last refresh and so I just redid that one, and I removed further references to JCC in the javadoc of a few tests, in particular, NSSecurityMechanismTest.
Committed with revision 1244295.
Is there more work to be done here?
I'm thinking that there are still some suite files for the old harness that can be removed.
e.g. functionTests/suites/DerbyNet.exclude, DerbyNetUseProcess.exclude, j9derbynetmats (we're not really supporting j9 with networkserver anyway), etc.
Clearing patch available flag.
There seems to be some more work left to do on this issue.
Found that DerbyNet.exclude had references to 2 files, rsgetXXXcolumnNames.java and SetQueryTimeoutTest.java. But rsgetXXXcolumnNames.java didn't have any references to JCC. So I am curious to know how DerbyNet.exclude is linked with JCC removal.
I think the link to JCC is that DerbyNet.exclude lists the tests to skip if we run the old test harness with the JCC driver. Since we don't support running tests against JCC anymore, we don't need the files that tell which tests would fail when run against JCC.
I think this issue is practically done, just a few more deletes; the final files in the suites directory that refer to DerbyNet and the DerbyNet canon directory in functionTests/master:
D java\testing\org\apache\derbyTesting\functionTests\master\DerbyNet
D java\testing\org\apache\derbyTesting\functionTests\master\DerbyNet\holdCursorJDBC30.out
D java\testing\org\apache\derbyTesting\functionTests\master\DerbyNet\Stream.out
D java\testing\org\apache\derbyTesting\functionTests\master\DerbyNet\synonym.out
D java\testing\org\apache\derbyTesting\functionTests\master\DerbyNet\dblook_test_net.out
D java\testing\org\apache\derbyTesting\functionTests\master\DerbyNet\dblook_test_net_territory.out
D java\testing\org\apache\derbyTesting\functionTests\master\DerbyNet\maxfieldsize.out
D java\testing\org\apache\derbyTesting\functionTests\master\DerbyNet\holdCursorExternalSortJDBC30.out
D java\testing\org\apache\derbyTesting\functionTests\master\DerbyNet\holdCursorIJ.out
D java\testing\org\apache\derbyTesting\functionTests\suites\DerbyNetRemote.exclude
D java\testing\org\apache\derbyTesting\functionTests\suites\j9derbynetmats.properties
D java\testing\org\apache\derbyTesting\functionTests\suites\DerbyNetUseprocess.exclude
D java\testing\org\apache\derbyTesting\functionTests\suites\DerbyNet.exclude
Commit 1492887 from Myrna van Lunteren
DERBY-4785; Remove JCC tests and references to JCC in test code
Removing the last few files from the suites dir and the entire master/DerbyNet directory.
[bulk update] Close all resolved issues that haven't been updated for more than one year.
On the mailing list Jayaram asked for first steps on this issue.
I think a good first step would be to remove the derbynetmats suite from derbyall.
Basically all that would need to be done for this first patch would be to remove derbynetmats from
java/testing/org/apache/derbyTesting/functionTests/suites/derbyall.properties, remove the derbynetmats.properties and derbynetmats.runall files, and run tests to make sure there is no unexpected impact.
Then as Dag suggests we can do the bare minimum in RunTest, RunSuite etc. to make sure it doesn't get loaded, and finally do a more careful removal of the jcc references in the JUnit test infrastructure, but I think the first step would be just to remove derbynetmats from the suite. | https://issues.apache.org/jira/browse/DERBY-4785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel | CC-MAIN-2015-40 | en | refinedweb |
Forum:Too many namespaces?
From Uncyclopedia, the content-free encyclopedia
Hi,
Before you all pull out your torches, I'd just like to say that this is not a complaint about all of UN's sideprojects.
Anyway, we have too many namespaces. And by that, I'm not talking about all the ones that everybody uses, like the mainspace, UnTunes:, UnNews:, UnBooks:, etc. etc. I'm also not talking about the ones that no one really uses but we need anyway, such as Help:, MediaWiki:, Uncyclopedia:, User:, File:, Template:, and Category:.
What I'm talking about is, if you search for something and search under "advanced," then you'll find that there are four namespaces that nobody really uses and that we do not need. These are:
- Message Wall: (thought Wikia left us out of those)
- Message Wall Greeting: (same as above)
- RelatedVideos: (what the fuck is that)
- Thread: (we already have Forum:, dumbasses)
We don't need those; therefore, I say we get rid of 'em. And this is coming from a user who's barely ever on. Who's with me?
--
Кıяву Тαгк Сойтяıвs 2012-08-31T00:25
Vote
For. I was wondering what the hell those were and it's just unnecessary to keep them. I'd like to see a Blog: namespace. —qzekrom.net16.net clicky! 00:59, August 31, 2012 (UTC)
- Oh right, everything has to involve a vote nowadays. In that case,
- Report the bug to Wikia. Also, for. -- Brigadier General Sir Zombiebaron 05:41, August 31, 2012 (UTC)
- For I only just discovered those namespaces:45, August 31, 2012 (UTC)
- Ya -- 22:05, August 31, 2012 (UTC)
Update from Wikia
Like an African dictator I declare all of your votz null and void because this is not intentional, this is as Zombiebaron mentioned, a bug. So I am filing a report with our engineers to fix the issue which should remove these namespaces from view. I will be sure to let them know you hold the RelatedVideos module in such high regard. --DaNASCAT (talk) 19:10, September 5, 2012 (UTC)
- @DaNASCAT What IS the RelatedVideos module anyway? --- 2012-09-05T22:54 | http://uncyclopedia.wikia.com/wiki/Forum:Too_many_namespaces%3F?oldid=5690417 | CC-MAIN-2015-40 | en | refinedweb |
java.lang.Object
oracle.jdevimpl.audit.profile.AuditDialog
public class AuditDialog
The Audit and Metrics run dialog. This dialog is displayed when an Audit or Metrics command is invoked and allows a user to select the profile to be used by the command. A profile is a set of properties for all the rules or all the metrics registered with Audit. Profiles are saved in the IDE system directory and are known to users by their simple name.
An Edit button displays the Audit or Metrics Profile dialog to create, delete, or edit profiles.
ProfileModel,
Profile
public AuditDialog(java.lang.String description)
public Profile show()
protected java.awt.Component createComponent()
public java.awt.Component getComponent()
public void actionPerformed(java.awt.event.ActionEvent event)
actionPerformed in interface java.awt.event.ActionListener
| http://docs.oracle.com/cd/E35521_01/apirefs.111230/e17493/oracle/jdevimpl/audit/profile/AuditDialog.html | CC-MAIN-2015-40 | en | refinedweb |
Type: Posts; User: AlanGRutter
Hi gurus,
I have a web site that uses the AJAX Control Toolkit v3.020820.28853 and master pages. Everything works absolutely perfectly on my development machine, however when I deploy the site to...
I believe it should get picked up by garbage collection.
Personally, IMHO, I believe responsibility for closing the FileStream lies within the scope of the object that created it. If used on a...
What you are doing is multicasting, which requires a UDP connection and special IP address ranges. I suggest you start with a Google search for UDP multicast and see what turns up.
Regards
Alan
The linker is telling you that your function is defined twice in different object modules.
It has already encountered a publicly visible function body for CallbackProc that takes two longs as...
An ABC acts solely as a base class for inherited classes. You cannot instantiate an ABC - you must instantiate one of the derived classes using new and assign it to a pointer to the base ABC.
...
Is your code running server side or client side?
ASP.NET 2.0 ?
I'm not very familiar with this stuff but on client-side don't you just need to get the element by id (getElementById) and then set...
You could use the strtok() function to split up your line into tokens.
Regards
Alan
Here's my attempt - took me about 1 hour
//Narrative: This is a program that -
// 1) reads in the number of salespeople
// 2) reads in the name of the salesperson
// 3) reads the quota for...
This is different to what you originally posted and is why I couldn't make any sense of the statement you did provide. It is still ambiguous:
2 times first minus z
is that
(2 * first) - z
...
When you type a message, the toolbar has a 'Code' icon - if you click it, it pops up a box showing HTML style tags. Use these around any code that you post and it will keep the formatting - you can...
Submit or attach your full code.
a)
int FunctionOne(int x, int y)
{
return (x > y) ? (x + y) : (x - (2*y));
}
b) Item 5 makes no sense to me. Whose previous value? Did you mean z greater than twice the value of x?
Post your whole code - otherwise no-one can see exactly what you're doing.
Regards
Alan
You will not get many replies if you don't use code tags. Also post your whole code and any sample data you are using.
Just create a loop (either a for or a while) for the number of salespersons...
I have a couple of questions which may help you get some replies
1) Which order are the bits in the file in (Little Endian or Big Endian) ?
2) You ask 'What am I doing wrong?', yet you provide no...
1. You need to use the array form of delete
delete [] addrs;
since you declared an array. You should also have a copy constructor.
2. There's nothing wrong with the declaration - you...
Well, you haven't searched around much on the net then, since in less than 1 minute I've found loads of resources on debugging.
Try this one for starters
Visual Studio Debugging
As Paul said...
I have been programming for at least 25 years and in all that time I can honestly say I have never ever used a goto statement. I've always found a cleaner way.
If you were working for me and you...
cin is for 'C++'. The poster said originally that it's a 'C' assignment.
As it says in the KB article, the include file you require can be found in the DDK (Driver Development Kit) in the following location \Ddk\Src\Storage\Inc
So you need to get a copy of the DDK from...
Are you saying you replaced
char st[]; by
char *st;
You haven't allocated any memory for the variable - you need to do this in both cases. I think you are lucky in the first case.
Try
Exterminator - the below description is what I was hinting at
Regards
Alan
What is being said about alignment is true however you generally do not need to specifically pad out your data structures so that they align on word boundaries as the compiler will do this for you.
...
AFAIK you can't initialise static members inside the class. You need to have
class Movie
{
...
static Graph actorGraph;
}
Your macro is turning MIN into 7;
#define statements do not ordinarily end in a semi-colon. Additionaly your main should return something.
The correct code is
#include <iostream>
#include... | http://forums.codeguru.com/search.php?s=0824e9ef79dbcc90d5620bdf4f1d14c6&searchid=7941515 | CC-MAIN-2015-40 | en | refinedweb |
[
]
Pinaki Poddar commented on OPENJPA-1612:
----------------------------------------
Rick,
In my view, this is not a change in the right direction.
OpenJPA used to have reasonably advanced support for untyped relations. These capabilities are described in [1].
In the past, I used these powerful features to demonstrate how generically typed structures can be modeled in OpenJPA [2] (that blog has unfortunately been eaten by a very powerful company, and one can merely find indirect cached references to it by searching for 'Persistence of Generic Graph Pinaki Poddar' ;).
Recently I noticed that this powerful type system has been weakened (meaning those neat generic model examples do not work with OpenJPA 2.0).
Support for generically typed domain models is a powerful construct, and OpenJPA was quite capable of meeting that demand. Hence I consider OpenJPA 2.0 to have regressed on that aspect.
I have not investigated deeply, but my cursory look at the changes suggests that the cause of the regression is near the surface and can be corrected with ease.
In view of that observation, I see this current commit as a step backward. And I hope that
the original committer will consider rolling the change back.
[1] will provide the user sufficient choices on how to persist Object o -- when it is assigned
to a Persistence Capable entity, or merely a Serializable at different levels of OpenJPA type
support.
[1]
[2]
> Mapping an unsupported type
> ---------------------------
>
> Key: OPENJPA-1612
> URL:
> Project: OpenJPA
> Issue Type: Improvement
> Components: kernel
> Affects Versions: 1.2.2, 2.0.0, 2.1.0
> Reporter: Rick Curtis
> Assignee: Rick Curtis
> Priority: Minor
> Fix For: 2.1.0
>
>
> As discussed on the dev mailing list [1]...
> I found that the following mapping:
> @Entity
> public class AnnoTest1 {
> @ManyToOne
> Object o;
> ...
> }
> This results in a warning message [2], but it is allowed. This JIRA will be used to detect
this condition and fail fast.
> [1]
> [2] 297 TestConv WARN [main] openjpa.MetaData - OpenJPA cannot map field "test.AnnoTest1.o"
efficiently. It is of an unsupported type. The field value will be serialized to a BLOB by
default.
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online. | http://mail-archives.apache.org/mod_mbox/openjpa-dev/201005.mbox/%3C31045854.4221273524989374.JavaMail.jira@thor%3E | CC-MAIN-2015-40 | en | refinedweb |
Eclipse ganymede and jmaki project
Hi all,
I am developing a jmaki webapp using Eclipse.
The Eclipse Ganymede JavaScript validator marks calls like the following as errors:
jmaki.doSomething()
For instance, given the following piece of code:
jmaki.namespace("jmaki.widgets.mywidget1");
The error Eclipse reports is:
Cannot make a static reference to the non-static function namespace(any, any) from the type Jmaki
I have tried to configure jmaki.js as a javascript library but the error persists.
Does anyone solved this problem?
Hi Greg :
I have used your 1.8.1 plugin, but when I drag a widget and drop it into a JSP, it doesn't work. The following is the error message:
What can I do to fix this problem?
Thanks
Hi,
The template for html wasn't there for that widget. We have verified all the templates for html in the latest build.
Can you try the template now?
I updated the plugins yesterday (10/31/2008).
See: for the latest.
-Greg
I'll look into this, but I suspect it has to do with Ganymede doing its JS check without the assumption that the jMaki object has been created.
If you look at the bottom of the jmaki.js you will see we are creating an object:
if (typeof jmaki == 'undefined') {
    var jmaki = new Jmaki();
    jmaki.widgets = {};
    var oldLoad = window.onload;
    /**
     * onload calls bootstrap function to initialize and load all registered widgets
     * override initial onload.
     */
    window.onload = function() {
        if (!jmaki.initialized) {
            jmaki.initialize();
        } else {
            jmaki.bootstrapWidgets();
            return;
        }
        if (typeof oldLoad == 'function') {
            oldLoad();
        }
    }
}
I might need to report an eclipse bug ;-)
I've been doing an extensive re-write of the jMaki Plugin for Eclipse.
The docs are a little weak but you can give it a try at:
I would be interested in any feedback before we put this on our update center. Your feedback would be very helpful.
-Greg
Thank you for your answer Greg.
I already noticed that jmaki.js creates a globally scoped object called jmaki, but Eclipse (at least with the jMaki 1.8.0 plugin) doesn't like references to the global object.
I will try the 1.8.1 plugin and give you feedback.
Hi Greg,
It's strange... with the 1.8.1 plugin, the jmaki variable is undefined to the JavaScript validator.
I have tried to adjust the JavaScript libraries settings but had no success. | https://www.java.net/node/684156 | CC-MAIN-2015-40 | en | refinedweb |
Examine the Components of Structured Investment Products
Opponents of structured investment products in the UK love to talk up the risks and then in a solemn voice intone that they’re fiendishly complex and the work of clever rocket scientists out to diddle the poor unsuspecting investor.
Some structured investments are, indisputably, poorly built, horribly complex and of debatable value (especially those sold on main street by big banks and mutuals). But, in reality most products sold direct to investors through advisers or listed on a stock exchange are fairly transparent in their construction.
Peer under the bonnet of nearly all structured investments and you find a common set of components; you don't need to worry too much that they exist, just understand what's involved.
Add up the differing components of a structured product and you can quickly see that an issuer can derive a number of different sources of income or capital gain (premiums from issuing the barrier put, forgone dividends contained within the call that tracks an index, plus the difference between the zero coupon bond's value at maturity and the sum paid upfront), all of which go towards funding the cost of the upside call plus any bank charges and fees.
Discover the call option
The most important working bit of a structured investment component is a call option. This derivative-based option pays the upside return via the annual defined return (say, 5 per cent per annum) or the geared participation rate (say, 5 times the index return).
Underwriting an option that promises to pay out, say, 5 per cent per annum for the next five years doesn’t come free! The call option costs the structured-product issuer real, hard money and other parts of the structure have to pay for it.
Luckily this isn’t completely a black and white case of a call issuer funding the 5 per cent per annum for the next five years. The call option may perhaps track the FTSE 100 index, which can be good news for the call option issuer.
Some profit is to be had by selling away the likely flow of dividends that an investor would have received if the option had tracked the underlying index. Over a five-year period, for instance, the combined companies within the FTSE 100 index may pay out as much as 20 per cent in compound dividend return.
The issuer of the call option pockets that return as part payment for guaranteeing to make an upside return, in turn selling on that dividend participation in the dividend futures markets.
In addition, remember that the call option may never be triggered; that is, markets fall disastrously and the barrier is breached. If that’s the case, the call option is never used, no return is ever paid out and the issuer of the (upside) call option simply pockets the premium income and moves on to the next deal.
In these circumstances guaranteeing to pay out 5 per cent per annum on the upside may seem like a reasonable bet.
Enjoy the downside: The put option
Another option lurking around is called a put option, which is linked to the barrier that’s usually set at around 50 per cent or 40 per cent of the initial index level of the FTSE 100 or S&P 500 when the structure is issued.
In reality, this barrier is a downside or put option, which means that someone (probably a pension fund) has been willing to write an option that pays out a generous premium in return for making a big profit as markets dive by more than 50 per cent over the duration of the structure.
If the barrier isn’t breached the option expires worthless and the structured-product issuer pockets the premium to help pay its costs and fund the cost of the call option. The premium from writing these downside options isn’t huge but pension funds are keen buyers because these options give them some opportunity to make money in a collapsing stock market.
Get insurance: The zero coupon bond
Sitting at the heart of the zero coupon bond structure is a bond the bank issues behind the structured product. In effect, this bond guarantees that your initial investment (say, £100) is paid back in full as long as the barrier isn’t breached. Think of it as the insurance policy on paying you back your initial investment.
But insurance doesn’t come free and the bank has to fund it by going to the markets using something called a zero coupon bond. This bond is like any other bond issued by a bank with one big exception – it doesn’t pay a coupon every year but rolls up the annual cost at maturity.
Assume the bank wants to make a payout of £100 in five years’ time to fund your structured product. Given its current credit rating it discovers that it can issue a zero coupon bond that promises to pay out £100 in five years’ time but which costs just £80 today; that is, the £20 difference is the cost of funding that loan through annual interest rolled up over five years.
A risky bank (that is, one that is regarded as more likely to have problems repaying its debts by the markets) may discover that its high yields work in its favour – it may only have to pay £70 today to make £100 in five years’ time. The key, though, is that the zero coupon bond issued by the bank (it hopes) pays out the initial investment of the investor.
The difference between the eventual cost of the zero coupon bond (£100) and the initial cost (between £70 and £80) represents the cost of borrowing, which is in turn ploughed back into the structured investment to pay for the upside call option. | http://www.dummies.com/how-to/content/examine-the-components-of-structured-investment-pr.html?cid=RSS_DUMMIES2_CONTENT | CC-MAIN-2015-40 | en | refinedweb |
From: Jeff Squyres (jsquyres_at_[hidden])
Date: 2007-10-11 08:43:18
On Oct 10, 2007, at 9:58 AM, Ethan Mallove wrote:
>> - what's your "pass" criteria? Is it looking at wifexited and
>> wexitstatus?
>
> I have this:
>
> pass = &eq(&test_wexitstatus(), 0)
>
> I'm guessing it should it be changed to this?
>
> pass = &and(&test_wifexited(), &eq(&test_wexitstatus(), 0))
Yes -- try this.
>> - what does solaris return as an exit code in this case?
>
> Looks like 137.
>
> $ cc -G -o libbar.so bar.c
> $ file libbar.so
> libbar.so: ELF 32-bit MSB dynamic lib SPARC32PLUS Version 1,
> V8+ Required, dynamically linked, not stripped
> $ cc foo.c -L`pwd` -R`pwd` -lbar -o foo
> $ ldd foo
> libbar.so => /home/emallove/tmp/libbar.so
> ...
> $ ./foo
> $ echo $?
> 0
> $ rm libbar.so
> $ ldd foo
> libbar.so => (file not found)
> ...
> $ ./foo
> ld.so.1: foo: fatal: libbar.so: open failed: No such file or
> directory
> Killed
> $ echo $?
> 137
I honestly don't remember if $? is the exit code or the aggregate
return (i.e., it includes the bit indicating died-due-to-signal-or-not).
--
Jeff Squyres
Cisco Systems | http://www.open-mpi.org/community/lists/mtt-users/2007/10/0449.php | CC-MAIN-2015-40 | en | refinedweb |
Esotope Brainfuck compiler (aka esotope-bfc) is an optimizing Brainfuck-to-C compiler. It aims to be the best Brainfuck compiler ever, enabling many possible optimizations other compilers don't.
It is currently in the development phase, but it already shows real progress. For example, the following Brainfuck program prints "Hello World!":
>+++++++++[<++++++++>-]<.>+++++++[<++++>-]<+.+++++++..+++.>>>++++++++[<++++>-]
<.>>>++++++++++[<+++++++++>-]<---.<<<<.+++.------.--------.>>+.
which is translated to the following C code by esotope-bfc (rev 7be42beabad4, 2009-05-09):
/* generated by esotope-bfc */
#include <stdio.h>
#include <stdint.h>
#define PUTS(s) fwrite(s, 1, sizeof(s)-1, stdout)
static uint8_t m[30000], *p = m;
int main(void) {
PUTS("Hello World!");
return 0;
}
Isn't it good? :) See Optimization for what's going on, or Comparison with other compilers available.
Esotope Brainfuck compiler is currently written in Python (2.5 or later). It is a part of the Esotope project, which aims to provide an advanced implementation of every esoteric programming language.
See "source" tab above, or you can download a development snapshot as .zip or .tar.gz archive. It is in the heavy development and guaranteed to contain some kinds of bugs. ;)
Python 2.5 or later is required; for huge programs (such as The Lost Kingdom) Psyco is strongly recommended. | http://code.google.com/p/esotope-bfc/ | crawl-003 | en | refinedweb |
SYNOPSIS
#include <libintl.h>
char * bindtextdomain (const char * domainname, const char * dirname);
DESCRIPTION
The bindtextdomain function sets the base directory of the hierarchy containing message catalogs for a given message domain. Message catalogs will be expected at the pathnames dirname/locale/category/domainname.mo, where locale is a locale name and category is a locale facet such as LC_MESSAGES.
domainname must be a non-empty string.
If dirname is not NULL, the base directory for message catalogs belonging to domain domainname is set to dirname. If dirname is NULL, the function returns the previously set base directory for domain domainname. If a memory allocation failure occurs, it sets errno to ENOMEM and returns NULL.
ERRORS
The following error can occur, among others:
ENOMEM Not enough memory available.
BUGS
The return type ought to be const char *, but is char * to avoid warnings in C code predating ANSI C.
SEE ALSO
gettext(3), dgettext(3), dcgettext(3), ngettext(3), dngettext(3), | http://www.linux-directory.com/man3/bindtextdomain.shtml | crawl-003 | en | refinedweb |