NAME
     SYSCALL_MODULE -- syscall kernel module declaration macro

SYNOPSIS
     #include <sys/param.h>
     #include <sys/kernel.h>
     #include <sys/proc.h>
     #include <sys/module.h>
     #include <sys/sysent.h>

     SYSCALL_MODULE(name, int *offset, struct sysent *new_sysent, modeventhand_t evh, void *arg);

DESCRIPTION
     The SYSCALL_MODULE() macro declares a new syscall. SYSCALL_MODULE() expands into a kernel module declaration named name. The rest of the arguments expected by this macro are:

     offset      A pointer to an int that receives the offset in struct sysent at which the syscall is allocated.

     new_sysent  A pointer to a structure that specifies the function implementing the syscall and the number of arguments this function needs (see <sys/sysent.h>).

     evh         A pointer to the kernel module event handler function, called with the argument arg. Please refer to module(9) for more information.

     arg         The argument passed to the callback functions of the evh event handler when it is called.

EXAMPLES
     A minimal example of a syscall module can be found in /usr/share/examples/kld/syscall/module/syscall.c.

SEE ALSO
     module(9), /usr/share/examples/kld/syscall/module/syscall.c

AUTHORS
     This manual page was written by Alexander Langer <alex@FreeBSD.org>.
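Based on the description above and the example file the man page points to, a minimal syscall module might look roughly like the following. This is a sketch only: it builds only against the FreeBSD kernel source tree, and the names hello, hello_sysent and load are illustrative, not prescribed by the macro.

```
#include <sys/param.h>
#include <sys/kernel.h>
#include <sys/proc.h>
#include <sys/module.h>
#include <sys/sysent.h>
#include <sys/systm.h>

/* The function implementing the syscall. */
static int
hello(struct thread *td, void *arg)
{
    printf("hello from a syscall module\n");
    return (0);
}

/* The sysent entry: number of arguments and implementing function. */
static struct sysent hello_sysent = {
    0,      /* number of arguments */
    hello   /* implementing function */
};

/* Receives the offset in struct sysent where the syscall is allocated. */
static int offset = NO_SYSCALL;

/* Kernel module event handler (see module(9)). */
static int
load(struct module *module, int cmd, void *arg)
{
    switch (cmd) {
    case MOD_LOAD:
        uprintf("syscall loaded at %d\n", offset);
        break;
    case MOD_UNLOAD:
        uprintf("syscall unloaded\n");
        break;
    default:
        return (EOPNOTSUPP);
    }
    return (0);
}

SYSCALL_MODULE(hello, &offset, &hello_sysent, load, NULL);
```

Compare this sketch against /usr/share/examples/kld/syscall/module/syscall.c on an actual FreeBSD system, which is the authoritative example.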
http://manpages.ubuntu.com/manpages/oneiric/man9/SYSCALL_MODULE.9freebsd.html
#include "hycomp.h"
#include "hyport.h"

Returns the total capacity of a pool (the number of elements that aPool can hold).

Clears the contents of a pool without deleting it.

Calls a user-provided function for each element in the list.

Deallocates all memory associated with a pool.

Asks for the address of a new pool element. If it succeeds, the address returned will have space for one element of the correct structure size. The contents of the element are undefined. If the current pool is full, a new one will be grafted onto the end of the pool chain and memory from there will be used. Returns a pointer to the new element.

Returns the number of elements in a given pool.

Deallocates an element from a pool. It is safe to call pool_removeElement() while looping over the pool with pool_startDo / pool_nextDo on the element returned by those calls. This is because the freed element is always inserted either at the head of the free list or before the nextFree element in the pool_state.

Sorts the free list of the current pool (i.e. it does not follow nextPool pointers). This is O(n) most of the time.

Generated on Tue Dec 9 14:12:59 2008 by Doxygen.
(c) Copyright 2005, 2008 The Apache Software Foundation or its licensors, as applicable.
http://harmony.apache.org/externals/vm_doc/html/hypool_8h.html
Another important step in software development is profiling. GNU offers “gprof” as a tool to analyze the execution time of functions. The working principle of gprof is that it polls the program state at a small sampling interval and notes which function is executing. As a consequence, small functions may not appear in the profiling data at all, because their execution time is shorter than the sampling interval. I recently tried a feature of GNU GCC that can be of some help both for tracing and for profiling. It’s the following option (from the GNU GCC manual):

-finstrument-functions
    Generate instrumentation calls for entry and exit to functions. Just after function entry and just before function exit, the following profiling functions will be called with the address of the current function and its call site. [...]
    void __cyg_profile_func_enter (void *this_fn, void *call_site);
    void __cyg_profile_func_exit (void *this_fn, void *call_site);
    [...]

The execution flow can be traced by implementing these monitoring points, for example by writing some useful information to a file. Suppose you have to analyze the following code:

#include <stdio.h>

void foo()
{
  printf("foo\n");
}

int main()
{
  foo();
  return 0;
}

Create a file called “trace.c” with the following content:

#include <stdio.h>
#include <time.h>

static FILE *fp_trace;

void
__attribute__ ((constructor))
trace_begin (void)
{
  fp_trace = fopen("trace.out", "w");
}

void
__attribute__ ((destructor))
trace_end (void)
{
  if(fp_trace != NULL) {
    fclose(fp_trace);
  }
}

void
__cyg_profile_func_enter (void *func, void *caller)
{
  if(fp_trace != NULL) {
    fprintf(fp_trace, "e %p %p %lu\n", func, caller, time(NULL));
  }
}

void
__cyg_profile_func_exit (void *func, void *caller)
{
  if(fp_trace != NULL) {
    fprintf(fp_trace, "x %p %p %lu\n", func, caller, time(NULL));
  }
}

The idea is to write the function addresses, the address of the call site and the execution time into a log (in our case “trace.out”).
To do so, a file needs to be opened at the beginning of execution. The GCC-specific attribute “constructor” helps in defining a function that is executed before “main”. In the same way the attribute “destructor” specifies a function that must be executed when the program is about to exit. To compile and execute the program, the command line is:

$ gcc -finstrument-functions -g -c -o main.o main.c
$ gcc -c -o trace.o trace.c
$ gcc main.o trace.o -o main
$ ./main
foo
$ cat trace.out
e 0x400679 0x394281c40b 1286372153
e 0x400648 0x40069a 1286372153
x 0x400648 0x40069a 1286372153
x 0x400679 0x394281c40b 1286372153

To understand the addresses, the “addr2line” tool can be used: it’s a tool from the “binutils” package that, given an executable with debug information, maps an execution address to a source code file and line. I put together a shell script (“readtracelog.sh”) that uses addr2line to print the trace log in a readable format:

#!/bin/sh
if test ! -f "$1"
then
  echo "Error: executable $1 does not exist."
  exit 1
fi
if test ! -f "$2"
then
  echo "Error: trace log $2 does not exist."
  exit 1
fi
EXECUTABLE="$1"
TRACELOG="$2"
while read LINETYPE FADDR CADDR CTIME; do
  FNAME="$(addr2line -f -e ${EXECUTABLE} ${FADDR}|head -1)"
  CDATE="$(date -Iseconds -d @${CTIME})"
  if test "${LINETYPE}" = "e"
  then
    CNAME="$(addr2line -f -e ${EXECUTABLE} ${CADDR}|head -1)"
    CLINE="$(addr2line -s -e ${EXECUTABLE} ${CADDR})"
    echo "Enter ${FNAME} at ${CDATE}, called from ${CNAME} (${CLINE})"
  fi
  if test "${LINETYPE}" = "x"
  then
    echo "Exit ${FNAME} at ${CDATE}"
  fi
done < "${TRACELOG}"

Testing the script with the previous output, the result is:

$ ./readtracelog.sh main trace.out
Enter main at 2010-10-06T15:35:53+0200, called from ??
(??:0)
Enter foo at 2010-10-06T15:35:53+0200, called from main (main.c:9)
Exit foo at 2010-10-06T15:35:53+0200
Exit main at 2010-10-06T15:35:53+0200

The “??” symbol indicates that addr2line has no debug information for that address: in fact it belongs to the C runtime libraries that initialize the program and call the main function. In this case the execution time was very small (less than a second), but in more complex scenarios the execution time can be useful to detect where the application spends most of its time. It is also a good idea to use the most precise timer on the system, such as gettimeofday on Linux, which also returns fractions of a second.

Some thoughts for embedded platforms:

- It is possible to have fine-grained timing information if the platform contains an internal hardware timer, counting even single clock cycles. It then becomes important to reduce the overhead of the entry and exit functions in order to measure the real function execution time.
- The trace information can be sent to the serial port (for example in binary form), and then interpreted by a program running on a PC.
- The entry and exit functions can also be used to monitor the state of other hardware peripherals, such as a temperature sensor.

See also:

Sylvain 2010/10/19
This is a great article! It shows a way to trace and also profile in a manner that fits the embedded world constraints perfectly. I imagine coupling this to OCD. I was initially looking for a way to use the profiling info (-pg gcc option) and redirect it to an OCD or USART stream; do you know if that is feasible?

Balau 2010/10/19
If you have an operating system running on your target, then -pg is OK. Otherwise -pg can’t work, for example because it needs a filesystem to write “gmon.out” during execution. The toolchain must also support it.
With the method I explained, it is quite trivial to send data to your USART instead of writing to a file: it just requires removing the fprintf calls and adding functions to write the data to the correct serial port registers. I don’t know about OCD, but I suppose it is more complex because it involves having a JTAG adapter and a way to send your custom debug info with it.

Kaiwan 2011/03/22
Hi, if I may, I have a kind of unrelated question to this post: how exactly did you get the ‘syntax highlighter’ plugin (I assume that’s the one you’re using above) to work with this WordPress blog? It’s really cool! I’m sure others would like to know as well! TIA!

Balau 2011/03/22
I didn’t have to do anything, the syntax highlighter is already enabled in wordpress.com blogs: see Posting Source Code. For wordpress.org blog installations, there’s a plugin available.

Kaiwan 2011/03/23
Hey, thanks again! It works indeed. Silly of me…

Mahmoud Eltahawy 2012/04/26
Hi, can I use addr2line with a shared library as input instead of an executable?

Balau 2012/04/26
Yes you can, but you need to give addr2line the address as an offset from the load address of the shared library. The absolute address that you get at run time won’t work. You can find the load address of the shared library at run time with the dladdr function.

Sumi Dodger 2012/08/28
I added this trace.c to my project and compiled it in GxLinux and got this. Why is this so? Someone help me…
My project also consists of main.c. GxLinux - A complete embedded Linux development platform.

[root@GxLinux src]# gcc -finstrument -functions -g -c -o main.o main.c
cc1: error: unrecognized command line option “-finstrument”
cc1: error: unrecognized command line option “-functions”

Balau 2012/08/28
You should write “-finstrument-functions” without the space; you currently have “-finstrument -functions”.

Sumi Dodger 2012/09/05
I want all the files of my project to get traced using this function. I have added a file (trace.c) to my code containing these functions, but the problem is that I have to run these commands for each particular file, like main.c here:

$ gcc -finstrument-functions -g -c -o main.o main.c
$ gcc -c -o trace.o trace.c
$ gcc main.o trace.o -o main
$ ./main
foo
$ cat trace.out

Is there any possible way to trace all the files (about 70) at a time, so that I can learn the particular path or flow of the functions? Or better, can a function be created to run these commands?

Sumi Dodger 2012/09/06
Someone please give a solution…

Balau 2012/09/06
I don’t know if I understand your problem. It seems that what you need is a build system. I suggest using GNU make. Here is a quick example. Create a file called “Makefile” where you have your C source files. Suppose your C source files are “main.c”, “foo.c” and “trace.c”. The makefile contains:

main: main.o foo.o trace.o
CFLAGS = -g -finstrument-functions
trace.o: CFLAGS = -g

Then in the same directory run “make”. You should see:

$ make
cc -g -finstrument-functions -c -o main.o main.c
cc -g -finstrument-functions -c -o foo.o foo.c
cc -g -c -o trace.o trace.c
cc main.o foo.o trace.o -o main

You can then fill in the list of object files needed by “main” and let “make” compile them. For everything you need to know about makefiles, see the GNU Make manual.

Joseph Hillery 2012/09/14
Very useful!
One thing omitted from the code sample was to exclude the trace routines from the instrumentation. Otherwise they will fire recursively until the stack is exhausted and you get a SIGSEGV.

void __cyg_profile_func_enter(void *this_fn, void *call_site) __attribute__((no_instrument_function));
void __cyg_profile_func_exit(void *this_fn, void *call_site) __attribute__((no_instrument_function));

Balau 2012/09/14
In the compilation commands that I indicated, the “trace.c” source file is not compiled with the “-finstrument-functions” option, so the trace routines were not instrumented anyway, but it’s a good idea to enforce it with an attribute as well.

alfred 2013/01/26
To resolve the addresses at run time you could also use the dladdr() function:

#include <stdio.h>
#define __USE_GNU
#include <dlfcn.h>

void __cyg_profile_func_enter (void *this_fn, void *call_site)
{
  Dl_info info;
  if (dladdr(this_fn, &info) != 0) {
    printf("%s", info.dli_sname);
  }
}

Michael R. Hines 2013/01/28
I’ve improved on this tutorial here. Now you can:
1. trace your program with fuzzy, human-readable function names
2. easily trace external libraries linked to by your program
3. blacklist/whitelist functions that you don’t care about
4. output trace results in human-readable format

Balau 2013/01/28
Thanks for improving my work! Glad to have inspired you.

Pierre Rasmussen 2013/04/22
Nice post! Do you have any trick for handling static functions?

Balau 2013/04/22
What about them? My example works almost the same way even if function foo is static or inline. Correction: I tried some more, and it seems to have some problems detecting the caller when the callee is inlined.

Kaiwan 2013/07/04
Hi Balau, superb write-up, thanks. When I attempt this on a stand-alone C program on Linux, it works just fine.
However, I wanted to try this approach on the Linux kernel itself during boot; so, in the kernel source tree (I'm using vanilla 3.2.21 configured for ARM Vexpress), I set up a patch to provide a menu option: once enabled, the Makefile sets up the build so as to include the CFLAGS:

ifdef CONFIG_KERNEL_INSTRUMENT_FUNCTIONS
KBUILD_CFLAGS += -finstrument-functions -fno-inline
endif

The profiling hooks are similar to what you used; here’s the function entry hook:

void __attribute__((no_instrument_function)) __cyg_profile_func_enter(void *func, void *caller)
{
  printk(KERN_ALERT "E %p %p\n", func, caller);
}

After some fiddling, I got it to build successfully. But when I boot this kernel (I'm using QEMU; I even tried this on a Raspberry Pi board with the same negative result), it just appears to hang:

# qemu-system-arm -m 256 -M vexpress-a9 -kernel ~/3.2.21/arch/arm/boot/zImage -initrd images/rootfs.img.gz -append "console=ttyAMA0 rdinit=/sbin/init" -nographic
pulseaudio: set_sink_input_volume() failed
pulseaudio: Reason: Invalid argument
pulseaudio: set_sink_input_mute() failed
pulseaudio: Reason: Invalid argument
Uncompressing Linux… done, booting the kernel.
…

Any idea why it hangs? FYI, when I rebuild the kernel _without_ my new “instrument-functions” option, it boots just fine. Any help appreciated! TIA, Kaiwan.

Balau 2013/07/04
First thing that comes to mind: printk itself should have __attribute__((no_instrument_function)), and so should all functions that are called inside printk. Otherwise you have an infinite recursion. Other functions that should not be instrumented are those that are executed before the resources used by printk are initialized (memory, peripherals, stack, …). Other than that, it seems a very invasive modification. I would add that option to a small part of the kernel, and then expand on that little by little.

Kaiwan 2013/07/08
Thanks Balau… will try out stuff…

roger 2013/07/09
Thanks Balau for the great exposé.
Is there any possibility to do the reported task using Eclipse?

Balau 2013/07/09
It doesn’t depend on Eclipse. If Eclipse uses GCC as the toolchain, then what I wrote can be applied.

roger 2013/07/10
Hello Balau, can you please explain to me how to build this code using Eclipse CDT? When I try to build the code in Eclipse with the option -finstrument-functions I do not have success!

Balau 2013/07/10
I don’t want to install Eclipse CDT because I’m afraid it would screw up my Eclipse Android development environment. You gave me very few details on what you did. What does it mean that you did not have success? It doesn’t compile? It doesn’t link? It doesn’t create a file on execution? What did you do to add the -finstrument-functions option? I’m not familiar with Eclipse CDT, but I assume it has an options pane where you can put options for the compilation of C source, and on your main.c you have to add this option. Be aware that if you add -finstrument-functions to trace.c as well, it will not work, because it creates a recursion problem. Usually during a build the commands are printed somewhere in a build log. Are the commands the same as the ones I have in my post?

Roger 2013/07/11
Thanks Balau, the code above also works with Eclipse CDT. Can you please tell me how to use readtracelog.sh within Eclipse? Thank you so much. Roger

Balau 2013/07/11
I don’t know much about Eclipse options, but you could try adding a Launch Configuration that runs that script.

roger 2013/07/12
Thanks so much Balau, but unfortunately… I copied all the needed files under ~/Debug and then set the post-build command "readtracelog.sh ThelloW trace.out < t1.txt". I am having this error message: Cannot run program "readtracelog.sh ThelloW trace.out < t1.txt": Launching failed. I will continue tomorrow. Roger

roger 2013/09/23
Hello Balau, how can I improve the calling trace information in order to have a matrix based on caller and callee methods? Thanks so much.
Balau 2013/09/23
I’m not sure if I understand correctly. If you need a matrix with “callers” on the rows and “callees” on the columns, where each cell is true/false according to whether the caller calls the callee, then you have all the information you need; you just need to post-process it.

Roger 2013/09/23
Hi Balau, that is exactly what I am looking for. I have tried it every night for many months. I can read the function addresses of both callers and callees in C/C++, but I don’t know how to build the corresponding matrix. Do you have any example? Thanks so much Balau.

Balau 2013/09/24
It depends very much on what you really need. Do you need to create a matrix for a particular execution (like a dynamic call graph) or a matrix for all possible executions (like a static call graph)? If you need static information, then this blog post is not what you are searching for; go look for a static analysis tool that outputs a call graph. Do you need to create it while the program is running, or can you run the program first and then post-process the output? If you need to create it while the program is running, then you will most probably need a list of lists, for example a list of callees, where each callee points to a list of its callers. Then at the end of the program you can transform it into a matrix, because you can count the total number of items. If you can create it afterwards, you can use the same C code in my post, then write a simple script (for example in Python): take the names of the callers and callees as I did in the shell script, count the unique callers and unique callees to find the matrix dimensions, create the matrix with all cells false, then re-scan the file to put the true values into the cells. I think it’s more difficult to do it at run time and easier to do it in post-processing. I hope this gives you some direction so that you can do it by yourself.

Roger 2013/09/25
Hello Balau, thanks for your answer. I am trying to create the matrix afterwards.
I have already used your code to generate the list of all callers. I now have to generate the list of the callees. I will try to create the Python script for this goal. Do you have any papers about creating such a script? Thanks so much. Roger

Balau 2013/09/25
No, sorry.

Roger 2013/09/25
Hi Balau, anyway, thanks so much Balau. Roger

Howdy Texas 2014/04/23
Awesome post! I would like to indent each function entry with a tab, and unindent each exit with one less tab. Could you please give a hint on how to implement that?

Balau 2014/04/23
I think it’s quite easy: just add a global int variable in trace.c (preferably static), increment it in __cyg_profile_func_enter, decrement it in __cyg_profile_func_exit, and use that value as you wish.

Philippe Proulx 2014/09/11
For your information, if your target is running Linux, LTTng-UST (the user-space tracing component of the LTTng suite) has a preloadable helper library for tracing entries to and exits from functions using -finstrument-functions. It adds LTTng-UST tracepoints to __cyg_profile_func_enter and __cyg_profile_func_exit. Since LTTng also has the ability to trace all kernel tracepoints, you can get a correlated trace of both user-space function entries/exits and kernel events using an aggregating viewer like Babeltrace. Once your application (or parts of it) is built with -finstrument-functions, here’s how you can trace it using LTTng:

LTTng-UST has the advantage of producing a trace with very little run-time overhead. As in your case, addresses are kept as fields, so you would need another tool (like the aforementioned addr2line) to create a call graph or such.
http://balau82.wordpress.com/2010/10/06/trace-and-profile-function-calls-with-gcc/
No. Ahh crap. Looks like I sucked myself back in. The more I thought about it, the easier it seemed like it would be… so I went ahead and did it.

Version 1.6 - Add setting to match only when cursor is between brackets

It is isolated with the setting, so even if it isn't perfect, it will only affect those who enable it, but I am fairly confident in it. But feel free to report bugs, or fix them if you find any.

**Edit:** I am actually liking these matching rules better than the old ones. I'd be lying if I said I wasn't hoping that would happen.

I'm going to go out and make sure I have the most recent right now! Thanks! P.S. Not that you need it, but you have my permission to take the weekend off!

A small thing I noticed when testing the new bracket matching rules: in general, if **adjacent_only** is set to True, internal string bracket matching no longer functions properly. So now if **adjacent_only** is True and **match_string_brackets** is True, the string quote highlighting will be suppressed, but the internal string bracket matching will still be processed with the same **adjacent_only** rules. Now I am really done, and now I am taking the weekend off.

Version 1.6.1 - Suppress string highlighting when adjacent_only is set, but allow internal string brackets to still get highlighted with adjacent_only settings if match_string_brackets is true

Looks good. Maybe we can add yet another package, "GutterIcons", and collect there some nice, similarly polished icons for the different sizes and possible OSes, providing an API or something to use the icons; then other packages could use these (if they wish) and the overall application would feel better. Like 4 different linters using the same icons for displaying errors.

It would be really easy to do. I would throw together a repository if I thought someone would use it other than me.

Hm. I noticed some odd behavior, and that may have been it.
I was planning on waiting until the beginning of the week before reporting it (I really want you to get some rest!), but I don't think I'll need to now. The new logic is just what I was wanting. Thank you very much for putting it in so quickly (and saving me from having to wrap my brain around yet another pile of code). I did notice what appears to be incorrect highlighting involving Ruby string interpolation, but again, I'll wait until next week to report that. And when I do, I'll post a GitHub issue instead of mentioning it here. This thread is quite long now. Good work, facelessuser. Keep it up (in moderation)!

Don't worry, you can post your issue now or later. I don't drop everything when an issue pops up. These are hobby projects; if I am not having fun, I am not doing it. If I don't feel up to tackling something, I wait until I do feel like it. Some people may have an expectation that a dev should jump on an issue right away, but I just like to let people know: I will get to it when I feel like it.
I was just goofing around, but if someone else wanted to use gutter icons, I coded a simple script called fancy_regions.py:

import sublime
from glob import glob
from os.path import exists, normpath, join, basename

REGION_STYLES = {
    "solid": 0,
    "none": sublime.HIDDEN,
    "outline": sublime.DRAW_OUTLINED,
    "underline": sublime.DRAW_EMPTY_AS_OVERWRITE
}


class FancyRegions(object):
    def __init__(self, view):
        self.view = view
        self.__icon_path = "FancyRegions/icons"
        self.__icons = []
        self.__icons_cached = False

    def erase_regions(self, key):
        """ Erase regions """
        self.view.erase_regions(key)

    def add_outline_regions(self, key, regions, scope="text", icon="none", flags=0):
        """ Add outline regions """
        self.add_regions(key=key, regions=regions, scope=scope, style="outline", icon=icon, flags=flags)

    def add_underline_regions(self, key, regions, scope="text", icon="none", flags=0):
        """ Add underline regions """
        self.add_regions(key=key, regions=regions, scope=scope, style="underline", icon=icon, flags=flags)

    def add_hidden_regions(self, key, regions, scope="text", icon="none", flags=0):
        """ Add hidden regions """
        self.add_regions(key=key, regions=regions, scope=scope, style="none", icon=icon, flags=flags)

    def add_solid_regions(self, key, regions, scope="text", icon="none", flags=0):
        """ Add solid regions """
        self.add_regions(key=key, regions=regions, scope=scope, style="solid", icon=icon, flags=flags)

    def add_regions(self, key, regions, scope='text', style="solid", icon="none", flags=0):
        """ Add regions with defined styling to a view """
        # Default flag settings
        options = 0

        # Check style type
        if style in REGION_STYLES:
            options |= REGION_STYLES[style]

        # Convert regions suitable for underlining if style is underline
        if style == "underline":
            regions = self.__underline(regions)

        # Set additional flags if given
        if flags:
            options |= flags

        # If icon is defined and exists, set the icon path
        icon_path = ""
        if icon != "" and icon != "none":
            if self.view.line_height() < 16:
                icon += "_small"
            if exists(normpath(join(sublime.packages_path(), self.__icon_path, icon + ".png"))):
                icon_path = "../%s/%s" % (self.__icon_path, icon)

        # Apply region(s)
        self.view.add_regions(key, regions, scope, icon_path, options)

    @property
    def icons(self):
        """ Get list of available icons in current icon path """
        if self.__icons_cached:
            return self.__icons[:]
        elif not self.index_icons():
            return self.__icons[:]
        else:
            return []

    @property
    def icon_path(self):
        """ Return current path to icons (relative to Packages) """
        return self.__icon_path

    @icon_path.setter
    def icon_path(self, path):
        """ Set current path to icons (relative to Packages) """
        file_path = path.replace('\\', '/').strip('/')
        full_path = normpath(join(sublime.packages_path(), file_path))
        if exists(full_path):
            self.__icon_path = file_path
            self.index_icons()

    def index_icons(self):
        """ Index the available icons; return True on error """
        errors = False
        self.__icons = []
        file_path = normpath(join(sublime.packages_path(), self.__icon_path))
        if exists(file_path):
            self.__icons = [basename(png)[:-len(".png")] for png in glob(file_path + "/*.png")]
            self.__icons_cached = True
        else:
            self.__icons = []
            self.__icons_cached = False
            errors = True
        return errors

    def __underline(self, regions):
        """ Convert regions to individual empty regions for underlining """
        new_regions = []
        for region in regions:
            start = region.begin()
            end = region.end()
            while start < end:
                new_regions.append(sublime.Region(start))
                start += 1
        return new_regions

It allows you to do stuff like this: let's say you wanted to create an underlined region with the default scope and a custom icon found in the custom icon folder.

import sublime
from fancy_regions import FancyRegions

fancy = FancyRegions(sublime.active_window().active_view())
region_list = [sublime.Region(0, 6)]
fancy.add_underline_regions('test_region', region_list, icon="some_icon")

It does common region types like solid, outline, underline, and it handles all of the custom icon stuff. You can even request a list of icons in the custom icon folder.
I could put this in a repository, but I am just not sure if anyone would actually use it or not. But I was just fooling around.

hello, I use BracketHighlighter and I like it a lot. Can you maybe add this option: the default behavior in Notepad++?

Hmmm. Interesting. I was about to say that making that line would not be possible, but then I realized: yes, it is. It might take some work, but with the custom gutter icons, a straight line isn't out of the question. I personally don't think it's that important a feature, but it could be an interesting exercise for the gutter's capabilities.

I don't think it is out of the realm of possibilities, but it does have some limitations. To be honest, this would only really work if no_multi_select_icons was enabled. Highlighting multi-select brackets like that would just be a mess; it just wouldn't work with how ST2 does things. Tell you what: I will look into how good I can make it look with no_multi_select_icons enabled, and if I can get it to look decent, I might add it, disabled by default.

In this context, I was trying to recall whether in previous Sublime versions the indent guides were colored differently to indicate the cursor location. I was actually looking for this option a few months ago; I guess it wasn't even there, was it?

Oh. That was fast. I, personally, don't like it. It makes the gutter too busy; however, I'm impressed that it only took you a couple of minutes to implement.
https://forum.sublimetext.com/t/brackethighlighter/2780/66
#include <GUI_PrimitiveHook.h>

Definition at line 65 of file GUI_PrimitiveHook.h.

Only one primitive hook will ever be created. Each hook requires a name for identification and error reporting. A hook can target all renderers, a subset, or a specific renderer.

Create a new primitive based on the GT or GEO primitive. The info and cache_name should be passed to the GR_Primitive constructor. 'processed' should indicate that the primitive is consumed (GR_PROCESSED), or processed but available for further hooks or internal Houdini processing (GR_PROCESSED_NON_EXCLUSIVE). GR_NOT_PROCESSED should only be returned if a GR_Primitive was not created. Hooks with GUI_HOOK_FLAG_PRIM_FILTER do not call this method. Reimplemented in HDK_Sample::GR_PrimTetraHook.

For hooks with GUI_HOOK_FLAG_PRIM_FILTER, this method is called instead of createPrimitive. Definition at line 75 of file GUI_PrimitiveHook.h.

Bitmask of renderers supported by this hook. Definition at line 78 of file GUI_PrimitiveHook.h.

Definition at line 106 of file GUI_PrimitiveHook.h.
https://www.sidefx.com/docs/hdk/class_g_u_i___primitive_hook.html
Skip list (Java)

This article describes an implementation of the skip list data structure written in Java.

Traditionally, balanced trees have been used to efficiently implement Set and Map style data structures. Balanced tree algorithms are characterized by the need to continually rebalance the tree as operations are performed; this is necessary in order to assure good performance. Skip lists are a data structure developed by William Pugh as a probabilistic alternative to balanced trees. The balancing of a skip list is achieved through the use of a random number generator. The benefit of the skip list is that the algorithms for search, insertion and deletion are rather simple, which gives lower constant overhead compared to balanced trees. The algorithms involved are also relatively easy to understand.

In a singly linked list, each node contains one pointer to the next node in the list. A node in a skip list contains an array of pointers. The size of this array is chosen at random (between 0 and some MAX_LEVEL) at the time the node is created, and it determines the level of the node. For example, a level 3 node has 4 forward pointers, indexed from 0 to 3. The pointer at index 0 (called the level 0 forward pointer) always points to the immediate next node in the list. The other pointers, however, point to nodes further down the list, and as such, if followed, allow portions of the list to be skipped over. A level i pointer points to the next node in the list that has level >= i.

What follows is an implementation of a (sorted) Set data structure based on skip lists, written in Java.

<<SkipSet.java>>=
SkipNode class
SkipSet class

Random Levels

Before getting to the main algorithms it is a good idea to tackle the problem of generating random levels for nodes. Each node that is created will have a random level between 0 and MAX_LEVEL inclusive. The desired function will return a random level between 0 and MAX_LEVEL.
A probability distribution where half of the nodes that have level i pointers also have level i+1 pointers is used. This gives us a 50% chance of the randomLevel() function returning 0, a 25% chance of returning 1, a 12.5% chance of returning 2, and so on.

<<Defines>>=
public static final double P = 0.5;

<<Random Level Function>>=
public static int randomLevel() {
    int lvl = (int)(Math.log(1. - Math.random()) / Math.log(1. - P));
    return Math.min(lvl, MAX_LEVEL);
}

Structure Definitions

Each element in the list is stored in a node. The level of the node is chosen randomly when the node is created. Each SkipNode stores an array of forward pointers as well as a value which represents the element stored in the node. The type of this element can be any class type which implements the Comparable interface, because we will need to order the elements later. The maximum level index is 6. Java arrays are indexed starting at 0, therefore each node will have between one and seven forward pointers.

<<Defines>>=
public static final int MAX_LEVEL = 6;

<<SkipNode class>>=
class SkipNode<E extends Comparable<? super E>> {
    public final E value;
    public final SkipNode<E>[] forward; // array of pointers

    SkipNode Constructor
}

To create a SkipNode we must first allocate memory for the node itself, then allocate memory for the array of forward pointers. The implementation of skip lists given by William Pugh states that the list should be terminated by a special NIL node that stores a value greater than any legal value. This stipulation was made so that the algorithms described in his paper would be very simple, as they never had to explicitly check for pointers pointing at NIL. I will instead set the initial value of all pointer fields to null. A level 0 node will have one forward pointer, a level 1 node will have two forward pointers, and so on. Therefore we need to add one to the level when allocating memory for the array of forward pointers.
<<SkipNode Constructor>>=
@SuppressWarnings("unchecked")
public SkipNode(int level, E value) {
    forward = new SkipNode[level + 1];
    this.value = value;
}

A structure that represents a SkipSet is defined. It stores a pointer to a header node. The value stored in the header node is irrelevant and is never accessed. The current level of the set is also stored, as this is needed by the insert, delete and search algorithms. To create a SkipSet we must first allocate memory for the structure and then allocate memory for the header node. The initial level of the set is 0 because Java arrays are indexed starting at 0.

<<SkipSet class>>=
public class SkipSet<E extends Comparable<? super E>> {
    Defines
    Random Level Function

    public final SkipNode<E> header = new SkipNode<E>(MAX_LEVEL, null);
    public int level = 0;

    Print Function
    Search Function
    Insert Function
    Delete Function
    Main Function
}

Printing a SkipSet

Before getting to the algorithms for insert, delete and search, let us first start off with a function that will print the contents of a skip set to a string. This function simply traverses the level 0 pointers and visits every node while printing out the values.

<<Print Function>>=
public String toString() {
    StringBuilder sb = new StringBuilder();
    sb.append("{");
    SkipNode<E> x = header.forward[0];
    while (x != null) {
        sb.append(x.value);
        x = x.forward[0];
        if (x != null) sb.append(",");
    }
    sb.append("}");
    return sb.toString();
}

Search Algorithm

The search algorithm will return true if the given value is stored in the set, otherwise false. The pointer x is set to the header node of the list. The search begins at the topmost pointer of the header node according to the current level of the list. The forward pointers at this level are traversed until either a NULL pointer is encountered or a value greater than the value being searched for is encountered.
When this occurs we attempt to continue the search at the next lower level until we have traversed as far as possible. At this point x will point at the node with the greatest value that is less than the value being searched for. In the case that the search value is less than any value in the list, or if the list is empty, then x will still be pointing at the header node. Finally the level 0 pointer is traversed once. So now there are three possibilities for x:

- x is pointing at the node with the value we are searching for.
- x is pointing at a node with value greater than the value we are searching for.
- x is null.

In the first case the search was successful.

<<Find Node>>=
for (int i = level; i >= 0; i--) {
    while (x.forward[i] != null &&
           x.forward[i].value.compareTo(searchValue) < 0) {
        x = x.forward[i];
    }
}
x = x.forward[0];

<<Search Function>>=
public boolean contains(E searchValue) {
    SkipNode<E> x = header;
    Find Node
    return x != null && x.value.equals(searchValue);
}

Insert Algorithm

To insert a value we first perform the same kind of search as in the search algorithm. But, in order to insert a new node into the list, we must maintain an array of pointers to the nodes that must be updated.

<<Find and record updates>>=
for (int i = level; i >= 0; i--) {
    while (x.forward[i] != null &&
           x.forward[i].value.compareTo(value) < 0) {
        x = x.forward[i];
    }
    update[i] = x;
}
x = x.forward[0];

If a node is found with the same value as the value that is to be inserted then nothing should be done (a mathematical set cannot contain duplicates). Otherwise we must create a new node and insert it into the list.
<<Insert Function>>=
@SuppressWarnings("unchecked")
public void insert(E value) {
    SkipNode<E> x = header;
    SkipNode<E>[] update = new SkipNode[MAX_LEVEL + 1];
    Find and record updates
    if (x == null || !x.value.equals(value)) {
        int lvl = randomLevel();
        Record header updates
        Insert new node
    }
}

If the level of the new node is greater than the current level of the list, then the remaining entries of the update vector must point at the header node, and the level of the list must be set to the new level.

<<Record header updates>>=
if (lvl > level) {
    for (int i = level + 1; i <= lvl; i++) {
        update[i] = header;
    }
    level = lvl;
}

Two things must be done to insert the node. We must make the new node x point at what the nodes in the update vector are currently pointing at. Then we update the nodes in the update vector to point at x.

<<Insert new node>>=
x = new SkipNode<E>(lvl, value);
for (int i = 0; i <= lvl; i++) {
    x.forward[i] = update[i].forward[i];
    update[i].forward[i] = x;
}

Delete Algorithm

The deletion algorithm starts the same way as the insertion algorithm. We declare an update array and then search for the node to be deleted. If the node is found we set the nodes in the update array to point at what x is pointing at. This effectively removes x from the list, and so the memory occupied by the node may be freed.

<<Delete Function>>=
@SuppressWarnings("unchecked")
public void delete(E value) {
    SkipNode<E> x = header;
    SkipNode<E>[] update = new SkipNode[MAX_LEVEL + 1];
    Find and record updates
    // Guard against the value not being present; x may be null here
    if (x != null && x.value.equals(value)) {
        Remove x from list
        Decrease list level
    }
}

<<Remove x from list>>=
for (int i = 0; i <= level; i++) {
    if (update[i].forward[i] != x) break;
    update[i].forward[i] = x.forward[i];
}

After deleting the node we must check to see if the level of the list must be lowered.

<<Decrease list level>>=
while (level > 0 && header.forward[level] == null) {
    level--;
}

Main function

For simple illustration purposes we create a SkipSet and perform some tests.
<<Main Function>>=
public static void main(String[] args) {
    SkipSet<Integer> ss = new SkipSet<Integer>();
    System.out.println(ss);
    ss.insert(5);
    ss.insert(10);
    ss.insert(7);
    ss.insert(7);
    ss.insert(6);
    if (ss.contains(7)) {
        System.out.println("7 is in the list");
    }
    System.out.println(ss);
    ss.delete(7);
    System.out.println(ss);
    if (!ss.contains(7)) {
        System.out.println("7 has been deleted");
    }
}
http://en.literateprograms.org/Skip_list_(Java)
Condor allows users to access a wide variety of machines distributed around the world. The Java Virtual Machine (JVM) provides a uniform platform on any machine, regardless of the machine's architecture or operating system. The Condor Java universe brings together these two features to create a distributed, homogeneous computing environment.

Compiled Java programs can be submitted to Condor, and Condor can execute the programs on any machine in the pool that will run the Java Virtual Machine.

The condor_status command can be used to see a list of machines in the pool for which Condor can use the Java Virtual Machine.

% condor_status -java

Name          JavaVendor  Ver    State     Activity LoadAv Mem  ActvtyTime
coral.cs.wisc Sun Microsy 1.2.2  Unclaimed Idle     0.000  511  0+02:28:04
doc.cs.wisc.e Sun Microsy 1.2.2  Unclaimed Idle     0.000  511  0+01:05:04
dsonokwa.cs.w Sun Microsy 1.2.2  Unclaimed Idle     0.000  511  0+01:05:04
...

If there is no output from the condor_status command, then Condor does not know the location details of the Java Virtual Machine on machines in the pool, or no machines have Java correctly installed. In this case, contact your system administrator or see section 3.13 for more information on getting Condor to work together with Java.

Here is a complete, if simple, example. Start with a simple Java program, Hello.java:

public class Hello {
    public static void main( String [] args ) {
        System.out.println("Hello, world!\n");
    }
}

Build this program using your Java compiler. On most platforms, this is accomplished with the command

javac Hello.java

Submission to Condor requires a submit description file. If submitting where files are accessible using a shared file system, this simple submit description file works:

####################
#
# Example 1
# Execute a single Java class
#
####################

universe   = java
executable = Hello.class
arguments  = Hello
output     = Hello.output
error      = Hello.error
queue

The Java universe must be explicitly selected.
The main class of the program is given in the executable statement. This is a file name which contains the entry point of the program. The name of the main class (not a file name) must be specified as the first argument to the program.

If submitting the job where a shared file system is not accessible, the submit description file becomes:

####################
#
# Example 1
# Execute a single Java class,
# not on a shared file system
#
####################

universe   = java
executable = Hello.class
arguments  = Hello
output     = Hello.output
error      = Hello.error
should_transfer_files   = YES
when_to_transfer_output = ON_EXIT
queue

For more information about using Condor's file transfer mechanisms, see section 2.5.4.

To submit the job, where the submit description file is named Hello.cmd, execute

condor_submit Hello.cmd

To monitor the job, the commands condor_q and condor_rm are used as with all jobs.

For jobs packaged in JAR files, the JAR files are named with the jar_files command:

executable =
jar_files  = Library.jar

Note that the JVM must know whether it is receiving JAR files or class files. Therefore, Condor must also be informed, in order to pass the information on to the JVM. That is why there is a difference in submit description file commands for the two ways of specifying files (transfer_input_files and jar_files).

If a job has more sophisticated I/O requirements that cannot be met by Condor's file transfer mechanism, then the Chirp facility may provide a solution. Chirp has two advantages over simple, whole-file transfers. First, it permits the input files to be decided upon at run-time rather than submit time, and second, it permits partial-file I/O with results that can be seen as the program executes. However, small changes to the program are required in order to take advantage of Chirp. Depending on the style of the program, use either Chirp I/O streams or UNIX-like I/O functions. Chirp I/O streams are the easiest way to get started.
Modify the program to use the objects ChirpInputStream and ChirpOutputStream instead of FileInputStream and FileOutputStream. These classes are completely documented in the Condor Software Developer's Kit (SDK). Here is a simple code example:

import java.io.*;
import edu.wisc.cs.condor.chirp.*;

public class TestChirp {
    public static void main( String args[] ) {
        try {
            BufferedReader in = new BufferedReader(
                new InputStreamReader(
                    new ChirpInputStream("input")));
            PrintWriter out = new PrintWriter(
                new OutputStreamWriter(
                    new ChirpOutputStream("output")));
            while(true) {
                String line = in.readLine();
                if(line==null) break;
                out.println(line);
            }
            out.close();
        } catch( IOException e ) {
            System.out.println(e);
        }
    }
}

To perform UNIX-like I/O with Chirp, create a ChirpClient object. This object supports familiar operations such as open, read, write, and close. Exhaustive detail of the methods may be found in the Condor SDK, but here is a brief example:

import java.io.*;
import edu.wisc.cs.condor.chirp.*;

public class TestChirp {
    public static void main( String args[] ) {
        try {
            ChirpClient client = new ChirpClient();
            String message = "Hello, world!\n";
            byte [] buffer = message.getBytes();

            // Note that we should check that actual==length.
            // However, skip it for clarity.
            int fd = client.open("output","wct",0777);
            int actual = client.write(fd,buffer,0,buffer.length);
            client.close(fd);

            client.rename("output","output.new");
            client.unlink("output.new");
        } catch( IOException e ) {
            System.out.println(e);
        }
    }
}

Regardless of which I/O style is used, the Chirp library must be specified and included with the job. The Chirp JAR (Chirp.jar) is found in the lib directory of the Condor installation. Copy it into your working directory in order to compile the program after modification to use Chirp I/O.

% condor_config_val LIB
/usr/local/condor/lib
% cp /usr/local/condor/lib/Chirp.jar .

Rebuild the program with the Chirp JAR file in the class path.

% javac -classpath Chirp.jar:. TestChirp.java

The Chirp JAR file must be specified in the submit description file. Here is an example submit description file that works for both of the given test programs:

universe   = java
executable = TestChirp.class
arguments  = TestChirp
jar_files  = Chirp.jar
queue
http://www.cs.wisc.edu/condor/manual/v7.1/2_8Java_Applications.html
Basic Functionality¶ Prerequisites Outcomes Be familiar with datetime Use built-in aggregation functions and be able to create your own and apply them using agg Use built-in Series transformation functions and be able to create your own and apply them using apply Use built-in scalar transformation functions and be able to create your own and apply them using applymap Be able to select subsets of the DataFrame using boolean selection Know what the “want operator” is and how to apply it Data US state unemployment data from Bureau of Labor Statistics # Uncomment following line to install on colab #! pip install State Unemployment Data¶ In this lecture, we will use unemployment data by state at a monthly frequency. import pandas as pd %matplotlib inline pd.__version__ '1.4.4' First, we will download the data directly from a url and read it into a pandas DataFrame. ## Load up the data -- this will take a couple seconds url = "" unemp_raw = pd.read_csv(url, parse_dates=["Date"]) The pandas read_csv will determine most datatypes of the underlying columns. The exception here is that we need to give pandas a hint so it can load up the Date column as a Python datetime type: the parse_dates=["Date"]. We can see the basic structure of the downloaded data by getting the first 5 rows, which directly matches the underlying CSV file. unemp_raw.head() Note that a row has a date, state, labor force size, and unemployment rate. For our analysis, we want to look at the unemployment rate across different states over time, which requires a transformation of the data similar to an Excel pivot-table. # Don't worry about the details here quite yet unemp_all = ( unemp_raw .reset_index() .pivot_table(index="Date", columns="state", values="UnemploymentRate") ) unemp_all.head() 5 rows × 50 columns Finally, we can filter it to look at a subset of the columns (i.e. “state” in this case). 
states = [
    "Arizona", "California", "Florida", "Illinois",
    "Michigan", "New York", "Texas"
]
unemp = unemp_all[states]
unemp.head()

When plotting, a DataFrame knows the column and index names.

unemp.plot(figsize=(8, 6))

Exercise

See exercise 1 in the exercise list.
For example: Mean ( mean) Variance ( var) Standard deviation ( std) Minimum ( min) Median ( median) Maximum ( max) etc… Note When looking for common operations, using “tab completion” goes a long way. unemp.mean() state Arizona 6.301389 California 7.299074 Florida 6.048611 Illinois 6.822685 Michigan 7.492593 New York 6.102315 Texas 5.695370 dtype: float64 As seen above, the aggregation’s default is to aggregate each column. However, by using the axis keyword argument, you can do aggregations by row as well. unemp.var(axis=1).head() Date 2000-01-01 0.352381 2000-02-01 0.384762 2000-03-01 0.364762 2000-04-01 0.353333 2000-05-01 0.294762 dtype: float64 Writing Your Own Aggregation¶ The built-in aggregations will get us pretty far in our analysis, but sometimes we need more flexibility. We can have pandas perform custom aggregations by following these two steps: Write a Python function that takes a Seriesas an input and outputs a single value. Call the aggmethod with our new function as an argument. For example, below, we will classify states as “low unemployment” or “high unemployment” based on whether their mean unemployment level is above or below 6.5. # # Step 1: We write the (aggregation) function that we'd like to use # def high_or_low(s): """ This function takes a pandas Series object and returns high if the mean is above 6.5 and low if the mean is below 6.5 """ if s.mean() < 6.5: out = "Low" else: out = "High" return out # # Step 2: Apply it via the agg method. # unemp.agg(high_or_low) state Arizona Low California High Florida Low Illinois High Michigan High New York Low Texas Low dtype: object # How does this differ from unemp.agg(high_or_low)? unemp.agg(high_or_low, axis=1).head() Date 2000-01-01 Low 2000-02-01 Low 2000-03-01 Low 2000-04-01 Low 2000-05-01 Low dtype: object Notice that agg can also accept multiple functions at once. unemp.agg([min, max, high_or_low]) Exercise See exercise 2 in the exercise list. 
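The same two-step agg pattern works on any DataFrame, not just the unemployment data. As a self-contained sketch (the tiny DataFrame below is made-up sample data, not the dataset used in this lecture), a custom aggregation function can be applied column by column:

```python
import pandas as pd

# Hypothetical sample data standing in for the unemployment DataFrame
df = pd.DataFrame({
    "A": [4.0, 5.0, 9.0],
    "B": [7.0, 8.0, 9.0],
})

# Step 1: an aggregation function -- takes a Series, returns one value
def high_or_low(s):
    return "Low" if s.mean() < 6.5 else "High"

# Step 2: apply it with agg; the result has one label per column
labels = df.agg(high_or_low)
print(labels["A"])  # column A's mean is 6.0, so "Low"
print(labels["B"])  # column B's mean is 8.0, so "High"
```

Passing `axis=1` to `agg` would instead apply `high_or_low` to each row, exactly as with the built-in aggregations above.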
Transforms¶ Many analytical operations do not necessarily involve an aggregation. The output of a function applied to a Series might need to be a new Series. Some examples: Compute the percentage change in unemployment from month to month. Calculate the cumulative sum of elements in each column. Built-in Transforms¶ pandas comes with many transform functions including: Cumulative sum/max/min/product ( cum(sum|min|max|prod)) Difference ( diff) Elementwise addition/subtraction/multiplication/division ( +, -, *, /) Percent change ( pct_change) Number of occurrences of each distinct value ( value_counts) Absolute value ( abs) Again, tab completion is helpful when trying to find these functions. unemp.head() unemp.pct_change(fill_method = None).head() # Skip calculation for missing data unemp.diff().head() Transforms can be split into to several main categories: Series transforms: functions that take in one Series and produce another Series. The index of the input and output does not need to be the same. Scalar transforms: functions that take a single value and produce a single value. An example is the absmethod, or adding a constant to each value of a Series. Custom Series Transforms¶ pandas also simplifies applying custom Series transforms to a Series or the columns of a DataFrame. The steps are: Write a Python function that takes a Series and outputs a new Series. Pass our new function as an argument to the applymethod (alternatively, the transformmethod). As an example, we will standardize our unemployment data to have mean 0 and standard deviation 1. After doing this, we can use an aggregation to determine at which date the unemployment rate is most different from “normal times” for each state. 
# # Step 1: We write the Series transform function that we'd like to use # def standardize_data(x): """ Changes the data in a Series to become mean 0 with standard deviation 1 """ mu = x.mean() std = x.std() return (x - mu)/std # # Step 2: Apply our function via the apply method. # std_unemp = unemp.apply(standardize_data) std_unemp.head() # Takes the absolute value of all elements of a function abs_std_unemp = std_unemp.abs() abs_std_unemp.head() # find the date when unemployment was "most different from normal" for each State def idxmax(x): # idxmax of Series will return index of maximal value return x.idxmax() abs_std_unemp.agg(idxmax) state Arizona 2009-11-01 California 2010-03-01 Florida 2010-01-01 Illinois 2009-12-01 Michigan 2009-06-01 New York 2009-11-01 Texas 2009-08-01 dtype: datetime64[ns] Custom Scalar Transforms¶ As you may have predicted, we can also apply custom scalar transforms to our pandas data. To do this, we use the following pattern: Define a Python function that takes in a scalar and produces a scalar. Pass this function as an argument to the applymapSeries or DataFrame method. Complete the exercise below to practice writing and using your own scalar transforms. Exercise See exercise 3 in the exercise list. Boolean Selection¶ We have seen how we can select subsets of data by referring to the index or column names. However, we often want to select based on conditions met by the data itself. Some examples are: Restrict analysis to all individuals older than 18. Look at data that corresponds to particular time periods. Analyze only data that corresponds to a recession. Obtain data for a specific product or customer ID. We will be able to do this by using a Series or list of boolean values to index into a Series or DataFrame. Let’s look at some examples. 
unemp_small = unemp.head() # Create smaller data so we can see what's happening unemp_small # list of booleans selects rows unemp_small.loc[[True, True, True, False, False]] # second argument selects columns, the ``:`` means "all". # here we use it to select all columns unemp_small.loc[[True, False, True, False, True], :] # can use booleans to select both rows and columns unemp_small.loc[[True, True, True, False, False], [True, False, False, False, False, True, True]] Creating Boolean DataFrames/Series¶ We can use conditional statements to construct Series of booleans from our data. unemp_small["Texas"] < 4.5 Date 2000-01-01 False 2000-02-01 False 2000-03-01 False 2000-04-01 True 2000-05-01 True Name: Texas, dtype: bool Once we have our Series of bools, we can use it to extract subsets of rows from our DataFrame. unemp_small.loc[unemp_small["Texas"] < 4.5] unemp_small["New York"] > unemp_small["Texas"] Date 2000-01-01 True 2000-02-01 True 2000-03-01 True 2000-04-01 True 2000-05-01 True dtype: bool big_NY = unemp_small["New York"] > unemp_small["Texas"] unemp_small.loc[big_NY] Multiple Conditions¶ In the boolean section of the basics lecture, we saw that we can use the words and and or to combine multiple booleans into a single bool. Recall: True and False -> False True and True -> True False and False -> False True or False -> True True or True -> True False or False -> False We can do something similar in pandas, but instead of bool1 and bool2 we write: (bool_series1) & (bool_series2) Likewise, instead of bool1 or bool2 we write: (bool_series1) | (bool_series2) small_NYTX = (unemp_small["Texas"] < 4.7) & (unemp_small["New York"] < 4.7) small_NYTX Date 2000-01-01 False 2000-02-01 False 2000-03-01 True 2000-04-01 True 2000-05-01 True dtype: bool unemp_small[small_NYTX] isin¶ Sometimes, we will want to check whether a data point takes on one of a several fixed values. 
We could do this by writing (df["x"] == val_1) | (df["x"] == val_2) (like we did above), but there is a better way: the .isin method unemp_small["Michigan"].isin([3.3, 3.2]) Date 2000-01-01 True 2000-02-01 True 2000-03-01 True 2000-04-01 True 2000-05-01 False Name: Michigan, dtype: bool # now select full rows where this Series is True unemp_small.loc[unemp_small["Michigan"].isin([3.3, 3.2])] .any and .all¶ Recall from the boolean section of the basics lecture that the Python functions any and all are aggregation functions that take a collection of booleans and return a single boolean. any returns True whenever at least one of the inputs are True while all is True only when all the inputs are True. Series and DataFrames with dtype bool have .any and .all methods that apply this logic to pandas objects. Let’s use these methods to count how many months all the states in our sample had high unemployment. As we work through this example, consider the “want operator”, a helpful concept from Nobel Laureate Tom Sargent for clearly stating the goal of our analysis and determining the steps necessary to reach the goal. We always begin by writing Want: followed by what we want to accomplish. In this case, we would write: Want: Count the number of months in which all states in our sample had unemployment above 6.5% After identifying the want, we work backwards to identify the steps necessary to accomplish our goal. So, starting from the result, we have: Sum the number of Truevalues in a Series indicating dates for which all states had high unemployment. Build the Series used in the last step by using the .allmethod on a DataFrame containing booleans indicating whether each state had high unemployment at each date. Build the DataFrame used in the previous step using a >comparison. 
Now that we have a clear plan, let’s follow through and apply the want operator: # Step 3: construct the DataFrame of bools high = unemp > 6.5 high.head() # Step 2: use the .all method on axis=1 to get the dates where all states have a True all_high = high.all(axis=1) all_high.head() Date 2000-01-01 False 2000-02-01 False 2000-03-01 False 2000-04-01 False 2000-05-01 False dtype: bool # Step 1: Call .sum to add up the number of True values in `all_high` # (note that True == 1 and False == 0 in Python, so .sum will count Trues) msg = "Out of {} months, {} had high unemployment across all states" print(msg.format(len(all_high), all_high.sum())) Out of 216 months, 41 had high unemployment across all states Exercise See exercise 4 in the exercise list. Exercises¶ Exercise 1¶ Looking at the displayed DataFrame above, can you identify the index? The columns? You can use the cell below to verify your visual intuition. # your code here Exercise 2¶ Do the following exercises in separate code cells below: At each date, what is the minimum unemployment rate across all states in our sample? What was the median unemployment rate in each state? What was the maximum unemployment rate across the states in our sample? What state did it happen in? In what month/year was this achieved? Hint What Python type (not dtype) is returned by the aggregation? Hint Read documentation for the method idxmax. Classify each state as high or low volatility based on whether the variance of their unemployment is above or below 4. # min unemployment rate by state # median unemployment rate by state # max unemployment rate across all states and Year # low or high volatility Exercise 3¶ Imagine that we want to determine whether unemployment was high (> 6.5), medium (4.5 < x <= 6.5), or low (<= 4.5) for each state and each month. Write a Python function that takes a single number as an input and outputs a single string noting if that number is high, medium, or low. 
Pass your function to applymap(quiz: why applymapand not aggor apply?) and save the result in a new DataFrame called unemp_bins. (Challenging) This exercise has multiple parts: Use another transform on unemp_binsto count how many times each state had each of the three classifications. Hint Will this value counting function be a Series or scalar transform? Hint Try googling “pandas count unique value” or something similar to find the right transform. Construct a horizontal bar chart of the number of occurrences of each level with one bar per state and classification (21 total bars). (Challenging) Repeat the previous step, but count how many states had each classification in each month. Which month had the most states with high unemployment? What about medium and low? # Part 1: Write a Python function to classify unemployment levels. # Part 2: Pass your function from part 1 to applymap unemp_bins = unemp.applymap#replace this comment with your code!! # Part 3: Count the number of times each state had each classification. ## then make a horizontal bar chart here # Part 4: Apply the same transform from part 4, but to each date instead of to each state. Exercise 4¶ For a single state of your choice, determine what the mean unemployment is during “Low”, “Medium”, and “High” unemployment times (recall your unemp_binsDataFrame from the exercise above). Think about how you would do this for all the states in our sample and write your thoughts… We will soon learn tools that will greatly simplify operations like this that operate on distinct groups of data at a time. Which states in our sample performs the best during “bad times?” To determine this, compute the mean unemployment for each state only for months in which the mean unemployment rate in our sample is greater than 7.
https://datascience.quantecon.org/pandas/basics.html
Pointer to Pointer (Double Pointer) in C

int **pr;

There should be two *s before a double pointer variable. Consider the figure and program below to understand this concept better.

Diagram – As per the figure, pr2 is a pointer for num (as pr2 holds the address of variable num); similarly, pr1 is a pointer for another pointer, pr2 (as pr1 holds the address of pointer pr2). A pointer which points to another pointer is known as a double pointer. In this example, pr1 is a double pointer.

Values from the above diagram –
Variable num has address: XX771230
Address of Pointer pr1 is: XX661111
Address of Pointer pr2 is: 66X123X1

Here in this example pr1 is a double pointer.

#include <stdio.h>

int main()
{
    int num = 123;

    /* Pointer for num */
    int *pr2;

    /* Double pointer for pr2 */
    int **pr1;

    /* Read the address of variable num and
     * store it in pointer pr2 */
    pr2 = &num;

    /* Store the address of pointer pr2 into
     * another pointer, pr1 */
    pr1 = &pr2;

    /* Ways to find the address of num */
    printf("\n Address of num is: %u", &num);
    printf("\n Address of num using pr2 is: %u", pr2);
    printf("\n Address of num using pr1 is: %u", *pr1);

    /* Find value of pointer */
    printf("\n Value of Pointer pr2 is: %u", pr2);
    printf("\n Value of Pointer pr2 using pr1 is: %u", *pr1);

    /* Ways to find address of pointer */
    printf("\n Address of Pointer pr2 is: %u", &pr2);
    printf("\n Address of Pointer pr2 using pr1 is: %u", pr1);

    /* Double pointer value and address */
    printf("\n Value of Pointer pr1 is: %u", pr1);
    printf("\n Address of Pointer pr1 is: %u", &pr1);

    return 0;
}

First observe the output; you may find some of it surprising at first. The relationships between the pointers clarify things in a better way.
http://beginnersbook.com/2014/01/c-pointer-to-pointer/
This topic will show you how to start developing cross-platform apps on your machine using the .NET Core CLI tools. If you're unfamiliar with the .NET Core CLI toolset, read the .NET Core SDK overview. Prerequisites - .NET Core SDK 1.0. - A text editor or code editor of your choice. Hello, Console App! You can view or download the sample code from the dotnet/docs GitHub repository. For download instructions, see Samples and Tutorials. Open a command prompt and create a folder named Hello. Navigate to the folder you created and type the following: $ dotnet new console $ dotnet restore $ dotnet run Let's do a quick walkthrough: $ dotnet new console dotnet new creates an up-to-date Hello.csproj project file with the dependencies necessary to build a console app. It also creates a Program.cs, a basic file containing the entry point for the application. Hello.csproj: <Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <OutputType>Exe</OutputType> <TargetFramework>netcoreapp1.0</TargetFramework> </PropertyGroup> </Project> The project file specifies everything that's needed to restore dependencies and build the program. - The OutputType tag specifies that we're building an executable, in other words a console application. - The TargetFramework tag specifies what .NET implementation we're targeting. In an advanced scenario, you can specify multiple target frameworks and build for all of them in a single operation. In this tutorial, we'll stick to building only for .NET Core 1.0. Program.cs: using System; namespace Hello { class Program { static void Main(string[] args) { Console.WriteLine("Hello World!"); } } } The program starts by using System, which means "bring everything in the System namespace into scope for this file". The System namespace includes basic constructs such as string, or numeric types. We then define a namespace called Hello. You can change this to anything you want.
A class named Program is defined within that namespace, with a Main method that takes an array of strings as its argument. This array contains the list of arguments passed in when the compiled program is called. As it is, this array is not used: all the program does is write "Hello World!" to the console. Later, we'll make changes to the code that will make use of this argument. $ dotnet restore dotnet restore calls into NuGet to restore the tree of dependencies. The project.assets.json file is a persisted and complete set of the graph of NuGet dependencies and other information describing an app. This file is read by other tools, such as dotnet build and dotnet run, enabling them to process the source code with a correct set of NuGet dependencies and binding resolutions. $ dotnet run dotnet run calls dotnet build to ensure that the build targets have been built, and then calls dotnet <assembly.dll> to run the target application. $ dotnet run Hello World! Alternatively, you can also execute dotnet build to compile the code without running the built console application. This results in a compiled application as a DLL file that can be run with dotnet bin\Debug\netcoreapp1.0\Hello.dll on Windows (use / for non-Windows systems). You may also specify arguments to the application as you'll see later in the topic. $ dotnet bin\Debug\netcoreapp1.0\Hello.dll Hello World! As an advanced scenario, it's possible to build the application as a self-contained set of platform-specific files that can be deployed and run on a machine that doesn't necessarily have .NET Core installed. See .NET Core Application Deployment for details. Augmenting the program Let's change the program a bit. Fibonacci numbers are fun, so let's add that, in addition to using the argument to greet the person running the app.
Replace the contents of your Program.cs file with the following code: using System; namespace Hello { class Program { static void Main(string[] args) { if (args.Length > 0) { Console.WriteLine($"Hello {args[0]}!"); } else { Console.WriteLine("Hello!"); } Console.WriteLine("Fibonacci Numbers 1-15:"); for (int i = 0; i < 15; i++) { Console.WriteLine($"{i + 1}: {FibonacciNumber(i)}"); } } static int FibonacciNumber(int n) { int a = 0; int b = 1; int tmp; for (int i = 0; i < n; i++) { tmp = a; a = b; b += tmp; } return a; } } } Execute dotnet build to compile the changes. Run the program, passing a parameter to the app: $ dotnet run -- John Hello John! Fibonacci Numbers 1-15: 1: 0 2: 1 3: 1 4: 2 5: 3 6: 5 7: 8 8: 13 9: 21 10: 34 11: 55 12: 89 13: 144 14: 233 15: 377 And that's it! You can augment Program.cs any way you like. Working with multiple files Single files are fine for simple one-off programs, but if you're building a more complex app, you're probably going to have multiple source files in your project. Let's build off of the previous Fibonacci example by caching some Fibonacci values and adding some recursive features. Add a new file inside the Hello directory named FibonacciGenerator.cs with the following code: using System; using System.Collections.Generic; namespace Hello { public class FibonacciGenerator { private Dictionary<int, int> _cache = new Dictionary<int, int>(); private int Fib(int n) => n < 2 ?
n : FibValue(n - 1) + FibValue(n - 2); private int FibValue(int n) { if (!_cache.ContainsKey(n)) { _cache.Add(n, Fib(n)); } return _cache[n]; } public IEnumerable<int> Generate(int n) { for (int i = 0; i < n; i++) { yield return FibValue(i); } } } } Change the Main method in your Program.cs file to instantiate the new class and call its method as in the following example: using System; namespace Hello { class Program { static void Main(string[] args) { var generator = new FibonacciGenerator(); foreach (var digit in generator.Generate(15)) { Console.WriteLine(digit); } } } } Execute dotnet build to compile the changes. Run your app by executing dotnet run. The following shows the program output: 0 1 1 2 3 5 8 13 21 34 55 89 144 233 377 And that's it! Now, you can start using the basic concepts learned here to create your own programs. Note that the commands and steps shown in this tutorial to run your application are used during development time only. Once you're ready to deploy your app, you'll want to take a look at the different deployment strategies for .NET Core apps and the dotnet publish command. See also Organizing and testing projects with the .NET Core CLI tools
https://docs.microsoft.com/en-us/dotnet/core/tutorials/using-with-xplat-cli
A* path finding with Dart A simple A* algorithm implemented in Dart. An example of path finding. Last updated 2013-11. The original 2D algorithm was ported from this JavaScript example. No effort has been made to optimize it. A more generic A* algorithm was added in November 2013. That one is fairly optimized. See LICENSE file for license details. See running example at Example There are two separate A* algorithms in this package. One of them, aStar2D, is specific to 2D grid maps. The usage can be as simple as: import 'package:a_star/a_star_2d.dart'; main() { String textMap = """ sooooooo oxxxxxoo oxxoxooo oxoogxxx """; Maze maze = new Maze.parse(textMap); Queue<Tile> solution = aStar2D(maze); } The second algorithm is generic and works on any graph (e.g. 3D grids, mesh networks). The usage is best explained with an example (details below): import 'package:a_star/a_star.dart'; class TerrainTile extends Object with Node { // ... } class TerrainMap implements Graph<TerrainTile> { // Must implement 4 methods. Iterable<T> get allNodes => /* ... */ num getDistance(T a, T b) => /* ... */ num getHeuristicDistance(T a, T b) => /* ... */ Iterable<T> getNeighboursOf(T node) => /* ... */ } main() { var map = new TerrainMap(); var pathFinder = new AStar(map); var start = /* ... */ var goal = /* ... */ pathFinder.findPath(start, goal) .then((path) => print("The best path from $start to $goal is: $path")); } Explanation: Here, we have a TerrainMap of TerrainTile nodes. The only requirements are that TerrainMap implements Graph (4 methods) and TerrainTile is extended with the Node mixin (no additional work). Then, we can simply instantiate the A* algorithm by new AStar(map) and find paths between two nodes by calling the findPath(start, goal) method. Normally, we would only create the AStar instance once and then reuse it throughout our program, which saves work. You can also use findPathSync(start, goal) if you don't need to worry about blocking.
All three classes (AStar, Graph and Node) are well documented in lib/a_star.dart. For a complete example, see the minimal unit tests or one of the two generalized benchmarks (benchmark.dart or benchmark_generalized_2d). Reporting bugs Please file bugs at Contributors
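The Graph/Node split described above is language-agnostic. As a rough illustration of what the generic algorithm does internally (not part of this package — the grid, walls, and Manhattan heuristic below are invented), here is a minimal dependency-free A* in Python:

```python
import heapq

def a_star(start, goal, neighbours, distance, heuristic):
    """Generic A*: returns the best path from start to goal as a list of nodes."""
    open_set = [(heuristic(start, goal), 0, start, [start])]
    best_g = {start: 0}  # cheapest known cost to each node
    while open_set:
        f, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        for nxt in neighbours(node):
            g2 = g + distance(node, nxt)
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(open_set,
                               (g2 + heuristic(nxt, goal), g2, nxt, path + [nxt]))
    return None  # goal unreachable

# Tiny 3x3 grid example: nodes are (x, y) tuples, walls are blocked cells.
walls = {(1, 0), (1, 1)}

def neighbours(n):
    x, y = n
    cand = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return [(cx, cy) for cx, cy in cand
            if 0 <= cx < 3 and 0 <= cy < 3 and (cx, cy) not in walls]

manhattan = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])

path = a_star((0, 0), (2, 0), neighbours, lambda a, b: 1, manhattan)
print(path)
```

The `Graph` methods from the Dart API (`getNeighboursOf`, `getDistance`, `getHeuristicDistance`) correspond directly to the `neighbours`, `distance`, and `heuristic` callables here.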
https://www.dartdocs.org/documentation/a_star/0.3.0/index.html
in reply to Perl object memory overhead There's no difference, since an allocated hash is already set up for being blessed. push @l,{} for 1..3e7; # allocate 3 mill. anon hashes system "ps -o vsz= -p $$"; # get memory usage of process $_ = bless $_ for @l; # bless each hash into 'main' system "ps -o vsz= -p $$"; # again get memory usage __END__ 2650968 2650968 which shows: making an anonymous hash into an object means blessing it into a namespace. That operation signifies no overhead as far as memory is concerned, since the namespace bits are already allocated in the first place. update: added comments update: AnomalousMonk correctly noted that 3e7 for 3 millions is wrong by an order of magnitude. Who else noticed? ;)
http://www.perlmonks.org/index.pl?node_id=1080023
Logging¶ Mbientlab boards are equipped with flash memory that can store data if there is no connection with an Android device. While the RouteComponent directs data to the logger, the Logging interface provides the functions for controlling and configuring the logger. import com.mbientlab.metawear.module.Logging; final Logging logging = board.getModule(Logging.class); Start the Logger¶ To start logging, call the start method. If you wish to overwrite existing entries, pass true to the method. When you are done logging, call stop. // start logging, if log is full, no new data will be added logging.start(false); // start logging, if log is full, overwrite existing data logging.start(true); // stop the logger logging.stop(); Note for the MMS¶ The MMS (MetaMotionS) board uses NAND flash memory to store data on the device itself. The NAND memory stores data in pages that are 512 entries large. When data is retrieved, it is retrieved in page sized chunks. Before doing a full download of the log memory on the MMS, the final set of data needs to be written to the NAND flash before it can be downloaded as a page. To do this, you must call the function: logging.flushPage(board); This should not be called if you are still logging data. Downloading Data¶ When you are ready to retrieve the data, call downloadAsync; the method returns a Task object that will be completed when the download finishes. There are variants of downloadAsync that provide progress updates and error handling during the download.
// download log data, receiving progress updates along the way
// (handler reconstructed here; check the Logging Javadoc for the exact signature)
logging.downloadAsync(100, new Logging.LogDownloadUpdateHandler() {
    @Override
    public void receivedUpdate(long nEntriesLeft, long totalEntries) {
        Log.i("MainActivity", "Progress Update = " + nEntriesLeft + "/" + totalEntries);
    }
});
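Setting the MetaWear classes aside, the progress-callback shape used by downloadAsync is a general pattern: drain a fixed number of entries in chunks and notify a handler after each chunk. A self-contained sketch in plain Java (all names below are invented for illustration, not part of the MetaWear API):

```java
import java.util.ArrayList;
import java.util.List;

public class DownloadDemo {
    /** Invented stand-in for a progress handler like Logging.LogDownloadUpdateHandler. */
    interface ProgressHandler {
        void receivedUpdate(long entriesLeft, long totalEntries);
    }

    /** Invented stand-in for a logger download: drains totalEntries in roughly
        `notifications` chunks, reporting progress after each chunk. */
    static List<Long> download(long totalEntries, int notifications, ProgressHandler handler) {
        List<Long> reported = new ArrayList<>();
        long chunk = Math.max(1, totalEntries / notifications);
        for (long left = totalEntries; left > 0; left -= chunk) {
            long remaining = Math.max(0, left - chunk);
            handler.receivedUpdate(remaining, totalEntries);
            reported.add(remaining);
        }
        return reported;
    }

    public static void main(String[] args) {
        List<Long> seen = download(10, 5,
                (left, total) -> System.out.println("Progress Update = " + left + "/" + total));
        System.out.println("updates sent: " + seen.size());
    }
}
```

The real API additionally completes a Task when the transfer finishes, which is where per-download error handling would attach.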
https://mbientlab.com/androiddocs/latest/logging.html
jGuru Forums Posted By: dibakar_ray Posted On: Thursday, January 1, 2004 03:12 AM I am in the process of designing an application. In the application I am planning to use a bean to store some key-value pairs. For example Name -> character, Address -> character, age -> integer etc. I need to render them at run time. In order to render them properly I need all the key-value pairs, so that I can format them properly. I thought of using some custom tags to render the content of the bean. But as I was going through the tag library tutorial, I found that attributes passed to the tag library can only be of type string. Moreover, as I don't know the number of key-value pairs in advance and they can be quite a few, passing them as an attribute will be difficult. So my question is - How to pass a set of key-value pairs to a custom tag library in JSP. Thanks Dibakar Re: Tag library and java beans Posted By: Anin_Mathen Posted On: Friday, January 2, 2004 09:19 AM
public class MenuTagExtraInfo extends TagExtraInfo {
    public VariableInfo[] getVariableInfo(TagData tagdata) {
        return (new VariableInfo[] {
            new VariableInfo(tagdata.getId(), "com.x.webui.MenuRenderer",
                             true, VariableInfo.NESTED)
        });
    }
}
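A common workaround for the string-only attribute limitation is to place the bean's pairs in a scoped java.util.Map (e.g. a request attribute) and have the tag handler iterate it when rendering. Independent of the JSP tag API, the rendering core reduces to something like this plain-Java sketch (class, method, and markup choices invented for illustration):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class PairRenderer {
    /** Render each key/value pair as one table row (invented markup). */
    static String render(Map<String, Object> pairs) {
        StringBuilder out = new StringBuilder("<table>");
        for (Map.Entry<String, Object> e : pairs.entrySet()) {
            out.append("<tr><td>").append(e.getKey())
               .append("</td><td>").append(e.getValue()).append("</td></tr>");
        }
        return out.append("</table>").toString();
    }

    public static void main(String[] args) {
        // LinkedHashMap preserves insertion order, so fields render predictably.
        Map<String, Object> bean = new LinkedHashMap<>();
        bean.put("Name", "Dibakar");
        bean.put("Age", 30);
        System.out.println(render(bean));
    }
}
```

Inside a real tag handler, the same loop would write through `pageContext.getOut()` instead of returning a String.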
http://www.jguru.com/forums/view.jsp?EID=1136541
In the previous article Seaborn Library for Data Visualization in Python: Part 1, we looked at how the Seaborn Library is used to plot distributional and categorical plots. In this article we will continue our discussion and will see some of the other functionalities offered by Seaborn to draw different types of plots. We will start our discussion with Matrix Plots. Matrix Plots Matrix plots are the type of plots that show data in the form of rows and columns. Heat maps are the prime examples of matrix plots. Heat Maps Heat maps are normally used to plot correlation between numeric columns in the form of a matrix. It is important to mention here that to draw matrix plots, you need to have meaningful information on rows as well as columns. Continuing with the theme from the last article, let's plot the first five rows of the Titanic dataset to see if both the rows and column headers have meaningful information. Execute the following script: import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns dataset = sns.load_dataset('titanic') dataset.head() In the output, you will see the following result: From the output, you can see that the column headers contain useful information such as passengers survived, their age, fare etc. However, the row headers only contain indexes 0, 1, 2, etc. To plot matrix plots, we need useful information on both columns and row headers. One way to do this is to call the corr() method on the dataset. The corr() function returns the correlation between all the numeric columns of the dataset. Execute the following script: dataset.corr() In the output, you will see that both the columns and the rows have meaningful header information, as shown below: Now to create a heat map with these correlation values, you need to call the heatmap() function and pass it your correlation dataframe.
Look at the following script: corr = dataset.corr() sns.heatmap(corr) The output looks like this: From the output, it can be seen that what heatmap essentially does is plot a box for every combination of row and column values. The color of the box depends upon the gradient. For instance, in the above image, if there is a high correlation between two features, the corresponding cell or box is white; on the other hand, if there is no correlation, the corresponding cell remains black. The correlation values can also be plotted on the heatmap by passing True for the annot parameter. Execute the following script to see this in action: corr = dataset.corr() sns.heatmap(corr, annot=True) Output: You can also change the color of the heatmap by passing an argument for the cmap parameter. For now, just look at the following script: corr = dataset.corr() sns.heatmap(corr, cmap='winter') The output looks like this: In addition to simply using correlation between all the columns, you can also use the pivot_table function to specify the index, the columns and the values that you want to see corresponding to the index and the columns. To see the pivot_table function in action, we will use the "flights" data set that contains the information about the year, the month and the number of passengers that traveled in that month. Execute the following script to import the data set and to see the first five rows of the dataset: import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns dataset = sns.load_dataset('flights') dataset.head() Output: Now using the pivot_table function, we can create a heat map that displays the number of passengers that traveled in a specific month of a specific year. To do so, we will pass month as the value for the index parameter. The index parameter corresponds to the rows. Next we need to pass year as the value for the columns parameter. And finally for the values parameter, we will pass the passengers column.
Execute the following script: data = dataset.pivot_table(index='month', columns='year', values='passengers') sns.heatmap(data) The output looks like this: It is evident from the output that in the early years the number of passengers who took the flights was low. As the years progress, the number of passengers increases. Currently, you can see that the boxes or the cells are overlapping in some cases and the distinction between the boundaries of the cells is not very clear. To create a clear boundary between the cells, you can make use of the linecolor and linewidths parameters. Take a look at the following script: data = dataset.pivot_table(index='month', columns='year', values='passengers') sns.heatmap(data, linecolor='blue', linewidths=1) In the script above, we passed "blue" as the value for the linecolor parameter, while the linewidths parameter is set to 1. In the output you will see a blue boundary around each cell: You can increase the value for the linewidths parameter if you want thicker boundaries. Cluster Map In addition to the heat map, another commonly used matrix plot is the cluster map. The cluster map basically uses Hierarchical Clustering to cluster the rows and columns of the matrix. Let's plot a cluster map for the number of passengers who traveled in a specific month of a specific year. Execute the following script: data = dataset.pivot_table(index='month', columns='year', values='passengers') sns.clustermap(data) To plot a cluster map, the clustermap function is used, and like the heat map function, the dataset passed should have meaningful headers for both rows and columns. The output of the script above looks like this: In the output, you can see months and years clustered together on the basis of the number of passengers that traveled in a specific month. With this, we conclude our discussion about the Matrix plots. In the next section we will start our discussion about the grid capabilities of the Seaborn library.
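The pivot_table reshaping that feeds heatmap can be sanity-checked without plotting. A sketch with a tiny hand-made frame (the passenger values here are invented for illustration):

```python
import pandas as pd

flights = pd.DataFrame({
    "year":       [1949, 1949, 1950, 1950],
    "month":      ["Jan", "Feb", "Jan", "Feb"],
    "passengers": [112, 118, 115, 126],
})

# Rows = month, columns = year, cell values = passengers.
# This months-by-years matrix is exactly what sns.heatmap(data) receives.
data = flights.pivot_table(index="month", columns="year", values="passengers")
print(data)
```

Each (month, year) pair becomes one cell of the heatmap, which is why both the row and column headers carry meaningful information.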
Seaborn Grids Grids in Seaborn allow us to manipulate the subplots depending upon the features used in the plots. Pair Grid In Part 1 of this article series, we saw how the pair plot can be used to draw scatter plots for all possible combinations of the numeric columns in the dataset. Let's revise the pair plot here before we move on to the pair grid. The dataset we are going to use for the pair grid section is the "iris" dataset, which is downloaded by default when you download the seaborn library. Execute the following script to load the iris dataset: import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns dataset = sns.load_dataset('iris') dataset.head() The first five rows of the iris dataset look like this: Now let's draw a pair plot on the iris dataset. Execute the following script: sns.pairplot(dataset) A snapshot of the output looks like this: Now let's plot a pair grid and see the difference between the pair plot and the pair grid. To create a pair grid, you simply have to pass the dataset to the PairGrid function, as shown below: sns.PairGrid(dataset) Output: In the output, you can see empty grids. This is essentially what the pair grid function does. It returns an empty set of grids for all the features in the dataset. Next, you need to call the map function on the object returned by the pair grid function and pass it the type of plot that you want to draw on the grids. Let's plot a scatter plot using the pair grid. grids = sns.PairGrid(dataset) grids.map(plt.scatter) The output looks like this: You can see scatter plots for all the combinations of numeric columns in the "iris" dataset. You can also plot different types of graphs on the same pair grid. For instance, if you want to plot a "distribution" plot on the diagonal, a "kdeplot" on the upper half of the diagonal, and a "scatter" plot on the lower half of the diagonal, you can use the map_diag, map_upper, and map_lower functions, respectively.
The type of plot to be drawn is passed as the parameter to these functions. Take a look at the following script: grids = sns.PairGrid(dataset) grids.map_diag(sns.distplot) grids.map_upper(sns.kdeplot) grids.map_lower(plt.scatter) The output of the script above looks like this: You can see the true power of the pair grid function from the image above. On the diagonals we have distribution plots, on the upper half we have the kernel density plots, while on the lower half we have the scatter plots. Facet Grids The facet grids are used to plot two or more than two categorical features against two or more than two numeric features. Let's plot a facet grid which plots the distributional plot of gender vs alive with respect to the age of the passengers. For this section, we will again use the Titanic dataset. Execute the following script to load the Titanic dataset: import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns dataset = sns.load_dataset('titanic') To draw facet grid, the FacetGrid() function is used. The first parameter to the function is the dataset, the second parameter col specifies the feature to plot on columns while the row parameter specifies the feature on the rows. The FacetGrid() function returns an object. Like the pair grid, you can use the map function to specify the type of plot you want to draw. Execute the following script: grid = sns.FacetGrid(data=dataset, col='alive', row='sex') grid.map(sns.distplot, 'age') In the above script, we plot the distributional plot for age on the facet grid. The output looks like this: From the output, you can see four plots. One for each combination of gender and survival of the passenger. The columns contain information about the survival while the rows contain information about the sex, as specified by the FacetGrid() function. The first row and first column contain age distribution of the passengers where sex is male and the passengers did not survive. 
The first row and second column contain the age distribution of the passengers where sex is male and the passengers survived. Similarly, the second row and first column contain the age distribution of the passengers where sex is female and the passengers did not survive, while the second row and second column contain the age distribution of the passengers where sex is female and the passengers survived. In addition to distributional plots for one feature, we can also plot scatter plots that involve two features on the facet grid. For instance, the following script plots the scatter plot for age and fare for both genders of the passengers who survived and who didn't. grid = sns.FacetGrid(data=dataset, col='alive', row='sex') grid.map(plt.scatter, 'age', 'fare') The output of the script above looks like this: Regression Plots Regression plots, as the name suggests, are used to perform regression analysis between two or more variables. In this section, we will study the linear model plot that plots a linear relationship between two variables along with the best-fit regression line depending upon the data. The dataset that we are going to use for this section is the "diamonds" dataset, which is downloaded by default with the seaborn library. Execute the following script to load the dataset: import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns dataset = sns.load_dataset('diamonds') dataset.head() The dataset looks like this: The dataset contains different features of a diamond such as weight in carats, color, clarity, price, etc. Let's plot a linear relationship between carat and price of the diamond. Ideally, the heavier the diamond is, the higher the price should be. Let's see if this is actually true based on the information available in the diamonds dataset. To plot the linear model, the lmplot() function is used.
The first parameter is the feature you want to plot on the x-axis, while the second variable is the feature you want to plot on the y-axis. The last parameter is the dataset. Execute the following script: sns.lmplot(x='carat', y='price', data=dataset) The output looks like this: You can also plot multiple linear models based on a categorical feature. The feature name is passed as the value to the hue parameter. For instance, if you want to plot multiple linear models for the relationship between the carat and price features, based on the cut of the diamond, you can use the lmplot function as follows: sns.lmplot(x='carat', y='price', data=dataset, hue='cut') The output looks like this: From the output, you can see that the linear relationship between the carat and the price of the diamond is steepest for the ideal cut diamond, as expected, and the linear model is shallowest for the fair cut diamond. In addition to plotting the data for the cut feature with different hues, we can also have one plot for each cut. To do so, you need to pass the column name to the col parameter. Take a look at the following script: sns.lmplot(x='carat', y='price', data=dataset, col='cut') In the output, you will see a separate column for each value in the cut column of the diamonds dataset, as shown below: You can also change the size and aspect ratio of the plots using the aspect and size parameters. Take a look at the following script: sns.lmplot(x='carat', y='price', data=dataset, col='cut', aspect=0.5, size=8) The aspect parameter defines the aspect ratio between the width and height. An aspect ratio of 0.5 means that the width is half of the height, as shown in the output. You can see that though the size of the plot has changed, the font size is still very small. In the next section, we will see how to control the fonts and styles of the Seaborn plots. Plot Styling The Seaborn library comes with a variety of styling options. In this section, we will see some of them.
Set Style The set_style() function is used to set the style of the grid. You can pass darkgrid, whitegrid, dark, white, or ticks as the parameter to the set_style function. For this section, we will again use the "titanic" dataset. Execute the following script to see the darkgrid style. sns.set_style('darkgrid') sns.distplot(dataset['fare']) The output looks like this: In the output, you can see that we have a dark background with grids. Let's see what whitegrid looks like. Execute the following script: sns.set_style('whitegrid') sns.distplot(dataset['fare']) The output looks like this: Now you can see that we still have grids in the background but the dark grey background is not visible. I would suggest that you try and play with the rest of the options and see which style suits you. Change Figure Size Since Seaborn uses Matplotlib functions behind the scenes, you can use Matplotlib's pyplot package to change the figure size as shown below: plt.figure(figsize=(8,4)) sns.distplot(dataset['fare']) In the script above, we set the width and height of the plot to 8 and 4 inches respectively. The output of the script above looks like this: Set Context Apart from the notebook, you may need to create plots for posters. To do so, you can use the set_context() function and pass it poster as the only attribute as shown below: sns.set_context('poster') sns.distplot(dataset['fare']) In the output, you should see a plot with the poster specifications as shown below. For instance, you can see that the fonts are much bigger compared to normal plots. Conclusion Seaborn Library is an advanced Python library for data visualization. This article is Part 2 of the series of articles on Seaborn for Data Visualization in Python. In this article, we saw how to plot regression and matrix plots in Seaborn. We also saw how to change plot styles and use grid functions to manipulate subplots.
In the next article, we will see how Python's Pandas library's built-in capabilities can be used for data visualization.
https://stackabuse.com/seaborn-library-for-data-visualization-in-python-part-2/
SCANF(3V) NAME scanf, fscanf, sscanf - formatted input conversion SYNOPSIS #include <stdio.h> int scanf(format [ , pointer... ] ) char *format; int fscanf(stream, format [ , pointer... ] ) FILE *stream; char *format; int sscanf(s, format [ , pointer... ] ) char *s, *format; SYSTEM V SYNOPSIS The following are provided for XPG2 compatibility: #define nl_scanf scanf #define nl_fscanf fscanf #define nl_sscanf sscanf DESCRIPTION scanf() reads from the standard input stream stdin. fscanf() reads from the named input stream. sscanf() reads from the character string s. Each function reads characters, interprets them according to a format, and stores the results in its arguments. Each expects, as arguments, a control string format, described below, and a set of pointer arguments indicating where the converted input should be stored. The results are undefined if there are insufficient args for the format. If the format is exhausted while args remain, the excess args are simply ignored. The control string usually contains conversion specifications, which are used to direct interpretation of input sequences. The control string may contain: o White-space characters (SPACE, TAB, or NEWLINE) which, except in two cases described below, cause input to be read up to the next non-white-space character. o An ordinary character (not `%'), which must match the next character of the input stream. o Conversion specifications, consisting of the character `%' or the character sequence %digit$, an optional assignment suppressing character `*', an optional numerical maximum field width, an optional l (ell) or h indicating the size of the receiving variable, and a conversion code. Conversion specifications are introduced by the character % or the character sequence %digit$.
A conversion specification directs the conversion of the next input field; the result is placed in the variable pointed to by the corresponding argument, unless assignment suppression was indicated by `*'. The suppression of assignment provides a way of describing an input field which is to be skipped. An input field is defined as a string of non-space characters; it extends to the next inappropriate character or until the field width, if specified, is exhausted. For all descriptors except ``['' and ``c'', white space leading an input field is ignored. The conversion character indicates the interpretation of the input field; the corresponding pointer argument must usually be of a restricted type. For a suppressed field, no pointer argument is given. The following conversion characters are legal: % A single % is expected in the input at this point; no assignment is done. d A decimal integer is expected; the corresponding argument should be an integer pointer. u An unsigned decimal integer is expected; the corresponding argument should be an unsigned integer pointer. o An octal integer is expected; the corresponding argument should be an integer pointer. x A hexadecimal integer is expected; the corresponding argument should be an integer pointer. i An integer is expected; the corresponding argument should be an integer pointer. It will store the value of the next input item interpreted according to C conventions: a leading ``0'' implies octal; a leading ``0x'' implies hexadecimal; otherwise, decimal. n Stores in an integer argument the total number of characters (including white space) that have been scanned so far since the function call. No input is consumed. e,f,g A floating point number is expected; the next field is converted accordingly and stored through the corresponding argument, which should be a pointer to a float. The input format for floating point numbers is as described for string_to_decimal(3), with fortran_conventions zero.
     s      A character string is expected; the corresponding argument
            should be a character pointer pointing to an array of characters
            large enough to accept the string and a terminating \0, which
            will be added automatically.  The input field is terminated by a
            white space character.

     c      A character is expected; the corresponding argument should be a
            character pointer.  The normal skip over white space is
            suppressed in this case; to read the next non-space character,
            use %1s.  If a field width is given, the corresponding argument
            should refer to a character array, and the indicated number of
            characters is read.

     [      Indicates string data; the normal skip over leading white space
            is suppressed.  The left bracket is followed by a set of
            characters, which we will call the scanset, and a right bracket;
            the input field is the maximal sequence of input characters
            consisting entirely of characters in the scanset.  The circumflex
            (^), when it appears as the first character in the scanset,
            serves as a complement operator and redefines the scanset as the
            set of all characters not contained in the remainder of the
            scanset string.  There are some conventions used in the
            construction of the scanset.  A range of characters may be
            represented by the construct first-last, thus [0123456789] may be
            expressed [0-9].  Using this convention, first must be lexically
            less than or equal to last, or else the dash will stand for
            itself.  The dash will also stand for itself whenever it is the
            first or the last character in the scanset.  To include the right
            square bracket as an element of the scanset, it must appear as
            the first character (possibly preceded by a circumflex) of the
            scanset, and in this case it will not be syntactically
            interpreted as the closing bracket.  The corresponding argument
            must point to a character array large enough to hold the data
            field and the terminating \0, which will be added automatically.
            At least one character must match for this conversion to be
            considered successful.
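     As a quick illustration of the scanset rules above, the sketch below
     (not part of the manual page; the function name and format string are my
     own) uses sscanf() with a %[0-9] scanset followed by a literal match:

```c
#include <stdio.h>

/* Illustrative only: %[0-9] collects the leading digits of "56a72" and
 * stops at 'a' (not in the scanset); the literal 'a' in the format then
 * matches that character, and %d converts the rest.  The return value is
 * the number of successful assignments (literal matches are not counted). */
int parse_digits(const char *input, char digits[], int *rest)
{
    return sscanf(input, "%[0-9]a%d", digits, rest);
}
```

     With input 56a72 this performs two assignments: digits holds "56" and
     rest is 72 — the same termination behavior the EXAMPLES section relies
     on.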
     The conversion characters d, u, o, x, and i may be preceded by l or h to
     indicate that a pointer to long or to short rather than to int is in the
     argument list.  Similarly, the conversion characters e, f, and g may be
     preceded by l to indicate that a pointer to double rather than to float
     is in the argument list.  The l or h modifier is ignored for other
     conversion characters.

     Avoid this common error: because printf(3V) does not require that the
     lengths of conversion descriptors and actual parameters match, coders
     sometimes are careless with the scanf() functions.  But converting %f to
     &double or %lf to &float does not work; the results are quite incorrect.

     The constant EOF is returned upon end of input.  Note: this is different
     from 0, which means that no conversion was done; if conversion was
     intended, it was frustrated by an inappropriate character in the input.
     If the input ends before the first conflict or conversion, EOF is
     returned.  If the input ends after the first conflict or conversion, the
     number of successfully matched items is returned.

     Conversions can be applied to the nth argument in the argument list,
     rather than the next unused argument.  In this case, the conversion
     character % (see below) is replaced by the sequence %digit$, where digit
     is a decimal integer n in the range [1,9], giving the position of the
     argument in the argument list.  This feature provides for the definition
     of format strings that select arguments in an order appropriate to
     specific languages.  The format string can contain either form of a
     conversion specification, that is % or %digit$, although the two forms
     cannot be mixed within a single format string.

     All forms of the scanf() functions allow for the detection of a language
     dependent radix character in the input string.  The radix character is
     defined by the program's locale (category LC_NUMERIC).  In the "C"
     locale, or in a locale where the radix character is not defined, the
     radix character defaults to `.'.
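     To make the warning above concrete, here is a minimal sketch (mine, not
     the manual's) in which each length modifier matches the pointed-to
     type — %f with a float pointer and %lf with a double pointer:

```c
#include <stdio.h>

/* Illustrative only: the conversion lengths match the argument types,
 * which is exactly what the "common error" paragraph asks for.  Passing
 * the float pointer to %lf, or the double pointer to %f, would produce
 * quite incorrect results, as the man page warns. */
int read_pair(const char *s, float *f, double *d)
{
    return sscanf(s, "%f %lf", f, d);
}
```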
SYSTEM V DESCRIPTION
     FORMFEED is allowed as a white space character in control strings.

     XPG2 requires that nl_scanf, nl_fscanf and nl_sscanf be defined as
     scanf, fscanf and sscanf, respectively for backward compatibility.

RETURN VALUES
     If any items are converted, scanf(), fscanf() and sscanf() return the
     number of items converted successfully.  This number may be smaller than
     the number of items requested.  If no items are converted, these
     functions return 0.  scanf(), fscanf() and sscanf() return EOF on end of
     input.

EXAMPLES
     The call:

          int i, j;
          float x;
          char name[50];
          (void) scanf("%i%2d%f%*d %[0-9]", &j, &i, &x, name);

     with input:

          011 56789 0123 56a72

     will assign 9 to j, 56 to i, 789.0 to x, skip 0123, and place the string
     56\0 in name.  The next call to getchar() (see getc(3V)) will return
     `a'.

     Or:

          int i, j, s, e;
          char name[50];
          (void) scanf("%i %i %n%s%n", &i, &j, &s, name, &e);

     with input:

          0x11 0xy johnson

     will assign 17 to i, 0 to j, 6 to s, will place the string xy\0 in name,
     and will assign 8 to e.  Thus, the length of name is e - s = 2.  The
     next call to getchar() (see getc(3V)) will return a SPACE.

SEE ALSO
     getc(3V), printf(3V), setlocale(3V), stdio(3V), string_to_decimal(3),
     strtol(3)

WARNINGS
     Trailing white space (including a NEWLINE) is left unread unless matched
     in the control string.

BUGS
     The success of literal matches and suppressed assignments is not
     directly determinable.

                               21 January 1990                      SCANF(3V)
http://modman.unixdev.net/?sektion=3&page=scanf&manpath=SunOS-4.1.3
I’m looking for all the old documentation for plotly before plotly express was introduced. Upgrading to a later plotly version won’t be possible for us in the near future, so I still need access to the old docs. For example, old versions of the following docs would be nice, as the APIs have changed a lot: .

Versioned documentation (have new plotly express docs overwritten old api docs?)

@amittleider, you can find the older versions of our docs at. It says “v3” but those docs are also compatible with v2.

Thank you, @michaelbabyn !

Hi @michaelbabyn Thanks for this link, because I was going crazy trying to find the version 3 docs (since, like the original person, I can’t upgrade any time soon but I still need to maintain the old stuff). Is there a way this could be linked in a more obvious way? I might just be blind, but I couldn’t find any link on the Python docs telling me that this even existed anymore.

Hi @felixvelariusbos, unfortunately we’re actually going in the other direction with our v3 (v2-compatible) docs and we plan on getting rid of them in the future. So if you plan on continuing to use and update v2/v3 of plotly.py, I would recommend looking over which may help you to convert the V4 style of creating charts that we recommend and use in the new docs to the older V3-compatible style.

Essentially that boils down to not being able to use the graph_objects update_* methods and the new magic underscore syntax, so you’ll have to convert something like this:

import plotly.graph_objects as go

fig = go.Figure(data=go.Bar(x=[1, 2, 3], y=[1, 3, 2]))
fig.update_layout(title_text="A Bar Chart", title_font_size=30)
fig.show()

to

import plotly.graph_objs as go
import plotly.offline as py

data = [go.Bar(x=[1, 2, 3], y=[1, 3, 2])]
layout = dict(
    title="A Bar Chart",
    titlefont=dict(size=30)
)
fig = dict(data=data, layout=layout)
py.iplot(fig)

I.e. you’ll need to replace underscores with nested dicts and you’ll have to explicitly create a layout object.
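To see what the “magic underscore” syntax is actually doing, here is a tiny dependency-free sketch (my illustration — not Plotly’s implementation, which splits on known attribute names rather than on every underscore):

```python
def expand_magic_underscores(**kwargs):
    """Expand v4-style keys such as title_font_size=30 into the nested
    dicts that the v3 style spells out by hand."""
    out = {}
    for key, value in kwargs.items():
        *path, leaf = key.split("_")
        node = out
        for part in path:
            node = node.setdefault(part, {})
        node[leaf] = value
    return out

# The v4 call update_layout(title_text=..., title_font_size=...)
# corresponds to this nested layout structure:
layout = expand_magic_underscores(title_text="A Bar Chart", title_font_size=30)
```

Running this produces `{'title': {'text': 'A Bar Chart', 'font': {'size': 30}}}` — the same nesting you end up writing by hand in the older style.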
Additionally, you will need to know that we changed the layout attribute titlefont to font as a subattribute of a new title object attribute (see this PR for more details about that change). This process will work for 90% of the non-plotly-express examples in the new docs.

Running into a similar issue where I need access to documentation for the previous version. The above link no longer seems to work. @michaelbabyn are they still available anywhere? Tnx

Hi @michaelbabyn and @rsciban Sorry, I hadn’t seen your reply. I too can no longer find the old documentation. I would like to put in another vote for putting the old docs back. If the new documentation reflected what you just showed I would not mind… however all the new documentation heavily emphasizes Plotly Express (example:), which is completely different than the old style. While it seems most pages have information on the old way of doing it… they hide the old-style way at the bottom, which makes me worry that it’s going to go away. To add to it, dumping the old documentation is going to make it harder to migrate. I’m not even using that “update_layout” method much, but based on the current docs, I’ll have to convert almost every single chart I have to match the new plotly express model, and I have absolutely no idea where to start. Not having those original docs on hand to remember why I implemented things the way I did in the first place makes it even harder. Is there a particular reason why Plotly is pushing so hard on erasing this?

Hi folks, I’m sorry this is causing problems… Here’s how we’ve been thinking about this:

- Almost everything that you could do in v3 still works in v4 (see our migration guide for exceptions:) providing you don’t use any new Plotly.js features or attributes.
- Our old v3 docs heavily emphasized our Chart Studio product: every single page said you had to sign up to our service to get started and every single example suggested you had to host your chart on Chart Studio, and this turned off thousands of potential users. In version 4 we cut this out and put it into a separate package.
- Some of the v3 docs hadn’t been touched or regenerated since the early v2 days and hence were really misleading/out of date even before v4 came out.
- When we released v4, it took hundreds of person-hours to rewrite the docs to our current standards, and we simply don’t have the resources to upgrade our old v3 docs to fix the problems they had.
- Our v3 docs were ranking higher than our v4 docs in Google searches in some cases, leading to confused users, and people still using/documenting/blogging about old patterns to accomplish certain tasks for which there are much better approaches now.

Taking all these points together, we felt it would be the least-bad option to continue investing all of our time and energy into making the v4 docs the best we could make them, and to stop hosting the v3 docs so as to make sure we’re not confusing new users about how Plotly.py works, and especially to stop reinforcing the old notion that Plotly.py is mostly a client for Chart Studio Cloud.

All that said, I believe we still have the old docs in our repo in the gh-pages branch if you go back about a year… Maybe I could provide some instructions about how to download and locally browse them if people are really interested?

PS: regarding the “old style going away”… as I’ve said, almost everything in v3 is still possible with v4, and we have no plans to get rid of graph_objects or otherwise break that API.

Hi @felixvelariusbos, the new style is not a plotly express style; it’s the new style for plotly.py (graph_objects) (see here what is plotly express).
To understand how to interpret the new style, please read this answer to a question:

Follow-up: I went back through our Github history and tagged a particular commit, pre-v4-switchover in May 2019, around Plotly.py 3.9. As a result, you can now do the following:

- download the tarball and unpack it (NB it’s around 200 megs compressed, 900 megs uncompressed!)
- in that directory, run python -m http.server which will spawn a local webserver
- point your browser to
- browse our docs roughly the way they looked on May 14, 2019!

You’ll see some CSS breakage as some common assets and URLs have changed since then, but there’s not a lot I can do about that.

@nicolaskruchten @empet I appreciate you all taking the time to write this up, and to do a tag in the Git repo for the old style. I’ll definitely check it out. I personally appreciate the switch away from emphasizing Chart Studio, so no complaints there! And I appreciate the clarification that there are no plans to get rid of graph_objects. I’m just paranoid.
https://community.plotly.com/t/versioned-documentation-have-new-plotly-express-docs-overwritten-old-api-docs/27569
Possibly related – it mentions the same “Annotation processing got disabled” error:

so on the actual RASPBIAN i see

java -version
java version "1.8.0_65"
Java(TM) SE Runtime Environment (build 1.8.0_65-b17)
Java HotSpot(TM) Client VM (build 25.65-b01, mixed mode)

with apt-cache search openjdk i find only openjdk-8 and openjdk-9, and i also do not want to check how to install an older version… but here is a list of old systems, just need to find out what java they use (when).

also see: in my own BLOG i see Nov 2013 a test

#________________________try upgrade to new version
wget
tar -xvzf processing-2.1-linux32.tgz
cd /home/pi/processing-2.1/
rm -rf java
ln -s /usr/lib/jvm/java-6-openjdk-armhf java

failed and i changed back / sorry, not sure, but i think it was still on NOOBS v1.3 info from 2014

//___________________________________

pls. give us a few words why you are on that museums trip: do you have the situation that you have some old code that does not work under Processing IDE 3.4? and would it be worth it to down/back-grade OS/JAVA(not hardfloat)/processing for about 5 years back?

Thanks to the both of you for your replies and sorry for the delay. I am new to processing and I have seen both of the posts that you mention in my research prior to posting my question, but as I am new to processing I did not think that they applied. As I write this I am still confused and do not know if there is a solution for my issue.

Background: I am trying to build up an Adalight setup and I have gotten it running on a Windows 7 machine as well as a Linux Mint machine; however, when I go to transfer it over to its final home, the Pi 3B+, I get the above error. According to the Adafruit Adalight documentation there are known issues with running the processing sketch on Processing 3, so this is why I am trying to run Processing 2.2.1 on the Pi. Again, I have managed to get Processing 2.2.1 working on both the Windows 7 & Mint builds.
Because of this I am hopeful that there is something simple that I am missing with the Pi. When I try and run Processing 3.4 on the Pi, I get an error where it tells me that it cannot determine the size of the sketch, and that I need to give a numeric value for the size of the sketch rather than the equation that the sketch currently uses to calculate the size. Is the better solution to fix the code/sketch so that it can run on Processing 3.4?

// "Adalight" is a do-it-yourself facsimile of the Philips Ambilight concept
// for desktop computers and home theater PCs. This is the host PC-side code
// written in Processing, intended for use with a USB-connected Arduino
// microcontroller running the accompanying LED streaming code. Requires one
// or more strands of Digital RGB LED Pixels (Adafruit product ID #322,
// specifically the newer WS2801-based type, strand of 25) and a 5 Volt power
// supply (such as Adafruit #276). You may need to adapt the code and the
// hardware arrangement for your specific display configuration.
// Screen capture adapted from code by Cedrik Kiefer (processing.org forum)
// --------------------------------------------------------------------
//
// <>.
// --------------------------------------------------------------------

import java.awt.*;
import java.awt.image.*;
import processing.serial.*;

// CONFIGURABLE PROGRAM CONSTANTS --------------------------------------------

// Minimum LED brightness; some users prefer a small amount of backlighting
// at all times, regardless of screen content. Higher values are brighter,
// or set to 0 to disable this feature.
static final short minBrightness = 120;

// LED transition speed; it's sometimes distracting if LEDs instantaneously
// track screen contents (such as during bright flashing sequences), so this
// feature enables a gradual fade to each new LED state. Higher numbers yield
// slower transitions (max of 255), or set to 0 to disable this feature
// (immediate transition of all LEDs).
static final short fade = 75;

// Pixel size for the live preview image.
static final int pixelSize = 20;

// Depending on many factors, it may be faster either to capture full
// screens and process only the pixels needed, or to capture multiple
// smaller sub-blocks bounding each region to be processed. Try both,
// look at the reported frame rates in the Processing output console,
// and run with whichever works best for you.
static final boolean useFullScreenCaps = true;

// Serial device timeout (in milliseconds), for locating Arduino device
// running the corresponding LEDstream code. See notes later in the code...
// in some situations you may want to entirely comment out that block.
static final int timeout = 5000; // 5 seconds

// PER-DISPLAY INFORMATION ---------------------------------------------------

// This array contains details for each display that the software will
// process. If you have screen(s) attached that are not among those being
// "Adalighted," they should not be in this list. Each triplet in this
// array represents one display. The first number is the system screen
// number...typically the "primary" display on most systems is identified
// as screen #1, but since arrays are indexed from zero, use 0 to indicate
// the first screen, 1 to indicate the second screen, and so forth. This
// is the ONLY place system screen numbers are used...ANY subsequent
// references to displays are an index into this list, NOT necessarily the
// same as the system screen number. For example, if you have a three-
// screen setup and are illuminating only the third display, use '2' for
// the screen number here...and then, in subsequent section, '0' will be
// used to refer to the first/only display in this list.
// The second and third numbers of each triplet represent the width and
// height of a grid of LED pixels attached to the perimeter of this display.
// For example, '9,6' = 9 LEDs across, 6 LEDs down.
static final int displays[][] = new int[][] {
// {0,9,6}   // Screen 0, 9 LEDs across, 6 LEDs down
   {0,19,10} // Screen 0, 19 LEDs across, 10 LEDs down
//,{1,9,6}   // Screen 1, also 9 LEDs across and 6 LEDs down
};

// PER-LED INFORMATION -------------------------------------------------------

// This array contains the 2D coordinates corresponding to each pixel in the
// LED strand, in the order that they're connected (i.e. the first element
// here belongs to the first LED in the strand, second element is the second
// LED, and so forth). Each triplet in this array consists of a display
// number (an index into the display array above, NOT necessarily the same as
// the system screen number) and an X and Y coordinate specified in the grid
// units given for that display. {0,0,0} is the top-left corner of the first
// display in the array.
// For our example purposes, the coordinate list below forms a ring around
// the perimeter of a single screen, with a one pixel gap at the bottom to
// accommodate a monitor stand. Modify this to match your own setup:

static final int leds[][] = new int[][] {
  {0,3,9}, {0,2,9}, {0,1,9}, {0,0,9},          // Bottom edge, left half
  {0,7,9}, {0,6,9}, {0,5,9}, {0,4,9},
  {0,0,4}, {0,0,3}, {0,0,2}, {0,0,1},          // Left edge
  {0,0,8}, {0,0,7}, {0,0,6}, {0,0,5},          // Left edge
  {0,0,0}, {0,1,0}, {0,2,0}, {0,3,0}, {0,4,0}, // Top edge
  {0,5,0}, {0,6,0}, {0,7,0}, {0,8,0},          // More top edge
  {0,9,0}, {0,10,0}, {0,11,0}, {0,12,0}, {0,13,0}, {0,14,0}, {0,15,0},
  {0,16,0}, {0,17,0}, {0,18,0},
  {0,18,1}, {0,18,2}, {0,18,3}, {0,18,4},      // Right edge
  {0,18,5}, {0,18,6}, {0,18,7}, {0,18,8},
  {0,14,9}, {0,13,9}, {0,12,9}, {0,11,9},      // Bottom edge, right half
  {0,18,9}, {0,17,9}, {0,16,9}, {0,15,9}

/* Hypothetical second display has the same arrangement as the first.
   But you might not want both displays completely ringed with LEDs;
   the screens might be positioned where they share an edge in common.
  ,{1,3,5}, {1,2,5}, {1,1,5}, {1,0,5},         // Bottom edge, left half
  {1,0,4}, {1,0,3}, {1,0,2}, {1,0,1},          // Left edge
  {1,0,0}, {1,1,0}, {1,2,0}, {1,3,0}, {1,4,0}, // Top edge
  {1,5,0}, {1,6,0}, {1,7,0}, {1,8,0},          // More top edge
  {1,8,1}, {1,8,2}, {1,8,3}, {1,8,4},          // Right edge
  {1,8,5}, {1,7,5}, {1,6,5}, {1,5,5}           // Bottom edge, right half
*/
};

// GLOBAL VARIABLES ---- You probably won't need to modify any of this -------

byte[] serialData = new byte[6 + leds.length * 3];
short[][] ledColor  = new short[leds.length][3],
          prevColor = new short[leds.length][3];
byte[][] gamma = new byte[256][3];
int nDisplays = displays.length;
Robot[] bot = new Robot[displays.length];
Rectangle[] dispBounds = new Rectangle[displays.length],
            ledBounds; // Alloc'd only if per-LED captures
int[][] pixelOffset = new int[leds.length][256],
        screenData; // Alloc'd only if full-screen captures
PImage[] preview = new PImage[displays.length];
Serial port;
DisposeHandler dh; // For disabling LEDs on exit

// INITIALIZATION ------------------------------------------------------------

void setup() {
  GraphicsEnvironment ge;
  GraphicsConfiguration[] gc;
  GraphicsDevice[] gd;
  int d, i, totalWidth, maxHeight, row, col, rowOffset;
  int[] x = new int[16], y = new int[16];
  float f, range, step, start;

  dh = new DisposeHandler(this); // Init DisposeHandler ASAP

  println(Serial.list());

  // Open serial port. As written here, this assumes the Arduino is the
  // first/only serial device on the system. If that's not the case,
  // change "Serial.list()[0]" to the name of the port to be used:
  // port = new Serial(this, Serial.list()[1], 115200);
  // Alternately, in certain situations the following line can be used
  // to detect the Arduino automatically. But this works ONLY with SOME
  // Arduino boards and versions of Processing! This is so convoluted
  // to explain, it's easier just to test it yourself and see whether
  // it works...if not, leave it commented out and use the prior port-
  // opening technique.
  port = openPort();
  // And finally, to test the software alone without an Arduino connected,
  // don't open a port...just comment out the serial lines above.

  // Initialize screen capture code for each display's dimensions.
  dispBounds = new Rectangle[displays.length];
  if(useFullScreenCaps == true) {
    screenData = new int[displays.length][];
    // ledBounds[] not used
  } else {
    ledBounds = new Rectangle[leds.length];
    // screenData[][] not used
  }

  ge = GraphicsEnvironment.getLocalGraphicsEnvironment();
  gd = ge.getScreenDevices();
  if(nDisplays > gd.length) nDisplays = gd.length;
  totalWidth = maxHeight = 0;
  for(d=0; d<nDisplays; d++) { // For each display...
    try {
      bot[d] = new Robot(gd[displays[d][0]]);
    }
    catch(AWTException e) {
      System.out.println("new Robot() failed");
      continue;
    }
    gc            = gd[displays[d][0]].getConfigurations();
    dispBounds[d] = gc[0].getBounds();
    dispBounds[d].x = dispBounds[d].y = 0;
    preview[d]    = createImage(displays[d][1], displays[d][2], RGB);
    preview[d].loadPixels();
    totalWidth   += displays[d][1];
    if(d > 0) totalWidth++;
    if(displays[d][2] > maxHeight) maxHeight = displays[d][2];
  }

  // Precompute locations of every pixel to read when downsampling.
  // Saves a bunch of math on each frame, at the expense of a chunk
  // of RAM. Number of samples is now fixed at 256; this allows for
  // some crazy optimizations in the downsampling code.
  for(i=0; i<leds.length; i++) { // For each LED...
    d = leds[i][0]; // Corresponding display index

    // Precompute columns, rows of each sampled point for this LED
    range = (float)dispBounds[d].width / (float)displays[d][1];
    step  = range / 16.0;
    start = range * (float)leds[i][1] + step * 0.5;
    for(col=0; col<16; col++) x[col] = (int)(start + step * (float)col);
    range = (float)dispBounds[d].height / (float)displays[d][2];
    step  = range / 16.0;
    start = range * (float)leds[i][2] + step * 0.5;
    for(row=0; row<16; row++) y[row] = (int)(start + step * (float)row);

    if(useFullScreenCaps == true) {
      // Get offset to each pixel within full screen capture
      for(row=0; row<16; row++) {
        for(col=0; col<16; col++) {
          pixelOffset[i][row * 16 + col] =
            y[row] * dispBounds[d].width + x[col];
        }
      }
    } else {
      // Calc min bounding rect for LED, get offset to each pixel within
      ledBounds[i] = new Rectangle(x[0], y[0], x[15]-x[0]+1, y[15]-y[0]+1);
      for(row=0; row<16; row++) {
        for(col=0; col<16; col++) {
          pixelOffset[i][row * 16 + col] =
            (y[row] - y[0]) * ledBounds[i].width + x[col] - x[0];
        }
      }
    }
  }

  for(i=0; i<prevColor.length; i++) {
    prevColor[i][0] = prevColor[i][1] = prevColor[i][2] = minBrightness / 3;
  }

  // Preview window shows all screens side-by-side
  size(totalWidth * pixelSize, maxHeight * pixelSize, JAVA2D);
  noSmooth();

  // A special header / magic word is expected by the corresponding LED
  // streaming code running on the Arduino.
  // This only needs to be initialized
  // once (not in draw() loop) because the number of LEDs remains constant:
  serialData[0] = 'A'; // Magic word
  serialData[1] = 'd';
  serialData[2] = 'a';
  serialData[3] = (byte)((leds.length - 1) >> 8);   // LED count high byte
  serialData[4] = (byte)((leds.length - 1) & 0xff); // LED count low byte
  serialData[5] = (byte)(serialData[3] ^ serialData[4] ^ 0x55); // Checksum

  // Pre-compute gamma correction table for LED brightness levels:
  for(i=0; i<256; i++) {
    f = pow((float)i / 255.0, 2.8);
    gamma[i][0] = (byte)(f * 255.0);
    gamma[i][1] = (byte)(f * 240.0);
    gamma[i][2] = (byte)(f * 220.0);
  }
}

// Open and return serial connection to Arduino running LEDstream code. This
// attempts to open and read from each serial device on the system, until the
// matching "Ada\n" acknowledgement string is found. Due to the serial
// timeout, if you have multiple serial devices/ports and the Arduino is late
// in the list, this can take seemingly forever...so if you KNOW the Arduino
// will always be on a specific port (e.g. "COM6"), you might want to comment
// out most of this to bypass the checks and instead just open that port
// directly! (Modify last line in this method with the serial port name.)
Serial openPort() {
  String[] ports;
  String ack;
  int i, start;
  Serial s;

  ports = Serial.list(); // List of all serial ports/devices on system.

  for(i=0; i<ports.length; i++) { // For each serial port...
    System.out.format("Trying serial port %s\n", ports[i]);
    try {
      s = new Serial(this, ports[i], 115200);
    }
    catch(Exception e) {
      // Can't open port, probably in use by other software.
      continue;
    }
    // Port open...watch for acknowledgement string...
    start = millis();
    while((millis() - start) < timeout) {
      if((s.available() >= 4) &&
         ((ack = s.readString()) != null) &&
         ack.contains("Ada\n")) {
        return s; // Got it!
      }
    }
    // Connection timed out. Close port and move on to the next.
    s.stop();
  }

  // Didn't locate a device returning the acknowledgment string.
  // Maybe it's out there but running the old LEDstream code, which
  // didn't have the ACK. Can't say for sure, so we'll take our
  // changes with the first/only serial device out there...
  return new Serial(this, ports[0], 115200);
}

// PER_FRAME PROCESSING ------------------------------------------------------

void draw() {
  BufferedImage img;
  int d, i, j, o, c, weight, rb, g, sum, deficit, s2;
  int[] pxls, offs;

  if(useFullScreenCaps == true) {
    // Capture each screen in the displays array.
    for(d=0; d<nDisplays; d++) {
      img = bot[d].createScreenCapture(dispBounds[d]);
      // Get location of source pixel data
      screenData[d] =
        ((DataBufferInt)img.getRaster().getDataBuffer()).getData();
    }
  }

  weight = 257 - fade; // 'Weighting factor' for new frame vs. old
  j      = 6;          // Serial led data follows header / magic word

  // This computes a single pixel value filtered down from a rectangular
  // section of the screen. While it would seem tempting to use the native
  // image scaling in Processing/Java, in practice this didn't look very
  // good -- either too pixelated or too blurry, no happy medium. So
  // instead, a "manual" downsampling is done here. In the interest of
  // speed, it doesn't actually sample every pixel within a block, just
  // a selection of 256 pixels spaced within the block...the results still
  // look reasonably smooth and are handled quickly enough for video.
  for(i=0; i<leds.length; i++) { // For each LED...
    d = leds[i][0]; // Corresponding display index
    if(useFullScreenCaps == true) {
      // Get location of source data from prior full-screen capture:
      pxls = screenData[d];
    } else {
      // Capture section of screen (LED bounds rect) and locate data:
      img  = bot[d].createScreenCapture(ledBounds[i]);
      pxls = ((DataBufferInt)img.getRaster().getDataBuffer()).getData();
    }
    offs = pixelOffset[i];
    rb = g = 0;
    for(o=0; o<256; o++) {
      c   = pxls[offs[o]];
      rb += c & 0x00ff00ff; // Bit trickery: R+B can accumulate in one var
      g  += c & 0x0000ff00;
    }

    // Blend new pixel value with the value from the prior frame
    ledColor[i][0] = (short)((((rb >> 24) & 0xff) * weight +
                              prevColor[i][0] * fade) >> 8);
    ledColor[i][1] = (short)(((( g >> 16) & 0xff) * weight +
                              prevColor[i][1] * fade) >> 8);
    ledColor[i][2] = (short)((((rb >>  8) & 0xff) * weight +
                              prevColor[i][2] * fade) >> 8);

    // Boost pixels that fall below the minimum brightness
    sum = ledColor[i][0] + ledColor[i][1] + ledColor[i][2];
    if(sum < minBrightness) {
      if(sum == 0) { // To avoid divide-by-zero
        deficit = minBrightness / 3; // Spread equally to R,G,B
        ledColor[i][0] += deficit;
        ledColor[i][1] += deficit;
        ledColor[i][2] += deficit;
      } else {
        deficit = minBrightness - sum;
        s2      = sum * 2;
        // Spread the "brightness deficit" back into R,G,B in proportion to
        // their individual contribution to that deficit. Rather than simply
        // boosting all pixels at the low end, this allows deep (but saturated)
        // colors to stay saturated...they don't "pink out."
        ledColor[i][0] += deficit * (sum - ledColor[i][0]) / s2;
        ledColor[i][1] += deficit * (sum - ledColor[i][1]) / s2;
        ledColor[i][2] += deficit * (sum - ledColor[i][2]) / s2;
      }
    }

    // Apply gamma curve and place in serial output buffer
    serialData[j++] = gamma[ledColor[i][0]][0];
    serialData[j++] = gamma[ledColor[i][1]][1];
    serialData[j++] = gamma[ledColor[i][2]][2];

    // Update pixels in preview image
    preview[d].pixels[leds[i][2] * displays[d][1] + leds[i][1]] =
      (ledColor[i][0] << 16) | (ledColor[i][1] << 8) | ledColor[i][2];
  }

  if(port != null) port.write(serialData); // Issue data to Arduino

  // Show live preview image(s)
  scale(pixelSize);
  for(i=d=0; d<nDisplays; d++) {
    preview[d].updatePixels();
    image(preview[d], i, 0);
    i += displays[d][1] + 1;
  }

  println(frameRate); // How are we doing?

  // Copy LED color data to prior frame array for next pass
  arraycopy(ledColor, 0, prevColor, 0, ledColor.length);
}

// CLEANUP -------------------------------------------------------------------

// The DisposeHandler is called on program exit (but before the Serial library
// is shutdown), in order to turn off the LEDs (reportedly more reliable than
// stop()). Seems to work for the window close box and escape key exit, but
// not the 'Quit' menu option. Thanks to phi.lho in the Processing forums.
public class DisposeHandler {
  DisposeHandler(PApplet pa) {
    pa.registerDispose(this);
  }

  public void dispose() {
    // Fill serialData (after header) with 0's, and issue to Arduino...
    // Arrays.fill(serialData, 6, serialData.length, (byte)0);
    java.util.Arrays.fill(serialData, 6, serialData.length, (byte)0);
    if(port != null) port.write(serialData);
  }
}

Unfortunately I'm not able to test on that Pi right now and not familiar with Adalight specifically.
However, it sounds like this is ambilight-related – you might possibly be interested in this related project, and/or contacting the author to see if they have any ideas:

i found already 2 small things about the code. running here:

DEBIAN stretch
Linux version 4.14.79-v7+
Raspberry Pi 3 Model B Plus Rev 1.3

but only with sudo raspi-config G3 legacy GL, and just now no arduino connected. testing in Processing 3.4:

-a- for setup size try:

// Preview window shows all screens side-by-side
// size(totalWidth * pixelSize, maxHeight * pixelSize, JAVA2D);
//_________________________________________________________________________________________________ KLL
println("totalWidth "+totalWidth); // totalWidth 19
println("pixelSize "+pixelSize);   // pixelSize 20
println("maxHeight "+maxHeight);   // maxHeight 10
println("pixelSize "+pixelSize);   // pixelSize 20
size(380, 200, JAVA2D);

-b- and at the end i needed:

public class DisposeHandler {
  DisposeHandler(PApplet pa) {
    //_________________________________________________________________________________________________ KLL
    // pa.registerDispose(this);
    pa.registerMethod("dispose", this);
  }

i hope that gives you a start. i would love to follow your project ( but can not get the neo pixel / smart RGB LED here in town ).

still, when you post here, pls. try to give more info: list and link required hardware, libraries, documentation. what is the concept? arduino and 5V WS… LED strip could work alone; Raspberry USB Arduino is also the right way to avoid 3v3 || 5v conflicts, but what would be the Raspberry's job? Raspberry processing could anyway not talk to the LED's, as there is no library ( but there is one for python at Adafruit )
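As an aside on the sketch itself: the "bit trickery" in its draw() loop — accumulating red and blue in a single rb variable over the 256 sample points — can be modeled outside Processing. This Python sketch (my illustration, not code from the thread) mirrors that arithmetic:

```python
def average_rgb(pixels):
    """Average 256 packed 0xRRGGBB samples the way the Adalight loop does:
    red and blue accumulate together in one variable, green in another.
    The blue sum never exceeds 256*255 < 2**16, so it cannot spill into
    the red accumulator sitting 16 bits higher."""
    assert len(pixels) == 256
    rb = g = 0
    for c in pixels:
        rb += c & 0x00FF00FF
        g  += c & 0x0000FF00
    # dividing each 256-sample sum by 256 is a shift by 8
    return ((rb >> 24) & 0xFF, (g >> 16) & 0xFF, (rb >> 8) & 0xFF)
```

For a uniform block of 0x112233 pixels this returns (0x11, 0x22, 0x33), and for a half-black/half-white block it returns the expected mid grey — the same averages the Java loop extracts with its shifts and masks.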
I created the file on the Raspberry Pi itself and it ran once, but it has never worked again. It either doesn't open when launched, or it opens to a blank window. Also, I want to write a script so that this launches the exported application every time I launch Kodi. Do I write the Processing script for Kodi or LibreELEC?

What does it say when you run it through the terminal?

sorry, no idea about KODI, but for an autostart at boot see:
//____________________
note: many startups fail if you are not in that directory, so a bash script first makes a cd to that path for the app, and then starts it. and a .desktop file starts the bash script, not the app. and i like to use the user autostart path, but there are many ways…
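The wrapper approach described above can be sketched as follows. This is only an illustration, assuming a hypothetical exported app at $HOME/myapp/myapp; none of these names or paths come from the thread:

```shell
# Create a small start script that cd's into the app's directory first.
# As noted above, many autostarts fail without that cd.
appdir="$HOME/myapp"
mkdir -p "$appdir" "$HOME/.config/autostart"

cat > "$appdir/start_myapp.sh" <<'EOF'
#!/bin/sh
cd "$HOME/myapp" || exit 1   # run from the app's own directory
exec ./myapp
EOF
chmod +x "$appdir/start_myapp.sh"

# The .desktop entry launches the wrapper script, not the app itself.
cat > "$HOME/.config/autostart/myapp.desktop" <<EOF
[Desktop Entry]
Type=Application
Name=myapp
Exec=$appdir/start_myapp.sh
EOF
```

Dropping the .desktop file into ~/.config/autostart is the "user autostart path" mentioned above; a distribution may offer other mechanisms as well.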
https://discourse.processing.org/t/annotation-processing-got-disabled-since-it-requires-a-1-6-compliant-jvm/6697
In this final post of the series “How to embed Power BI in SharePoint”, I’m going to show you how to embed a PowerBI report in SharePoint, using Power BI Embedded and the SharePoint Framework! All new technologies, so exciting times 🙂 Please refer to part 1 and part 2 of this series. Previously I used the approach where the PowerBI service was used to render the report. The big downside is authentication, as this approach requires the user to log in. I had tried to solve that by embedding the token, but looking at the comments that did not work for everyone. PowerBI Embedded is a different approach, where Azure is added a layer to render and store the reports. It took me a while to get my head around PowerBI Embedded, but this image explains it perfectly: In my case, “Your app” refers to a SharePoint Framework app, i.e. all client-side technologies. To embed a PowerBI report in a SharePoint Framework app, I’ve followed these steps: - Create a Workspace collection in Azure - Create a Workspace in Azure - Upload the PBIX to this Workspace - Create a report access token - Render the report in the client-side web part Let me discuss each step in more details. How to: Create a PowerBI Workspace collection in Azure This was the easiest step, and can be achieved via the Azure Portal. It is discussed in detail on the PowerBI website, scroll down to Create a workspace collection. As a result of this step you need to copy/paste one of the access tokens, plus the name of your workspace collection. How to: Create a PowerBI Workspace in Azure You can’t create a workspace via the UI. So, we have to use the Power BI API. The examples on the PowerBI website use either C#, or manually craft HTTP POST’s to the PowerBI API. I did not want to do either, and luckily there is an alternative: The PowerBI CLI. This is a Node.js tool, so you need to have NPM + Node.js installed on your machine. Another prerequisite is that you’ve saved the PBIX file from PowerBI to your computer. 
- Open a command prompt
- Go to the folder containing the PBIX file, and stay there for the rest of this tutorial
- Run "npm install -g powerbi-cli"
- Configure the settings for workspace collection and access key, so you don't have to type them every time: powerbi config -c <workspace_collection_name> -k <access_token>
- Create a workspace via: powerbi create-workspace
[ powerbi ] Workspace created: <workspace_id>
- And configure this workspace ID as the default config setting using: powerbi config -w <workspace_id>
How to: Upload a PowerBI PBIX file to your workspace
The next step is to upload the PBIX file to your created workspace. From the command line, type the following: powerbi import -f <filename> -n <name>
e.g.
powerbi import -f .\hotels.pbix -n Hotels
[ powerbi ] Importing .\hotels.pbix to workspace: <workspace_id>
[ powerbi ] File uploaded successfully
[ powerbi ] Import ID: 24774420-4a32-4623-8cd5-4c3b7513e965
[ powerbi ] Checking import state: Publishing
[ powerbi ] Checking import state: Succeeded
[ powerbi ] Import succeeded
The next thing to do is to grab the report ID, as you will need it later:
$ powerbi get-reports
[ powerbi ] =========================================
[ powerbi ] Gettings reports for Collection: <workspace_collection_id>
[ powerbi ] =========================================
[ powerbi ] ID: <report_id> | Name: Hotels
Save the report_id for later use.
How to: Create a report access token
Authentication happens via access tokens. As I'm building a client-side application, the access tokens are accessible to anyone. The access token from the Azure Portal gives a user full control over your workspace collection; you don't want it to end up in the wrong hands! Therefore you must create an access token that only has access permissions to a particular report.
To do this, run the "create-embed-token" command with the report id saved in the previous step as its parameter:
powerbi create-embed-token -r<report_id>
This will return something like this:
[ powerbi ] Embed Token: eyJ0eXAiO.....S75E9qdBwpRHEA
Save this embed token for the next step.
How to: Render the report in the client-side web part
The last and final step is to create the client-side web part and let it render the PowerBI report. First of all, install all the prerequisites for the SharePoint Framework. This is explained in detail here, so I won't repeat that. To communicate with PowerBI via JavaScript, I'm using the PowerBI Client plugin. First, create your web part:
- Open a new command line instance
- Go to a folder where you want to create the web part, e.g. "c:\projects\sp"
- Make a folder, e.g. "PowerBI_WebPart", and navigate to it.
- Run the yeoman SharePoint generator with "yo @microsoft/sharepoint" and stick to the default values.
- Add the additional dependencies via: "npm install powerbi-client --save"
- Open a Visual Studio Code instance via "code ." (or use any other editor)
The next thing is the implementation of the actual web part. I was struggling with this, as the documentation on the PowerBI client did not work initially. I could not find the global variable powerbi, so after some digging I found an alternative by creating the service manually. Next, the example documentation uses jQuery, but it's not required, so I'm using vanilla JavaScript. Go to the webpart file in your editor (i.e. PowerBIEmbeddedWebPart.ts) and add the following import:
import * as pbi from 'powerbi-client';
Then, replace the render method with the following implementation:
this.domElement.innerHTML = `
<div id="reportContainer" style="height:500px;"></div>
<a id="fullscreen">FULL SCREEN</a>`;
// embed configuration.
// generate the access token with "powerbi create-embed-token -r<report_id>"
// get the report id via "powerbi get-reports"
var embedConfiguration = {
type: 'report',
accessToken: 'eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJ2ZXIiOiIwLjIuMCIsImF1ZCI6Imh0dHBzOi8vYW5hbHlzaXMud2luZG93cy5uZXQvcG93ZXJiaS9hcGkiLCJpc3MiOiJQb3dlciBCSSBOb2RlIFNESyIsIndjbiI6IlBvd2VyQkktUm9sYW5kIiwid2lkIjoiN2JmNmU0ZDMtNjJkZi00YmY1LWJhZDQtYjA0OTI3OWQ0NmQxIiwicmlkIjoiY2NhMGVjYjQtYjc4Yi00Njk4LWJlYzQtZjc3YmU4OGY0YTFmIiwibmJmIjoxNDczNzI2MTcyLCJleHAiOjE0NzM3Mjk3NzJ9.tI-yc_YGuw0krR0T-FNZ9e1ueyMcdQLlbP5L2o3K2I0',
id: 'cca0ecb4-b78b-4698-bec4-f77be88f4a1f',
embedUrl: ''
};
// grab a reference to the HTML element containing the report
var reportContainer = document.getElementById('reportContainer');
// construct a PBI service; according to the documentation this should already be available as a global variable,
// but in my case that did not work.
let powerbi = new pbi.service.Service(pbi.factories.hpmFactory, pbi.factories.wpmpFactory, pbi.factories.routerFactory);
var report = powerbi.embed(reportContainer, embedConfiguration);
// attach an event handler for the full-screen link
document.getElementById("fullscreen").addEventListener("click", () => {
var report = powerbi.get(reportContainer);
report.fullscreen();
});
Then, run "gulp serve", and wait until the workbench loads. Add the web part: After it has loaded, you should see your report, with the option to open it in full screen!
Embedding reports via PowerBI embedded in SharePoint
I've shown you how to embed your PowerBI reports in SharePoint via the SharePoint Framework. Users won't need to log in, but are still able to see data they normally would not be allowed to see. Be careful though: the data in your report is available to anyone with access to the page! Another big advantage of using the SharePoint Framework is that it's backwards compatible, i.e. you can use this approach to embed PowerBI reports in "old school" server-side page layouts!
The source code is available on Github. September 21, 2016 at 9:35 am Nice! – I got this working pretty easily. The one issue I am seeing is how do I keep a scheduled refresh for this item? September 23, 2016 at 9:06 am Data refresh can be configured in the report itself, see here: March 1, 2017 at 1:09 am Hey Roland, If I was to set this is to use many different reports (report access tokens) would I have to create a new webpart for each report? March 2, 2017 at 12:45 pm With the current code, yes, but nothing prohibits you from updating the code to accept a list of report access tokens of course 🙂 March 30, 2017 at 6:51 am Very nice blog post! Thank you. Your solution implements pre-generated access token. It is only good for a short period of time. Do you have any suggestions for generating fresh access token from within the webpart? Thanks, Pawel June 7, 2017 at 9:16 am Hi Roland, Thanks for explaining it so well. Do you know how can I increase the token expiry time? Thanks, Rishabh
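On the token expiry question raised above: an embed token is a JWT, so you can inspect its "exp" claim (a Unix timestamp) by base64-decoding the token's middle segment. A minimal sketch with a made-up, JWT-shaped token (not a real PowerBI token):

```shell
# Build a fake JWT-shaped token (header.payload.signature), then decode
# its payload segment the same way you could inspect a real embed token.
claims='{"iss":"PowerBI","exp":1473729772}'
payload=$(printf '%s' "$claims" | base64 | tr '+/' '-_' | tr -d '=\n')
token="header.$payload.signature"

# Extract the middle segment and undo the base64url tweaks before decoding:
# map -_ back to +/ and restore the stripped '=' padding.
seg=$(printf '%s' "$token" | cut -d. -f2 | tr '_-' '/+')
while [ $(( ${#seg} % 4 )) -ne 0 ]; do seg="$seg="; done
printf '%s\n' "$seg" | base64 -d
```

Decoding only shows when the token expires; the lifetime itself is fixed when the token is issued, so a longer-lived token has to be requested at creation time.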
http://rolandoldengarm.com/index.php/2016/09/13/part-3-how-to-embed-a-power-bi-report-in-sharepoint-with-the-sharepoint-framework/
Re: Jesus Climent in <[🔎] 20050814165840.GI8770@genarin.hispalinux.es>
> > * Package name : dealer
> > Version : 0.20040530
> > Upstream Authors: Hans van Staveren <sater@sater.home.cs.vu.nl>
> > Henk Uijterwaal <henk@ripe.net>
> > * URL :
> > * License : Public domain with some files GPLv2+
> > Description : bridge hand generator
> > .
> > This program generates bridge hands for partnerships bidding training or for
> > generating statistics that can be used to design conventions, or win
> > postmortems. Dealer has been used in many bridge publications.
> > .
>
> Few points:
> 1. the name of the package might be a namespace polution, since it is too
> generic.

Most texts cite "dealer" in conjunction with the first author's name, but everyone who has dealed [:-)] with bridge programs knows the program by that name. Note that I'm already maintaining "deal" which serves the same purpose, but isn't used as widely. And if one name is generic that's probably the shorter "deal".

> 2. after reading and re-reading both the description and the long description,
> i have no clue whatsoever what the program is for. Hopefuly in the final
> package it will be reworded...

I'll include the bottom line from the latex-bridge description: "Bridge is an intellectually challenging card game for four players." (Maybe ITPs should include the proposed section (games in this case) to resolve confusions like these.)

Thanks for your suggestions.

Re: Frans Pop in <[🔎] 200508141911.44019.aragorn@tiscali.nl>
> > 1. the name of the package might be a namespace polution, since it is
> > too generic.
> Possibly bridge-player would be a good alternative.

I don't think cramming the short description into the package name improves things.

> :-)

Thanks :-)

Christoph
--
cb@df7cb.de |
https://lists.debian.org/debian-devel/2005/08/msg00727.html
Alphabetical Summary of Commands
This chapter presents the Linux user, programmer, and system administration commands listed in the Preface. For help in locating commands, see the index at the back of this book. We've tried to be as thorough as possible in listing options. The basic command information and most options should be correct; however, there are many Linux distributions and many versions of commands. New options are added and sometimes old options are dropped. You may, therefore, find some differences between the options you find described here and the ones on your system. When there seems to be a discrepancy, check the manpage. For most commands you can also use the option --help to get a brief usage message. (Even when it isn't a valid option, it will usually result in an "invalid option" error along with the usage message.)

Traditionally, commands take single-letter options preceded by a single hyphen, like -d. A more recent convention allows long options preceded by two hyphens, like --debug. Often, a feature can be invoked through either the old style or the new style of options.

agetty [options] port baudrate [term]
System administration command. The Linux version of getty. Set terminal type, modes, speed, and line discipline. agetty is invoked by init. It is the second process in the series init-getty-login-shell, which ultimately connects a user with the Linux system. agetty reads the user's login name and invokes the login command with the user's name as an argument. While reading the name, agetty attempts to adapt the system to the speed and type of device being used. You must specify a port, which agetty will search for in the /dev directory. You may use -, in which case agetty reads from standard input. You must also specify baudrate, which may be a comma-separated list of rates through which agetty will step. Optionally, you may specify the term, which is used to override the TERM environment variable.
-h
Specify hardware, not software, flow control.
-i
Suppress printing of /etc/issue before printing the login prompt.
-l program
Specify the use of program instead of /bin/login.
-m
Attempt to guess the appropriate baud rate.
-t timeout
Specify that agetty should exit if the open on the line succeeds and there is no response to the login prompt in timeout seconds.
-L
Do not require carrier detect; operate locally only. Use this when connecting terminals.

apmd [options]
System administration command. apmd handles events reported by the Advanced Power Management BIOS driver. The driver reports on battery level and requests to enter sleep or suspend mode. apmd will log any reports it gets via syslogd and take steps to make sure that basic sleep and suspend requests are handled gracefully. You can fine-tune the behavior of apmd by specifying an apmd_proxy command to run when it receives an event.
Set the number of seconds to wait for an event before rechecking the power level. Default is to wait indefinitely. Setting this causes the battery levels to be checked more frequently.
Specify the apmd_proxy command to run when APM driver events are reported. This is generally a shell script. The command will be invoked with parameters indicating what kind of event was received. The parameters are in the next list.
Log information whenever the power changes by n percent. The default is 5. Values greater than 100 will disable logging of power changes.
Print version and exit.
Verbose mode; all events are logged.
Use wall to alert all users of a low battery status.
Log a warning at ALERT level when the battery charge drops below n percent. The default is 10. Negative values disable low battery level warnings.
Disable low battery level warnings.
Print help summary and exit.
The apmd proxy script will be invoked with the following parameters:
Invoked when the daemon starts.
Invoked when the daemon stops.
Invoked when a suspend request has been made. The second parameter indicates whether the request was made by the system or by the user.
Invoked when a standby request has been made. The second parameter indicates whether the request was made by the system or by the user.
Invoked when the system resumes normal operation. The second parameter indicates the mode the system was in before resuming. (Critical suspends indicate an emergency shutdown; after a critical suspend the system may be unstable, and you can use the resume command to help you recover from the suspension.)
Invoked when system power is changed from AC to battery or from battery to AC.
Invoked when the APM BIOS driver reports that the battery is low.
Invoked when the APM BIOS driver reports some hardware that affects its capability has been added or removed.

apropos string ...
Search the short manual page descriptions in the whatis database for occurrences of each string and display the result on the standard output. Like whatis, except that it searches for strings instead of words. Equivalent to man -k.

ar [-V] key [args] [posname] archive [files]
Maintain a group of files that are combined into a file archive. Used most commonly to create and update library files as used by the link editor (ld). Only one key letter may be used, but each can be combined with additional args (with no separations between). posname is the name of a file in archive. When moving or replacing files, you can specify that they be placed before or after posname. -V prints the version number of ar on standard error.
d
Delete files from archive.
m
Move files to end of archive.
p
Print files in archive.
q
Append files to archive.
r
Replace files in archive.
t
List the contents of archive or list the named files.
x
Extract contents from archive or only the named files.
a
Use with r or m key to place files in the archive after posname.
b
Same as a but before posname.
c
Create archive silently.
f
Truncate long filenames.
i
Same as b.
l
For backward compatibility; meaningless in Linux.
o
Preserve original timestamps.
s
Force regeneration of archive symbol table (useful after running strip).
S
Do not regenerate symbol table.
u
Use with r to replace only files that have changed since being put in archive.
v
Verbose; print a description of actions taken.
Replace mylib.a with object files from the current directory:
ar r mylib.a `ls *.o`

arch
Print machine architecture type to standard output. Equivalent to uname -m.

arp [options]
TCP/IP command. Clear, add to, or dump the kernel's ARP cache (/proc/net/arp).
-v
Verbose mode.
-t type
type must be ether (Ethernet) or ax25 (AX.25 packet radio); ether is the default.
-a [hosts]
Display hosts' entries or, if none are specified, all entries.
-d host
Remove host's entry.
-s host hardware-address
Add the entry host hardware-address, where ether class addresses are 6 hexadecimal bytes, colon-separated.
-f file
Read entries from file and add them.

as [options] files
Generate an object file from each specified assembly language source file. Object files have the same root name as source files but replace the .s suffix with .o. There may be some additional system-specific options. Read input files from standard input, or from files if the pipe is used.
With only the -a option, list source code, assembler listing, and symbol table. The other options specify additional things to list or omit:
Omit debugging directives.
Include the high-level source code, if available.
Include an assembly listing.
Suppress forms processing.
Include a symbol listing.
Set the listing filename to file.
Define the symbol to have the value value, which must be an integer.
Skip preprocessing.
Generate stabs debugging information.
Place output in object file objfile (default is file.o).
Display the version number of the assembler.
Include path when searching for .include directives.
Warn before altering difference tables.
Do not remove local symbols, which begin with L.
Combine both data and text in text section.
Quiet mode.

at [options] time
Execute commands entered on standard input at a specified time.
-l
Display the specified jobs on the standard output. This option does not take a time specification.
-d
Delete the specified jobs. Same as atrm.
-f file
Read job from file rather than from standard input.
Note that the first two commands are equivalent:
at 1945 pm December 9
at 7:45pm Dec 9
at 3 am Saturday
at now + 5 hours
at noon next day

atq [options]
List the user's pending jobs, unless the user is a privileged user; in that case, everybody's jobs are listed. Same as at -l.
-q queue
Query only the specified queue and ignore all other queues.
-v
Show jobs that have completed but not yet been deleted.
-V
Print the version number.

atrm [options] job [job...]
Delete jobs that have been queued for future execution. Same as at -d.
-q queue
Remove job from the specified queue.
-V
Print the version number and then exit.

badblocks [options] device block-count
System administration command. Search device for bad blocks. You must specify the number of blocks on the device (block-count).
-b blocksize
Expect blocksize-byte blocks.
-o file
Direct output to file.
-w
Test by writing to each block and then reading back from it.

banner [option] [characters]
Print characters as a poster. If no characters are supplied, banner prompts for them and reads an input line from standard input. By default, the results go to standard output, but they are intended to be sent to a printer.
-w width
Set width to width characters. Note that if your banner is in all lowercase, it will be narrower than width characters. If -w is not specified, the default width is 132. If -w is specified but width is not provided, the default is 80.
/usr/games/banner -w50 Happy Birthday! |lpr

basename name [suffix]
basename option
Remove leading directory components from a path. If suffix is given, remove that also. The result is printed to standard output.
--help
Print help message and then exit.
% basename /usr/lib/libm.a
libm.a
% basename /usr/lib/libm.a .a
libm

batch [options] [time]
Execute commands entered on standard input. If time is omitted, execute them when the system load permits (when the load average falls below 0.8). Very similar to at, but does not insist that the execution time be entered on the command line. See at for details.
-q letter
Place job in queue denoted by letter, where letter is one letter from a-z or A-Z. The default queue is a. (The batch queue defaults to b.) Higher-lettered queues run at a lower priority.
-v
Display the time a job will be executed.

bash [options] [file [arguments]]
sh [options] [file [arguments]]
Standard Linux shell, a command interpreter into which all other commands are entered. For more information, see Chapter 7, "bash: The Bourne-Again Shell".

bc [options] [files]
bc is a language (and compiler) whose syntax resembles that of C, but with unlimited-precision arithmetic. bc consists of identifiers, keywords, and symbols, which are briefly described in the following entries. Examples are given at the end. Interactively perform arbitrary-precision arithmetic or convert numbers from one base to another. Input can be taken from files or read from the standard input. To exit, type quit or EOF.
-l
Make functions from the math library available.
-s
Ignore all extensions, and process exactly as in POSIX.
-w
When extensions to POSIX bc are used, print a warning.
-q
Do not display welcome message.
-v
Print version number.
An identifier is a series of one or more characters. It must begin with a lowercase letter but may also contain digits and underscores. No uppercase letters are allowed. Identifiers are used as names for variables, arrays, and functions. Variables normally store arbitrary-precision numbers. Within the same program you may name a variable, an array, and a function using the same letter. The following identifiers would not conflict:
x
Variable x.
x[i]
Element i of array x. i can range from 0 to 2047 and can also be an expression.
x(y,z)
Call function x with parameters y and z.
ibase, obase, scale, and last store a value. Typing them on a line by themselves displays their current value. You can also change their values through assignment. The letters A-F are treated as digits whose values are 10-15.
Numbers that are input (e.g., typed) are read as base n (default is 10).
Numbers that are displayed are in base n (default is 10). Note: Once ibase has been changed from 10, use A to restore ibase or obase to decimal. Display computations using n decimal places (default is 0, meaning that results are truncated to integers). scale is normally used only for base-10 computations. Value of last printed number. A semicolon or a newline separates one statement from another. Curly braces are needed when grouping multiple statements. Do one or more statements if relational expression rel-expr is true. Otherwise, do nothing, or if else (an extension) is specified, do alternative statements. For example: if(x==y) {i = i + 1} else {i = i - 1} Repeat one or more statements while rel-expr is true; for example: while(i>0) {p = p*n; q = a/b; i = i-1} Similar to while; for example, to print the first 10 multiples of 5, you could type: for(i=1; i<=10; i++) i*5 GNU bf does not require three arguments to for. A missing argument 1 or 3 means that those expressions will never be evaluated. A missing argument 2 evaluates to the value 1. Terminate a while or for statement. GNU extension. It provides an alternate means of output. list consists of a series of comma-separated strings and expressions; print displays these entities in the order of the list. It does not print a newline when it terminates. Expressions are evaluated, printed, and assigned to the special variable last. Strings (which may contain special characters, i.e., characters beginning with \) are simply printed. Special characters can be: Alert or bell Backspace Form feed Newline Carriage return Double quote Tab Backslash GNU extension. When within a for statement, jump to the next iteration. GNU extension. Cause the bc processor to quit. GNU extension. Print the limits enforced by the local version of bc. Begin the definition of function f having the arguments args. The arguments are separated by commas. Statements follow on successive lines. End with a }. 
Set up x and y as variables local to a function definition, initialized to 0 and meaningless outside the function. Must appear first. Pass the value of expression expr back to the program. Return 0 if (expr) is left off. Used in function definitions. Compute the square root of expression expr. Compute how many significant digits are in expr. Same as length, but count only digits to the right of the decimal point. GNU extension. Read a number from standard input. Return value is the number read, converted via the value of ibase. These are available when bc is invoked with -l. Library functions set scale to 20. Compute the sine of angle, a constant or expression in radians. Compute the cosine of angle, a constant or expression in radians. Compute the arctangent of n, returning an angle in radians. Compute e to the power of expr. Compute the natural log of expr. Compute the Bessel function of integer order n. These consist of operators and other symbols. Operators can be arithmetic, unary, assignment, or relational: + - * / % ^ - ++ -- =+ =- =* =/ =% =^ = < <= > >= == != Enclose comments. Control the evaluation of expressions (change precedence). Can also be used around assignment statements to force the result to print. Use to group statements. Indicate array index. Use as a statement to print text. Note in these examples that when you type some quantity (a number or expression), it is evaluated and printed, but assignment statements produce no display. ibase = 8 Octal input 20 Evaluate this octal number 16 Terminal displays decimal value obase = 2 Display output in base 2 instead of base 10 20 Octal input 10000 Terminal now displays binary value ibase = A Restore base-10 input scale = 3 Truncate results to 3 decimal places 8/7 Evaluate a division 1.001001000 Oops! 
Forgot to reset output base to 10 obase=10 Input is decimal now, so A isn't needed 8/7 1.142 Terminal displays result (truncated) The following lines show the use of functions: define p(r,n){ Function p uses two arguments auto v v is a local variable v = r^n r raised to the n power return(v)} Value returned scale=5 x=p(2.5,2) x = 2.5 ^ 2 x Print value of x 6.25 length(x) Number of digits 3 scale(x) Number of places right of decimal point 2 biff [arguments] Notify user of mail arrival and sender's name. biff operates asynchronously. Mail notification works only if your system is running the comsat(8) server. The command biff y enables notification, and the command biff n disables notification. With no arguments, biff reports biff's current status. bison [options] file Given a file containing context-free grammar, convert into tables for subsequent parsing while sending output to file.c. This utility is both to a large extent compatible with yacc and named for it. All input files should use the suffix .y; output files will use the original prefix. All long options (those preceded by --) may instead be preceded by +. Use prefix for all output files. Generate file.h, producing #define statements that relate bison's token codes to the token names declared by the user. Use bison token numbers, not yacc-compatible translations, in file.h. Include token names and values of YYNTOKENS, YYNNTS, YYNRULES, and YYNSTATES in file.c. Exclude #line constructs from code produced in file.c. (Use after debugging is complete.) Suppress parser code in output, allowing only declarations. Assemble all translations into a switch statement body and print it to file.act. Output to file. Substitute prefix for yy in all external symbols. Compile runtime debugging code. Verbose mode. Print diagnostics and notes about parsing tables to file.output. Display version number. Duplicate yacc's conventions for naming output files. bootpd [options] [configfile [dumpfile] ] TCP/IP command. 
Internet Boot Protocol server. bootpd normally is run by /etc/inetd by including the following line in the file /etc/inetd.conf: bootps dgram udp wait root /etc/bootpd bootpd This causes bootpd to be started only when a boot request arrives. It may also be started in standalone mode, from the command line. Upon startup, bootpd first reads its configuration file, /etc/bootptab (or the configfile listed on the command line), then begins listening for BOOTREQUEST packets. bootpd looks in /etc/services to find the port numbers it should use. Two entries are extracted: bootps -- the bootp server listening port -- and bootpc -- the destination port used to reply to clients. If bootpd is compiled with the -DDEBUG option, receipt of a SIGUSR1 signal causes it to dump its memory-resident database to the file /etc/bootpd.dump or the command-line specified dumpfile. Force bootpd to work in directory. Specify the debugging level. Omitting level will increment the level by 1. Specify a timeout value in minutes. A timeout value of 0 means wait forever. The bootpd configuration file has a format in which two-character, case-sensitive tag symbols are used to represent host parameters. These parameter declarations are separated by colons. The general format is: hostname:tg=value:tg=value:tg=value where hostname is the name of a bootp client and tg is a tag symbol. The currently recognized tags are listed next. There is also a generic tag, Tn, where n is an RFC 1048 vendor field tag number. Generic data may be represented as either a stream of hexadecimal numbers or as a quoted string of ASCII characters. bootpgw [options] server Internet Boot Protocol Gateway. Maintain a gateway that forwards bootpd requests to server. In addition to dealing with BOOTREPLY packets, also deal with BOOTREQUEST packets. 
bootpgw is normally run by /etc/inetd by including the following line in the file /etc/inetd.conf:
bootps dgram udp wait root /etc/bootpgw bootpgw
This causes bootpgw to be started only when a boot request arrives. bootpgw takes all the same options as bootpd, except -c.

bootptest [options] server [template]
TCP/IP command. Test server's bootpd daemon by sending requests every second for 10 seconds or until the server responds. Read options from the template file, if provided.
-f file
Read the boot filename from file.
-h
Identify client by hardware, not IP, address.
-m magic-number
Provide magic-number as the first word of the vendor options field.

bzip2 [options] files
bunzip2 [options] files
bzcat [option] files
bzip2recover file
Compress files using the Burrows-Wheeler block-sorting compression algorithm, replacing each file with a compressed version that has a .bz2 extension appended. bunzip2 decompresses each file compressed by bzip2 (ignoring other files, except to print a warning). bzcat decompresses all specified files to standard output, and bzip2recover is used to try to recover data from damaged files.
--
End of options; treat all subsequent arguments as filenames.
-dig
Set block size to dig × 100KB.
-q
Quiet. Print only critical messages.
-s
Use less memory, at the expense of speed.
-t
Check the integrity of the files, but don't actually compress them.
-v
Verbose. Show the compression ratio for each file processed. Add more -v's to increase the verbosity.
-z
Forces compression, even if invoked as bunzip2 or bzcat.
--repetitive-fast, --repetitive-best
Sometimes useful in versions earlier than 0.9.5 (which has an improved sorting algorithm) for providing some control over the algorithm.

c++ [options] files
See g++.

cal [-jmy] [[month] year]
Print a 12-month calendar (beginning with January) for the given year or a one-month calendar of the given month and year. month ranges from 1 to 12. year ranges from 1 to 9999. With no arguments, print a calendar for the current month.
-j
Display Julian dates (days numbered 1 to 365, starting from January 1).
-m
Display Monday as the first day of the week.
-y
Display entire year.
cal 12 1995
cal 1994 > year_file

cardctl [options] command
System administration command. Control PCMCIA sockets or select the current scheme.
The current scheme is sent along with the address of any inserted cards to configuration scripts (by default located in /etc/pcmcia). The scheme command displays or changes the scheme. The other commands operate on a named card socket number or, if no number is given, all sockets. Display current socket configuration. Prepare the system for the card(s) to be ejected. Display card identification information. Notify system that a card has been inserted. Send reset signal to card. Restore power to socket and reconfigure for use. Display current scheme or change to specified scheme name. Display current socket status. Shut down device and cut power to socket. Look for card configuration information in directory instead of /etc/pcmcia. Use file to keep track of the current scheme instead of /var/run/pcmcia-scheme. Look for current socket information in file instead of /var/run/stab. cardmgr [options] System administration command. The PCMCIA card daemon. cardmgr monitors PCMCIA sockets for devices that have been added or removed. When a card is detected, it attempts to get the card's ID and configure it according to the card configuration database (usually stored in /etc/pcmcia/config). By default, cardmgr both creates a system log entry when it detects cards and beeps. Two high beeps mean it successfully identified and configured a device. One high beep followed by one low beep means it identified the device, but was unable to configure it successfully. One low beep means it could not identify the inserted card. Information on the currently configured cards can be found in /var/run/stab. Look in directory for the card configuration database instead of /etc/pcmcia. Use modprobe instead of insmod to load the PCMCIA device driver. Run in the foreground to process the current cards, then run as a daemon. Look in directory for card device modules instead of /lib/modules/ `uname -r`. Configure the cards present in one pass, then exit.
Write cardmgr's process ID to file instead of /var/run/cardmgr.pid. Run in quiet mode. No beeps. Write current socket information to file instead of /var/run/stab. Print version number and exit. cat [options] [files] Read (concatenate) one or more files and print them on standard output. Same as -vET. Number all nonblank output lines, starting with 1. Same as -vE. Print $ at the end of each line. Number all output lines, starting with 1. Squeeze down multiple blank lines to one blank line. Same as -vT. Print TAB characters as ^I. Ignored; retained for Unix compatibility. Display control and nonprinting characters, with the exception of LINEFEED and TAB. cat ch1 Display a file cat ch1 ch2 ch3 > all Combine files cat note5 >> notes Append to a file cat > temp1 Create file at terminal; end with EOF cat > temp2 << STOP Create file at terminal; end with STOP cc [options] files See gcc. cpp [options] [ ifile [ ofile ] ] GNU C language preprocessor. cpp is invoked as the first pass of any C compilation by the gcc command. The output of cpp is a form acceptable as input to the next pass of the C compiler; cpp is normally invoked through gcc rather than run directly. ifile and ofile are, respectively, the input and output for the preprocessor; they default to standard input and standard output. Do not allow $ in identifiers. Suppress normal output. Print series of #defines that create the macros used in the source file. Similar to -dM but exclude predefined macros and include results of preprocessing. Search dir for header files when a header file is not found in any of the included directories. Process macros in file before processing main files. Process file before main file. When adding directories with -iwithprefix, prepend prefix to the directory's name. Append dir to the prefix specified with the previous option and search the resulting directory for header files. Warn verbosely. Produce a fatal error in every case in which -pedantic would have produced a warning. Behave like traditional C, not ANSI. Suppress definition of all nonstandard macros. Assert name with value def as if defined by a #assert.
Pass along all comments (except those found on cpp directive lines). By default, cpp strips C-style comments. Define name with value def as if by a #define. If no =def is given, name is defined with value 1. -D has lower precedence than -U. Suppress normal output. Print a rule for make that describes the main source file's dependencies. If -MG is specified, assume that missing header files are actually generated files, and look for them in the source file's directory. Similar to -M, but output to file; also compile the source. Similar to -M. Describe only those files included as a result of #include "file". Similar to -MD, but describe only the user's header files. Preprocess input without producing line-control information used by next pass of C compiler. Remove any initial definition of name, where name is a reserved symbol predefined by the preprocessor or a name defined on a -D option. Names predefined by cpp are unix and i386 (for Intel systems). Warn when encountering the beginning of a nested comment. Warn when encountering constructs that are interpreted differently in ANSI from traditional C. cpp understands various special names, some of which are: Current date (e.g., Oct 10 1999) Current filename (as a C string) Current source line number (as a decimal integer) Current time (e.g., 12:00:00) These special names can be used anywhere, just like any other names. cpp also handles the standard preprocessor directives; text following an #else is included only if the preceding #if and all intervening #elif directives evaluate to 0. No additional tokens are permitted on a directive line. Report fatal errors. Report warnings, but then continue processing. cfdisk [options] [device] System administration command. Partition a hard disk. device may be /dev/hda (default), /dev/hdb, /dev/sda, /dev/sdb, /dev/sdc, or /dev/sdd. See also fdisk. Highlight the current partition with a cursor, not reverse video. Specify the number of cylinders. Specify the number of heads. Specify the number of sectors per track. Do not read the partition table; partition from scratch.
Display the partition table in format, which must be r (raw data), s (sector order), or t (raw format). Move among partitions. Toggle partition's bootable flag. Delete partition (allow other partitions to use its space). Alter the disk's geometry. Prompt for what to change: cylinders, heads, or sectors (c, h, or s, respectively). Attempt to ensure maximum usage of disk space in the partition. Create a new partition. Prompt for more information. Display the partition table. Quit without saving information. Prompt for a new filesystem type, and change to that type. Change the partition size units, rotating from megabytes to sectors to cylinders and back. Save information. Note that this letter must be uppercase. chattr [options] mode files Modify file attributes. Specific to Linux Second Extended Filesystem. Behaves similarly to symbolic chmod, using +, -, and =. mode is in the form opcode attribute. See also lsattr. Modify directories and their contents recursively. Print modes of attributes after changing them. Set the file's version. Add attribute. Remove attribute. Assign attributes (removing unspecified attributes). Don't update access time on modify. Append only for writing. Can be set or cleared only by a privileged user. Compressed. No dump. Immutable. Can be set or cleared only by a privileged user. Secure deletion; the contents are zeroed on deletion. Undeletable. Synchronous updates. chattr +a myfile As superuser chfn [options] [username] Change the information that is stored in /etc/passwd and displayed when a user is fingered. Without options, chfn enters interactive mode and prompts for changes. To make a field blank, enter the keyword none. Only a privileged user can change information for another user. For regular users, chfn prompts for the user's password before making the change. Specify new full name. Specify new home phone number. Specify new office number. Specify new office phone number. Print version information and then exit. 
chfn -f "Ellen Siever" ellen chgrp [options] newgroup files chgrp [options] --reference=filename files Change the group of one or more files to newgroup. newgroup is either a group ID number or a group name located in /etc/group. Only the owner of a file or a privileged user may change its group. Print information about those files that are changed. Do not print error messages about files that cannot be changed. Traverse subdirectories recursively, applying changes. Change the group to that associated with filename. In this case, newgroup is not specified. Verbosely describe ownership changes. chmod [options] mode files chmod [options] --reference=filename files Change the access mode (permissions) of one or more files. Only the owner of a file or a privileged user may change its mode. mode can be numeric or an expression in the form of who opcode permission. who is optional (if omitted, default is a); choose only one opcode. Multiple modes may be specified, separated by commas. Print information about files that are changed. Do not notify user of files that chmod cannot change. Change permissions to those associated with filename. Print information about each file, whether changed or not. User Group Other All (default) Add permission. Remove permission. Assign permission (and remove permission of the unspecified fields). Read. Write. Execute. Set user (or group) ID. Sticky bit; save text (file) mode or prevent removal of files by nonowners (directory). User's present permission. Group's present permission. Other's present permission. Alternatively, specify permissions by a three-digit octal number. The first digit designates owner permission; the second, group permission; and the third, other's permission. Permissions are calculated by adding the following octal values: 4 Read 2 Write 1 Execute Note: A fourth digit may precede this sequence. This digit assigns the following modes: Set user ID on execution to grant permissions to process based on file's owner, not on permissions of user who created the process.
Set group ID on execution to grant permissions to process based on the file's group, not on permissions of the user who created the process. Set the user ID, assign read/write/execute permission by owner, and assign read/execute permission by group and others: chmod 4755 file chown [options] newowner files Change the ownership of one or more files to newowner, which is either a user ID number or a login name located in /etc/passwd. Only a privileged user may change its owner. Follow symbolic links. Change the ownership of each symbolic link (on systems that allow it), rather than the referenced file. Print information about all files that chown attempts to change, whether or not they are actually changed. Change owner to the owner of filename instead of specifying a new owner explicitly. chpasswd [option] System administration command. Change user passwords in a batch. chpasswd accepts input in the form of one username:password pair per line. If the -e option is not specified, password will be encrypted before being stored. Passwords given are already encrypted. chroot newroot [command] System administration command. Change root directory for command or, if none is specified, for a new copy of the user's shell. This command or shell is executed relative to the new root. The meaning of any initial / in pathnames is changed to newroot for a command and any of its children. In addition, the initial working directory is newroot. This command is restricted to privileged users. chsh [options] [username] Change your login shell, interactively or on the command line. Warn if shell does not exist in /etc/shells. Specify the full path to the shell. chsh prompts for your password. Only a privileged user can change another user's shell. Print valid shells, as listed in /etc/shells, and then exit. Specify new login shell. chsh -s /bin/tcsh cksum [files] Compute a cyclic redundancy check (CRC) checksum for all files; used to ensure that a file was not corrupted during transfer. Read from standard input if the character - or no files are given. Display the resulting checksum, the number of bytes in the file, and (unless reading from standard input) the filename. clear Clear the terminal display.
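The cksum output described above — CRC, byte count, then filename — looks like this (the filename is invented):

```shell
# cksum prints the CRC, the byte count (6 for "hello\n"), and the name.
printf 'hello\n' > greeting.txt
cksum greeting.txt
```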
cmp [options] file1 file2 [skip1 [skip2]] Compare file1 with file2. Use standard input if file1 is - or missing. See also comm and diff. Files can be of any type. skip1 and skip2 are optional offsets in the files at which the comparison is to start. Print differing bytes as characters. Ignore the first num bytes of input. Print offsets and codes of all differing bytes. Work silently; print nothing, but return exit codes: 0 Files are identical. 1 Files are different. 2 Files are inaccessible. Print a message if two files are the same (exit code is 0): cmp -s old new && echo 'no changes' col [options] A postprocessing filter that handles reverse linefeeds and escape characters, allowing output from tbl or nroff to appear in reasonable form on a terminal. Ignore backspace characters; helpful when printing manpages. Process half-line vertical motions, but not reverse line motion. (Normally, half-line input motion is displayed on the next full line.) Buffer at least n lines in memory. The default buffer size is 128 lines. Normally, col saves printing time by converting sequences of spaces to tabs. Use -x to suppress this conversion. Run myfile through tbl and nroff, then capture output on screen by filtering through col and more: tbl myfile | nroff | col | more Save manpage output for the ls command in out.print, stripping out backspaces (which would otherwise appear as ^H): man ls | col -b > out.print colcrt [options] [files] A postprocessing filter that handles reverse linefeeds and escape characters, allowing output from tbl or nroff to appear in reasonable form on a terminal. Put half-line characters (e.g., subscripts or superscripts) and underlining (changed to dashes) on a new line between output lines. Do not underline. Double space by printing all half-lines. colrm [start [stop]] Remove specified columns from a file, where a column is a single character in a line. Read from standard input and write to standard output.
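The cmp exit codes listed in the entry above are what make the silent form useful in shell logic; a small sketch (filenames invented):

```shell
printf 'same\n' > old
printf 'same\n' > new
printf 'changed\n' > other
cmp -s old new && echo 'no changes'      # exit code 0: identical
cmp -s old other || echo 'files differ'  # exit code 1: different
```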
Columns are numbered starting with 1; begin deleting columns at (including) the start column, and stop at (including) the stop column. Entering a tab increments the column count to the next multiple of either the start or stop column; entering a backspace decrements it by 1. colrm 3 5 < test1 > test2 column [options] [files] Format input from one or more files into columns, filling rows first. Read from standard input if no files are specified. Format output into num columns. Delimit table columns with char. Meaningful only with -t. Format input into a table. Delimit with whitespace, unless an alternate delimiter has been provided with -s. Fill columns before filling rows. comm [options] file1 file2 Compare lines common to the sorted files file1 and file2. Three-column output is produced: lines unique to file1, lines unique to file2, and lines common to both files. comm is similar to diff in that both commands compare two files. But comm can also be used like uniq; that is, comm selects duplicate or unique lines between two sorted files, whereas uniq selects duplicate or unique lines within the same sorted file. Read the standard input. Suppress printing of column num. Multiple columns may be specified and should not be space-separated. Print help message and exit. Print version information and exit. Compare two lists of top-10 movies, and display items that appear in both lists: comm -12 siskel_top10 ebert_top10 compress [options] files Compress one or more files, replacing each with the compressed file of the same name with .Z appended. If no file is specified, compress standard input. Each file specified is compressed separately. compress ignores files that are symbolic links. See also gzip. Limit the maximum number of bits. Write output to standard output, not to a .Z file. Decompress instead of compressing. Same as uncompress. Force generation of an output file even if one already exists. If any of the specified files is a directory, compress recursively. 
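The three-column behavior of comm described above is easiest to see with two small sorted files (contents invented):

```shell
printf 'annie\nhall\n' > list1   # input files must be sorted
printf 'fargo\nhall\n' > list2
comm -12 list1 list2             # suppress columns 1 and 2: only "hall" remains
```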
Print compression statistics. Print version and compilation information and then exit. cp [options] file1 file2 cp [options] files directory Copy file1 to file2, or copy one or more files to the same names under directory. Preserve attributes of original files where possible. Same as -dpR. Back up files that would otherwise be overwritten. Do not dereference symbolic links; preserve hard link relationships between source and copy. Remove existing files in the destination. Prompt before overwriting destination files. Make hard links, not copies, of nondirectories. Preserve all information, including owner, group, permissions, and timestamps. Preserve intermediate directories in source. The last argument must be the name of an existing directory. For example, the command: cp --parents jphekman/book/ch1 newdir copies the file jphekman/book/ch1 to newdir/jphekman/book/ch1. Copy directories recursively. Set suffix to be appended to backup files. This may also be set with the SIMPLE_BACKUP_SUFFIX environment variable. The default is ~. You need to explicitly include a period if you want one before the suffix (e.g., specify .bak, not bak). Make symbolic links instead of copying. Source filenames must be absolute. Do not copy a file to an existing destination with the same or newer modification time. Before copying, print the name of each file. Set the type of backups made. You may also use the VERSION_CONTROL environment variable. The default is existing. Valid arguments are: Always make numbered backups. Make numbered backups of files that already have them; otherwise, make simple backups. Always make simple backups. Ignore subdirectories on other filesystems. cpio flags [options] Copy file archives in from or out to tape or disk, or to another location on the local machine. Each of the three flags -i, -o, or -p accepts different options. Copy in (extract) from an archive files whose names match selected patterns. Each pattern can include Bourne shell filename metacharacters. (Patterns should be quoted or escaped so they are interpreted by cpio, not by the shell.) If pattern is omitted, all files are copied in.
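The cp --parents example above recreates the source path under the destination directory; a runnable sketch using the same (invented) paths:

```shell
mkdir -p jphekman/book newdir
printf 'chapter one\n' > jphekman/book/ch1
cp --parents jphekman/book/ch1 newdir
ls newdir/jphekman/book          # the copy keeps the full relative path
```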
Existing files are not overwritten by older versions from the archive unless -u is specified. Copy out to an archive a list of files whose names are given on the standard input. Copy (pass) files to another directory on the same system. Destination pathnames are interpreted relative to the named directory. Options available to the -i, -o, and -p flags are shown here. (The - is omitted for clarity): i: bcdf mnrtsuv B SVCEHMR IF o: 0a c vABL VC HM O F p: 0a d lm uv L V R Set blocksize to size × 512 bytes. Read or write header information as ASCII characters; useful when source and destination machines are different types. Like -B, but blocksize can be any positive integer n. Create directories as needed. Extract filenames from the archives; for copy-in the default is autodetection of the format. Valid formats (all caps also accepted) are: Binary Old (POSIX.1) portable format New (SVR4) portable format New (SVR4) portable format with checksum added Tar POSIX.1 tar (also recognizes GNU tar archives) HP-UX's binary (obsolete) HP-UX's portable format Read file as an input archive. May be on a remote machine (see -F). Ignored. For backward compatibility. Link files instead of copying. Generate a list of files whose names end in .old using find; use list as input to cpio: find . -name "*.old" -print | cpio -ocBv > /dev/rst8 Restore from a tape drive all files whose names contain save (subdirectories are created if needed): cpio -icdv "*save*" < /dev/rst8 Move a directory tree: find . -depth -print | cpio -padm /mydir cron System administration command. Normally started in a system startup file. Execute commands at scheduled times, as specified in users' files in /var/cron/tabs. Each file shares its name with the user who owns it. The files are controlled via the crontab command. Each entry contains six fields: minute, hour, day of month, month, day of week, and the command to run. A field may contain a single value, a comma-separated list, a range, or an asterisk to indicate all possible values. For example, assuming this crontab entry: 59 3 * * 5 find / -print | backup_program cron runs the find command at 3:59 a.m. every Friday. crontab [options] [file] Create, list, or edit a user's crontab file. Indicates which user's crontab file will be acted upon.
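The crontab entry shown in the cron entry above has five schedule fields (minute, hour, day of month, month, day of week) followed by the command; a quick way to pull them apart:

```shell
entry='59 3 * * 5 find / -print | backup_program'
# Fields 1-5 are the schedule; everything after them is the command line.
echo "$entry" | awk '{printf "min=%s hour=%s dom=%s mon=%s dow=%s\n", $1, $2, $3, $4, $5}'
```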
csh [options] [file [arguments]] C shell, a command interpreter into which all other commands are entered. For more information, see Chapter 8, "csh and tcsh". csplit [options] file arguments Separate file into context-based sections and place sections in files named xx00 through xxn (n < 100), breaking file at each pattern specified in arguments. See also split. Read from standard input. Append suffix to output filename. This option causes -n to be ignored. suffix must specify how to convert the binary integer to readable form by including exactly one of the following: %d, %i, %u, %o, %x, or %X. The value of suffix determines the format for numbers as follows: Signed decimal Same as %d Unsigned decimal Octal Hexadecimal Same as %x. Name new files prefix00 through prefixn (default is xx00 through xxn). Keep newly created files, even when an error occurs (which would normally remove these files). This is useful when you need to specify an arbitrarily large repeat argument, {n}, and you don't want an out-of-range error to cause removal of the new files. Use output filenames with numbers num digits long. The default is 2. Suppress all character counts. Do not create empty output files. However, number as if those files had been created. Any one or a combination of the following expressions may be specified as arguments. Arguments containing blanks or other special characters should be surrounded by single quotes. Create file from the current line up to the line containing the regular expression expr. offset should be of the form +n or -n, where n is the number of lines below or above expr. Same as /expr/ except no file is created for lines previous to line containing expr. Create file from current line up to (but not including) line number num. When followed by a repeat count (number inside {}), put the next num lines of input into another output file. Repeat argument n times. May follow any of the preceding arguments. 
Files will split at instances of expr or in blocks of num lines. If * is given instead of n, repeat argument until input is exhausted. Create up to 20 chapter files from the file novel: csplit -k -f chap. novel '/CHAPTER/' '{20}' Create up to 100 address files (xx00 through xx99), each four lines long, from a database named address_list: csplit -k address_list 4 {99} ctags [options] files Create a list of function and macro names that are defined in the specified C, C++, FORTRAN, Java, Perl, yacc, or other source files. The output list (named tags by default) contains lines of the form: name file context where name is the function or macro name, file is the source file in which name is defined, and context is a search pattern that shows the line of code containing name. After the list of tags is created, you can invoke vi on any file and type: :set tags=tagsfile :tag name This switches the vi editor to the source file associated with the name listed in tagsfile (which you specify with -t). etags produces an equivalent file for tags to be used with Emacs. Append tag output to existing list of tags. Include tag entries for C preprocessor definitions. Add a note to the tags file that file should be consulted in addition to the normal input file. Consider the files that follow this option to be written in language. Use the -h option for a list of languages and their default filename extensions. Write to file. Include a tag for each line that matches regexp in the files following this option. Don't include tags based on regular-expression matching for the files that follow this option. Include tag entries for typedefs. Update tags file to reflect new locations of functions (e.g., when functions are moved to a different source file). Old tags are deleted; new tags are appended. Print to standard output a listing (index) of each function, source file, and page number (1 page = 64 lines). Suppress warning messages. 
Produce a listing of each function, and its line number, source file, and context. Expect .c and .h files to contain C++, not C, code. Print usage information and exit. Normally ctags uses indentation to parse the tag file; this option tells it to rely on it less. Include tag entries for typedefs, structs, enums, unions, and C++ member functions. Print the version number and exit. cut options [files] Cut out selected columns or fields from one or more files. In the following options, list is a sequence of integers. Use a comma between separate values and a hyphen to specify a range (e.g., 1-10,15,20 or 50-). See also paste and join. Specify list of positions; only bytes in these positions will be printed. Cut the column positions identified in list. Use with -f to specify field delimiter as character c (default is tab); special characters (e.g., a space) must be quoted. Cut the fields identified in list. Don't split multibyte characters. Use with -f to suppress lines without delimiters. Use string as the output delimiter. By default, the output delimiter is the same as the input delimiter. Extract usernames and real names from /etc/passwd: cut -d: -f1,5 /etc/passwd Find out who is logged on, but list only login names: who | cut -d" " -f1 Cut characters in the fourth column of file, and paste them back as the first column in the same file: cut -c4 file | paste - file date [options] [+format] [date] Print the current date and time; a privileged user can also set the date. Display date, which should be in quotes and may be in the format d days or m months d days to print a date in the future. Specify ago to print a date in the past. You may include formatting (see the "Format" section that follows). Like -d but printed once for each line of datefile. Display in ISO-8601 format. If specified, timespec can have one of the values date (for date only), hours, minutes, or seconds to get the indicated precision. Display the time file was last modified. Display the date in RFC 822 format. Set the date. Set the date to Greenwich Mean Time, not local time. AM or PM.
Time in %I:%M:%S %p (12-hour) format. Seconds since "The Epoch," 1970-01-01 00:00:00 UTC (a nonstandard extension). Insert a tab. Day of week (Sunday = 0). Country-specific date format. Four-digit year (e.g., 1996). Set the date to July 1 (0701), 4:00 a.m. (0400), 1995 (95): date 0701040095 The command: date +"Hello%t Date is %D %n%t Time is %T" produces a formatted date as follows: Hello Date is 05/09/93 Time is 17:53:39 dd options Make a copy of an input file (if) using the specified conditions, and send the results to the output file (or standard output if of is not specified). Any number of options can be supplied, although if and of are the most common and are usually specified first. Because dd can handle arbitrary blocksizes, it is useful when converting between raw physical devices. Set input and output blocksize to n bytes; this option overrides ibs and obs. Set the size of the conversion buffer (logical record length) to n bytes. Use only if the conversion flag is ascii, ebcdic, ibm, block, or unblock. Convert the input according to one or more (comma-separated) flags listed next. The first five flags are mutually exclusive. EBCDIC to ASCII. ASCII to EBCDIC. ASCII to EBCDIC with IBM conventions. Variable-length records (i.e., those terminated by a newline) to fixed-length records. Fixed-length records to variable-length. Uppercase to lowercase. Lowercase to uppercase. Continue processing after read errors. Don't truncate output file. Swap each pair of input bytes. Pad input blocks to ibs with trailing zeros. Copy only n input blocks. Set input blocksize to n bytes (default is 512). Read input from file (default is standard input). Set output blocksize to n bytes (default is 512). Write output to file (default is standard output). Skip n output-sized blocks from start of output file. Skip n input-sized blocks from start of input file. You can multiply size values (n) by a factor of 1024, 512, or 2 by appending the letter k, b, or w, respectively.
You can use the letter x as a multiplication operator between two numbers. Convert an input file to all lowercase: dd if=caps_file of=small_file conv=lcase Retrieve variable-length data; write it as fixed-length to out: data_retrieval_cmd | dd of=out conv=sync,block debugfs [[option] device] System administration command. Debug an ext2 filesystem. device is the special file corresponding to the device containing the ext2 filesystem (e.g., /dev/hda3). Open the filesystem read-write. Dump the contents of an inode to standard output. Change the current working directory to directory. Change the root directory to be the specified inode. Close the currently open filesystem. Clear the contents of the inode corresponding to file. Dump the contents of an inode to out_file. Expand directory. Find first free block starting from goal (if specified) and allocate it. Find a free inode and allocate it. Mark block as not allocated. Free the inode corresponding to file. Print a list of commands understood by debugfs. Do block-to-inode translation. Create an ext2 filesystem on device. Remove file and deallocate its blocks. Create a link. Emulate the ls command. Modify the contents of the inode corresponding to file. Make directory. Create a special device file. Do inode-to-name translation. Open a filesystem. Print the current working directory. Quit debugfs. Remove file. Remove directory. Mark block as allocated. Mark in use the inode corresponding to file. List the contents of the super block. Dump the contents of the inode corresponding to file. Test whether block is marked as allocated. Test whether the inode corresponding to file is marked as allocated. Remove a link. Create a file in the filesystem named file, and copy the contents of source_file into the destination file. depmod [options] modules System administration command. Create a dependency file for the modules given on the command line. 
This dependency file can be used by modprobe to automatically load the relevant modules. The normal use of depmod is to include the line /sbin/depmod -a in one of the files in /etc/rc.d so the correct module dependencies will be available after booting the system. Create dependencies for all modules listed in /etc/conf.modules. Debug mode. Show all commands being issued. Print a list of all unresolved symbols. Print a list of all processed modules. Information about modules: which ones depend on others, and which directories correspond to particular types of modules. Programs that depmod relies on. df [options] [name] Report the amount of free disk space available on all mounted filesystems or on the given name. (df cannot report on unmounted filesystems.) Disk space is shown in 1KB blocks (default) or 512-byte blocks (if the environment variable POSIXLY_CORRECT is set). name can be a device name (e.g., /dev/hd*), the directory name of a mounting point (e.g., /usr), or a directory name (in which case df reports on the entire filesystem in which that directory is mounted). Include empty filesystems (those with 0 blocks). Show space as n-byte blocks. Print sizes in a format friendly to human readers (e.g., 1.9G instead of 1967156). Like -h, but show as power of 1000 rather than 1024. Report free, used, and percent-used inodes. Print sizes in kilobytes. Show local filesystems only. Print sizes in megabytes. Show results without invoking sync first (i.e., without flushing the buffers). This is the default. Use POSIX output format (i.e., print information about each filesystem on exactly one line). Invoke sync (flush buffers) before getting and showing sizes. Show only type filesystems. Print the type of each filesystem in addition to the sizes. Show only filesystems that are not of type type. Print the version and then exit. diff3 [options] file1 file2 file3 Compare three files and report the differences. Overlapping changes are places where both of the newer files differ from each other and at least one of them differs from the older file.
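A minimal diff3 run (filenames invented) shows how the file that differs is flagged with ==== markers:

```shell
printf 'shared line\n' > mine
printf 'edited line\n' > yours
printf 'shared line\n' > older
diff3 mine yours older    # flags the second file with ====2
```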
Begin lines with a tab instead of two spaces in output to line tabs up properly. dip [options] [chat scriptfile] System administration command. Set up or initiate dial-up Internet connections. dip can be used to establish connections for users dialing out or dialing in. Commands can be used in interactive mode or placed in a script file for use in dial-out connections. To establish dial-in connections, dip is often used as a shell and may be executed using the commands diplogin or diplogini. In dial-in mode, prompt for username and password. Same as the diplogini command. Initiate a login shell for a dial-in connection. Same as the diplogin command. Kill the most recent dip process or the process running on the device specified by the -l option. Used with the -k option. Specifies a tty device. Maximum Transfer Unit. The default is 296. The protocol to use: SLIP, CSLIP, PPP, or TERM. Command mode. This is usually done for testing. Most of these commands can be used either in interactive mode or in a script file. Beep the terminal the specified number of times. Retrieve local and remote IP addresses using the BOOTP protocol. Send a BREAK. Map a modem response keyword to a numeric code. Modify interface characteristics or the routing table, before the link comes up, when it is up, when it goes down, or after it is down. The syntax for arguments is the same as arguments for the ifconfig or route commands. Set the number of data bits. Decrement $variable by value. The default is 1. Set default route to the IP address of the host connected to. Dial phonenumber. Abort if remote modem doesn't answer within timeout seconds. Set $errlvl according to the modem response. Enable or disable the display of modem commands. Exit the script. Optionally return the number n as the exit status. Clear the input buffer. Set $variable to value. If ask is specified, prompt the user for a value. If remote is specified, retrieve the value from the remote system.
Abort after timeout seconds. List available commands. Jump to a label if an expression evaluates to true. An expression compares a variable to a constant using one of these operators: =, !=, <, >, <=, or >=. Increment $variable by value. The default is 1. Set the string used to initialize the modem. The default is ATE0 Q0 V1 X1. Set the connection protocol. Valid values are SLIP, CSLIP, PPP, and TERM. The default is SLIP. Set the subnet mask. Set the line parity to even, odd, or none. Prompt user for password. Install a proxy ARP entry in the local ARP table. Display the content of $variable. Execute command in a shell, and send output to the serial device. Commands are executed using the user's real UID. Specify the serial device the modem is attached to. Exit with a nonzero exit status. Abort the connection. Reset the modem. Prompt user for the variable part of an ACE System SecureID password and send it together with the stored prefix to the remote system. Store the fixed part of an ACE System SecureID password. Send string to the serial device. Execute command in a shell using the user's real UID. Wait for an S/Key challenge, then prompt user for the secret key. Generate and send the response. Abort if challenge is not received within timeout seconds. S/Key support must be compiled into dip. Wait time seconds. Set the port speed. Default is 38400. Set the number of stop bits. Enable terminal mode. Pass keyboard input directly to the serial device. Set the number of seconds the line can be inactive before the link is closed. Wait timeout seconds for text to arrive from the remote system. If timeout is not specified, wait forever. dirname pathname Print pathname excluding the last level. Useful for stripping the actual filename from a pathname. If there are no slashes (no directory levels) in pathname, dirname prints . to indicate the current directory. See also basename. dmesg [options] System administration command. Display the system control messages from the kernel ring buffer.
This buffer stores all messages since the last system boot or the most recent ones, if the buffer has been filled. Clear buffer after printing messages. Set the level of system message that will display on console. dnsdomainname TCP/IP command. Print the system's DNS domain name. See also hostname. domainname [name] NFS/NIS command. Set or display name of current NIS domain. With no argument, domainname displays the name of the current NIS domain. Only a privileged user can set the domain name by giving an argument; this is usually done in a startup script. dosfsck [options] device System administration command. Similar to fsck, but specifically intended for MS-DOS filesystems. Use the Atari version of the MS-DOS filesystem. Drop the named file from the file allocation table. Force checking, even if kernel has already marked the filesystem as valid. dosfsck will normally exit without checking if the system appears to be clean. Consult file for a list of bad blocks, in addition to checking for others. Ensure that no changes are made to the filesystem. When queried, answer "no." "Preen." Repair all bad blocks noninteractively. Display timing statistics. Verbose. When queried, answer "yes." Expect to find the superblock at size; if it's not there, exit. Flush buffer caches before checking. Consult file for list of bad blocks instead of checking filesystem for them. du [options] [directories] Print disk usage (as the number of 1KB blocks used by each named directory and its subdirectories; default is current directory). Print usage for all files, not just subdirectories. Print sizes in bytes. In addition to normal output, print grand total of all arguments. Follow symbolic links, but only if they are command-line arguments. Print sizes in human-reader-friendly format. Print sizes in kilobytes (this is the default). Count the size of all files, whether or not they have already appeared (i.e., via a hard link). Exclude files that match pattern. Report sizes for directories only down to num levels below the starting point (which is level 0).
Print only the grand total for each named directory. Do not include the sizes of subdirectories when totaling the size of parent directories. Display usage of files in current filesystem only. Exclude files that match any pattern in file. dumpe2fs device System administration command. Print information about device's superblock and blocks group. dumpkeys [options] Print information about the keyboard driver's translation tables to standard output. Further information is available in the manual pages under keytables. Default Same as --full-table Same as --separate-lines One line for each keycode up to the first hole, then one line per modifier/keycode pair e2fsck [options] device fsck.ext2 [options] device System administration command. Similar to fsck, but specifically intended for Linux Second Extended Filesystems. When checking a second extended filesystem, fsck calls this command. Use superblock instead of default superblock. Debugging mode. Force checking, even if kernel has already marked the filesystem as valid. e2fsck will normally exit without checking if the system appears to be clean. echo [-n] [string] This is the /bin/echo command. echo also exists as a command built into the C shell and bash. The following character sequences have special meanings: Alert (bell) Suppress trailing newline Horizontal tab Vertical tab Literal backslash The octal character whose ASCII code is nnn. Enable character sequences with special meaning. (In some versions, this option is not required in order to make the sequences work.) Disable character sequences with special meaning. Suppress printing of newline after text. /bin/echo "testing printer" | lp /bin/echo "TITLE\nTITLE" > file ; cat doc1 doc2 >> file /bin/echo "Warning: ringing bell \a" egrep [options] [regexp] [files] Search one or more files for lines that match an extended regular expression regexp.
egrep doesn't support the regular expressions \(, \), \n, \<, \>, \{, or \} but does support the other expressions, as well as the extended set +, ?, |, and ( ). Remember to enclose these characters in quotes. Regular expressions are described in Chapter 9, "Pattern Matching". Exit status is 0 if any lines match, 1 if none match, and 2 for errors. See grep for the list of available options. Also see fgrep. egrep typically runs faster than those commands. egrep 'Victor(ia)*' file egrep '(Victor|Victoria)' file Find and print strings such as old.doc1 or new.doc2 in files, and include their line numbers: egrep -n '(old|new)\.doc?' files emacs [options] [files] A text editor and all-purpose work environment. For more information, see Chapter 10, "The Emacs Editor". env [option] [variable=value ... ] [command] Display the current environment or, if an environment variable is specified, set it to a new value and display the modified environment. If command is specified, execute it under the modified environment. Ignore current environment entirely. Unset the specified variable. etags [options] files Create a list of function and macro names that are defined in the specified C, Pascal, FORTRAN, yacc, or flex source files. The output list (named tags by default) contains lines of the form: where name is the function or macro name, file is the source file in which name is defined, and context is a search pattern that shows the line of code containing name. After the list of tags is created, you can invoke Emacs on any file and type: ESC-x visit-tags-table You will be prompted for the name of the tag table; the default is TAGS. To switch to the source file associated with the name listed in tagsfile, type: ESC-x find-tag You will be prompted for the tag you would like Emacs to search for. ctags produces an equivalent tags file for use with vi. Consider the files that follow this option to be written in language. 
Use the -h option for a list of languages and their default filename extensions. Include a tag for each line that matches regexp in the files following this option. Do not include tag entries for C preprocessor definitions. Print usage information. Don't include tags based on regular-expression matching for the files that follow this option. Normally etags uses indentation to parse the tag file; this option tells it to rely on it less. ex [options] file An interactive command-based editor. For more information, see Chapter 11, "The vi Editor". expand [options] files Convert tabs in given files (or standard input, if the file is named -) to appropriate number of spaces; write results to standard output. tabs is a comma-separated list of integers that specify the placement of tab stops. If exactly one integer is provided, the tab stops are set to every integer spaces. By default, tab stops are 8 spaces apart. With -t and --tabs, the list may be separated by whitespace instead of commas. Convert tabs only at the beginning of lines. expr arg1 operator arg2 [ operator arg3 ... ] Evaluate arguments as expressions and print the result. Arguments and operators must be separated by spaces. In most cases, an argument is an integer, typed literally or represented by a shell variable. There are three types of operators: arithmetic, relational, and logical, as well as keyword expressions. Exit status for expr is 0 (expression is nonzero and nonnull), 1 (expression is 0 or null), or 2 (expression is invalid). Use these to produce mathematical expressions whose results are printed: Add arg2 to arg1. Subtract arg2 from arg1. Multiply the arguments. Divide arg1 by arg2. Take the remainder when arg1 is divided by arg2. Addition and subtraction are evaluated last, unless they are grouped inside parentheses. The symbols *, (, and ) have meaning to the shell, so they must be escaped (preceded by a backslash or enclosed in single quotes). Use these to compare two arguments. 
Arguments can also be words, in which case comparisons are defined by the locale. If the comparison statement is true, the result is 1; if false, the result is 0. Symbols > and < must be escaped. Are the arguments equal? Are the arguments different? Is arg1 greater than arg2? Is arg1 greater than or equal to arg2? Is arg1 less than arg2? Is arg1 less than or equal to arg2? Use these to compare two arguments. Depending on the values, the result can be arg1 (or some portion of it), arg2, or 0. Symbols | and & must be escaped. Logical OR; if arg1 has a nonzero (and nonnull) value, the result is arg1; otherwise, the result is arg2. Logical AND; if both arg1 and arg2 have a nonzero (and nonnull) value, the result is arg1; otherwise, the result is 0. Like grep; arg2 is a pattern to search for in arg1. arg2 must be a regular expression. If part of the arg2 pattern is enclosed in \( \), the result is the portion of arg1 that matches; otherwise, the result is simply the number of characters that match. By default, a pattern match always applies to the beginning of the first argument (the search string implicitly begins with a ^). Start the search string with .* to match other parts of the string. Return the first position in string that matches the first possible character in character-list. Continue through character-list until a match is found, or return 0. Return the length of string. Same as string : regex. Treat token as a string, even if it would normally be a keyword or an operator. Return a section of string, beginning with start, with a maximum length of length characters. Return null when given a negative or nonnumeric start or length. Division happens first; result is 10: expr 5 + 10 / 2 Addition happens first; result is 7 (truncated from 7.5): expr \( 5 + 10 \) / 2 Add 1 to variable i. 
This is how variables are incremented in shell scripts: i=`expr $i + 1` Print 1 (true) if variable a is the string "hello": expr $a = hello Print 1 (true) if b plus 5 equals 10 or more: expr $b + 5 \>= 10 Find the 5th, 6th, and 7th letters of the word character: expr substr character 5 3 In the examples that follow, variable p is the string "version.100". This command prints the number of characters in p: expr $p : '.*' Result is 11 Match all characters and print them: expr $p : '\(.*\)' Result is "version.100" Print the number of lowercase letters at the beginning of p: expr $p : '[a-z]*' Result is 7 Match the lowercase letters at the beginning of p: expr $p : '\([a-z]*\)' Result is "version" Truncate $x if it contains five or more characters; if not, just print $x. (Logical OR uses the second argument when the first one is 0 or null; i.e., when the match fails.) expr $x : '\(.....\)' \| $x In a shell script, rename files to their first five letters: mv $x `expr $x : '\(.....\)' \| $x` (To avoid overwriting files with similar names, use mv -i.) false A null command that returns an unsuccessful (nonzero) exit status. Normally used in bash scripts. See also true. fdformat [options] device Low-level format of a floppy disk. The device for a standard format is usually /dev/fd0 or /dev/fd1. Do not verify format after completion. fdisk [options] [device] System administration command. Maintain disk partitions via a menu. fdisk displays information about disk partitions, creates and deletes disk partitions, and changes the active partition. It is possible to assign a different operating system to each of the four partitions, though only one partition is active at any given time. You can also divide a physical partition into several logical partitions. The minimum recommended size for a Linux system partition is 40MB. Normally, device will be /dev/hda, /dev/hdb, /dev/sda, /dev/sdb, /dev/hdc, /dev/hdd, and so on. See also cfdisk. List partition tables and exit.
Display the size of partition, unless it is a DOS partition. Toggle a bootable flag on current partition. Delete current partition. List all partition types. Main menu. Create a new partition; prompt for more information. Print a list of all partitions and information about each. Quit; do not save. Replace the type of the current partition. Modify the display/entry units, which must be cylinders or sectors. Verify: check for errors; display a summary of the number of unallocated sectors. Save changes; exit. fetchmail [options] [servers] Retrieve mail from remote mail servers and forward it to the local mail delivery system. Retrieve all messages from server, even ones that have already been seen but left on the server. The default is to only retrieve new messages. Specify the type of authentication. type may be: password, kerberos_v5, or kerberos. Authentication type is usually established by fetchmail by default, so this option isn't very useful. Set the maximum number of messages (n) accepted from a server per query. Set the maximum number of messages sent to an SMTP listener per connection. When this limit is reached, the connection will be broken and reestablished. The default of 0 means no limit. Check for mail on a single server without retrieving or deleting messages. Works with IMAP but not well with other protocols, if at all. Specify the domain name placed in RCPT TO lines sent to SMTP. The default is the local host. Change the header assumed to contain the mail's envelope address (usually "X-Envelope-to:") to header. Tell an IMAP server to EXPUNGE (i.e., purge messages marked for deletion) after n deletes. A setting of 0 indicates expunging only at the end of the session. Normally, an expunge occurs after each delete. For POP3 and IMAP servers, remove previously retrieved messages from the server before retrieving new ones. Specify a nondefault name for the fetchmail configuration file. Delete all retrieved messages from the mail server. Keep copies of all retrieved messages on the mail server. Set the maximum message size that will be retrieved from a server.
Messages larger than this size will be left on the server and marked unread. In daemon mode, monitor the specified TCP/IP interface for any activity besides itself, and skip the poll if there is no other activity. Useful for PPP connections that automatically time out with no activity. Pass mail directly to mail delivery agent, rather than send to port 25. The command is the path and options for the mailer, such as /usr/lib/sendmail -oem. A %T in the command will be replaced with the local delivery address, and an %F will be replaced with the message's From address. Do not expand local mail IDs to full addresses. This option will disable expected addressing and should only be used to find problems. Specify a port to connect to on the mail server. The default port numbers for supported protocols are usually sufficient. Specify the protocol to use when polling a mail server. proto can be: Post Office Protocol 2. Post Office Protocol 3. POP3 with MD5 authentication. POP3 with RPOP authentication. POP3 with Kerberos v4 authentication on port 1109. IMAP2bis, IMAP4, or IMAP4rev1. fetchmail autodetects their capabilities. IMAP4 or IMAP4rev1 with Kerberos v4 authentication. IMAP4 or IMAP4rev1 with GSSAPI authentication. ESMTP. Remove the prefix string, which is the local user's hostid, from the address in the envelope header (such as "Delivered-To:"). Retrieve the specified mail folder from the mail server. Suppress status messages during a fetch. For POP3, track the age of kept messages via unique ID listing. Specify the user name to use when logging into the mail server. Print the version information for fetchmail and display the options set for each mail server. Performs no fetch. Display all status messages during a fetch. Specify the SMTP error nnn to signal a spam block from the client. If nnn is -1, this option is disabled. fgrep [options] pattern [files] Search one or more files for lines that match a literal text string pattern. 
Exit status is 0 if any lines match, 1 if not, and 2 for errors. See grep for the list of available options. Also see egrep. Print lines in file that don't contain any spaces: fgrep -v ' ' file Print lines in file that contain the words in spell_list: fgrep -f spell_list file file [options] files Classify the named files according to the type of data they contain. file checks the magic file (usually /usr/share/magic) to identify some file types. Brief mode; do not prepend filenames to output lines. Check the format of the magic file (files argument is invalid with -c). Usually used with -m. Read the names of files to be checked from file. Follow symbolic links. By default, symbolic links are not followed. Use an alternate magic file instead of /usr/share/magic. Flush standard output after checking a file. Check files that are block or character special files in addition to checking ordinary files. Print the version. Attempt checking of compressed files. Many file types are understood. Output lists each filename, followed by a brief classification such as: ascii text c program text c-shell commands data empty iAPX 386 executable directory [nt]roff, tbl, or eqn input text shell commands symbolic link to ../usr/etc/arp List all files that are deemed to be troff/nroff input: file * | grep roff find [pathnames] [conditions] Search for files that meet the specified conditions in the named directory trees. Commonly used conditions include -print (which is the default if no other expression is given), -name and -type (for general use), -exec and -size (for advanced users), and -mtime and -user (for administrators). Conditions may be grouped by enclosing them in \( \) (escaped parentheses), negated with ! (use \! in the C shell), given as alternatives by separating them with -o, or repeated (adding restrictions to the match; usually only for -name, -type, -perm). Modification refers to editing of a file's contents. Change refers to modification, permission or ownership changes, and so on; therefore, for example, -ctime is more inclusive than -atime or -mtime. Find files that were last accessed more than n (+n), less than n (-n), or exactly n days ago.
Note that find changes the access time of directories supplied as pathnames. Find files that were changed more than n (+n), less than n (-n), or exactly n days ago. A change is anything that changes the directory entry for the file, such as a chmod. Descend the directory tree, skipping directories and working on actual files first (and then the parent directories). Useful when files reside in unwritable directories (e.g., when using find with cpio). Run the Linux command, from the starting directory, on each file matched by find (provided command executes successfully on that file; i.e., returns a 0 exit status). When command runs, the argument { } substitutes the current file. Follow the entire sequence with an escaped semicolon (\;). Follow symbolic links and track the directories visited (don't use this with -type l). Find files belonging to group gname. gname can be a group name or a group ID number. Find files whose inode number is n. Find files having n links. Search for files that reside only on the same filesystem as the starting pathname. Find files that have been modified more recently than file; similar to -mtime. Affected by -follow only if it occurs after -follow on the command line. Same as -exec but prompts user to respond with y before command is executed. Calculate times from the start of the day today, not 24 hours ago. Do not descend more than num levels of directories. Begin applying tests and actions only at levels deeper than num levels. Find files last accessed more than n (+n), less than n (-n), or exactly n minutes ago. Find files that were accessed after file was last modified. Affected by -follow when after -follow on the command line. Find files last changed more than n (+n), less than n (-n), or exactly n minutes ago. Find files that were changed after they were last modified. Affected by -follow when after -follow on the command line. Continue if file is empty. Applies to regular files and directories. Return false value for each file encountered. Match files only on type filesystems.
Acceptable types include minix, ext, ext2, xia, msdos, umsdos, vfat, proc, nfs, iso9660, hpfs, sysv, smb, and ncpfs. Find files with numeric group ID of num. A case-insensitive version of -lname. A case-insensitive version of -name. A case-insensitive version of -path. A case-insensitive version of -regex. Find files named pattern; pattern can include shell metacharacters and does not treat / or . specially. The match is case-insensitive. Find files last modified more than n (+n), less than n (-n), or exactly n minutes ago. The file's user ID does not correspond to any user. The file's group ID does not correspond to any group. Find files whose names match pattern. Expect full pathnames relative to the starting pathname (i.e., do not treat / or . specially). List all files (and subdirectories) in your home directory: find $HOME -print List all files named chapter1 in the /work directory: find /work -name chapter1 -print finger [options] users Display data about one or more users, including information listed in the files .plan and .project in each user's home directory. You can specify each user either as a login name (exact match) or as a first or last name (display information on all matching names). Networked environments recognize arguments of the form user@host and @host. Force long format (default): everything included by the -s option and home directory, home phone, login shell, mail status, .plan, .project, and .forward. Suppress matching of users' "real" names. Omit .plan and .project files from display. Show short format: login name, real name, terminal name, write status, idle time, office location, and office phone number. in.fingerd [option] TCP/IP command. Remote user information server. fingerd provides a network interface to the finger program. It listens for TCP connections on the finger port and, for each connection, reads a single input line, passes the line to finger, and copies the output of finger to the user on the client machine.
fingerd is started by inetd and must have an entry in inetd's configuration file, /etc/inetd.conf. Include additional information, such as uptime and the name of the operating system. flex [options] [file] Generate a lexical analysis program (named lex.yy.c) based on the regular expressions and C statements contained in one or more input files. See also lex & yacc, by John Levine, Tony Mason, and Doug Brown. Generate backup information to lex.backup. Debug mode. Use faster compilation (limited to small programs). Help summary. Scan case-insensitively. Maximum lex compatibility. Write output to file instead of lex.yy.c. Print performance report. Exit if the scanner encounters input that does not match any of its rules. Print to standard out. (By default, flex prints to lex.yy.c.) Print a summary of statistics. Generate batch (noninteractive) scanner. Use the fast scanner table representation. Generate an interactive scanner (default). Suppress #line directives in lex.yy.c. Change default yy prefix to prefix for all globally visible variable and function names. Generate a 7-bit scanner. Generate an 8-bit scanner (default). Generate a C++ scanner class. Compress scanner tables but do not use equivalence classes. Align tables for memory access and computation. This creates larger tables but gives faster performance. Construct equivalence classes. This creates smaller tables and sacrifices little performance (default). Generate full scanner tables, not compressed. Generate faster scanner tables, like -F. Construct metaequivalence classes (default). Bypass use of the standard I/O library. Instead use read() system calls. fmt [options] [files] Fill and join text, producing lines of up to a specified width (75 characters by default). Crown margin mode. Do not change each paragraph's first two lines' indentation. Use the second line's indentation as the default for subsequent lines. Format only lines beginning with prefix. Suppress line-joining. Tagged paragraph mode. Same as crown mode when the indentation of the first and second lines differs. If the indentation is the same, treat the first line as its own separate paragraph. Print exactly one space between words and two between sentences. Set output width to width. The default is 75.
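A quick way to see fmt's filling behavior is to pipe text through it; this is a minimal sketch (the sample sentence is made up for illustration):

```shell
# Reflow ragged input so that no output line exceeds 30 columns (fmt -w 30).
printf 'the quick brown fox jumps over the lazy dog again and again\n' |
    fmt -w 30
```

Unlike fold, fmt breaks only at word boundaries, so words are never split mid-line.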
fold [option] [files] Break the lines of the named files so that they are no wider than the specified width (default is 80). fold breaks lines exactly at the specified width, even in the middle of a word. Reads from standard input when given - as a file. Count bytes, not columns (i.e., consider tabs, backspaces, and carriage returns to be one column). Break at spaces only, if possible. Set the maximum line width to width. Default is 80. formail [options] Filter standard input into mailbox format. they are empty. For use only with -r. Keep the body as well as the fields specified by -r. Require at least. free [options] Display statistics about memory usage: total free, used, physical, swap, shared, and buffers used by the kernel. Calculate memory in bytes. Default. Calculate memory in kilobytes. Calculate memory in megabytes. Do not display "buffer adjusted" line. The -o switch disables the display of the "-/+ buffers" line. Check memory usage every time seconds. Display all totals on one line at the bottom of output. Display version information. fsck [options] [filesystem] ... System administration command. Call the filesystem checker for the appropriate system type, to check and repair filesystems. If a filesystem is consistent, the number of files, number of blocks used, and number of blocks free are reported. If a filesystem is inconsistent, fsck prompts before each correction is attempted. fsck's exit code can be interpreted as the sum of all of those conditions that apply: Errors were found and corrected. Reboot suggested. Errors were found but not corrected. fsck encountered an operational error. fsck was called incorrectly. A shared library error was detected. Pass all subsequent options to filesystem-specific checker. All options that fsck doesn't recognize will also be passed. Interactive mode; prompt before making any repairs. Serial mode. Specify the filesystem type. Do not check filesystems of any other type. Check all filesystems listed in /etc/fstab.
Suppress normal execution; just display what would be done. Meaningful only with -A: check all filesystems listed in /etc/fstab except the root filesystem. Suppress printing of title. fsck.minix [options] device System administration command. Similar to fsck, but specifically intended for Linux MINIX filesystems. Automatic mode; repair without prompting. Force checking, even if kernel has already marked the filesystem as valid. fsck.minix will normally exit without checking if the system appears to be clean. List all filenames. Enable MINIX-like "mode not cleared" warnings. Display information about superblocks. ftp [options] [hostname] Transfer files to and from remote network site hostname; ftp is the user interface to the Internet File Transfer Protocol. Enable debugging. Disable filename globbing. Turn off interactive prompting. No autologin upon initial connection. Verbose. Show all responses from remote server. Invoke an interactive shell on the local machine. If arguments are given, the first is taken as a command to execute directly, with the rest of the arguments as that command's arguments. Execute the macro macro-name that was defined with the macdef command. Arguments are passed to the macro unglobbed. Supply a supplemental password that will be required by a remote system for access to resources once a login has been successfully completed. If no argument is given, the user will be prompted for an account password in a nonechoing mode. Append a local file to a file on the remote machine. If remote-file is not given, the local filename is used after being altered by any ntrans or nmap setting. File transfer uses the current settings for type, format, mode, and structure. Set the file transfer type to network ASCII (default). Sound a bell after each file transfer command is completed. Set file transfer type to support binary image transfer. Terminate FTP session and then exit ftp. Toggle remote computer filename case mapping during mget. The default is off. When case is on, files on the remote machine with all-uppercase names will be copied to the local machine with all-lowercase names.
Change working directory on remote machine to remote-directory. Change working directory of remote machine to its parent directory. Change file permissions of remote-file. If options are omitted, the command prompts for them. Terminate FTP session and return to command interpreter. Toggle carriage return stripping during ASCII-type file retrieval. Delete file remote-file on remote machine. Toggle debugging mode. If debug-value is specified, it is used to set the debugging level. Synonym for close. Set the file transfer form to format. Default format is file. Toggle filename expansion for mdelete, mget, and mput. If globbing is turned off, the filename arguments are taken literally and not expanded. Toggle hash-sign (#) printing for each data block transferred. Print help information for command. With no argument, ftp prints a list of commands. Get/set idle timer on remote machine. seconds specifies the length of the idle timer; if omitted, the current idle timer is displayed. Same as binary. Change working directory on local machine. If directory is not specified, the user's home directory is used. Print listing of contents of directory on remote machine, in a format chosen by the remote machine. If remote-directory is not specified, current working directory is used. Define a macro. Subsequent lines are stored as the macro macro-name; a null line terminates macro input mode. When $i is included in the macro, loop through arguments, substituting the current argument for $i on each pass. Escape $ with \. Delete the remote-files on the remote machine. Like dir, except multiple remote files may be specified. Expand the wildcard expression remote-files on the remote machine and do a get for each filename thus produced. Make a directory on the remote machine. Like nlist, except multiple remote files may be specified, and the local file must be specified. Set file transfer mode to mode-name. Default mode is stream mode.
Show last modification time of the file on the remote machine. Expand wildcards in local-files given as arguments and do a put for each file in the resulting list. Get file if remote file is newer than local file. Print list of files of a directory on the remote machine to local-file (or the screen if local-file is not specified). If remote-directory is unspecified, the current working directory is used. Establish a connection to the specified host FTP server. An optional port number may be supplied, in which case ftp will attempt to contact an FTP server at that port. Toggle interactive prompting. Execute an FTP command on a secondary control connection (i.e., send commands to two separate remote hosts simultaneously). Store a local file on the remote machine. If remote-file is left unspecified, the local filename is used after processing according to any ntrans or nmap settings in naming the remote file. File transfer uses the current settings for type, file, structure, and transfer mode. Print name of the current working directory on the remote machine. Synonym for bye. Send the arguments specified, verbatim, to the remote FTP server. Synonym for get. Retrieve a file (like get), but restart at the end of local-file. Useful for restarting a dropped transfer. Request help from the remote FTP server. If command-name is specified, remote help for that command is returned. Show status of the remote machine, or, if filename is specified, filename on remote machine. Rename file from on remote machine to to. Clear reply queue. Restart the transfer of a file from a particular byte count. Delete a directory on the remote machine. Toggle storing of files on the local system with unique filenames. When this option is on, rename files as .1 or .2, and so on, as appropriate, to preserve unique filenames, and report each such action. Default value is off. Synonym for put. Toggle the use of PORT commands. Get/set site-specific information from/on remote machine.
Return size of filename on remote machine. Show current status of ftp. Set the file transfer structure to struct-name. By default, stream structure is used. Toggle storing of files on remote machine under unique filenames. Show type of operating system running on remote machine. Set file transfer type to that needed to talk to TENEX machines. Toggle packet tracing. Set file transfer type to type-name. If no type is specified, the current type is printed. The default type is network ASCII. Set user file-creation mode mask on the remote site. If mask is omitted, the current value of the mask is printed. Identify yourself to the remote FTP server. ftp will prompt the user for the password, if not specified and the server requires it, and the account field. Toggle verbose mode. Same as help. in.ftpd [options] TCP/IP command. Internet File Transfer Protocol server. The server uses the TCP protocol and listens at the port specified in the ftp service specification. ftpd is started by inetd and must have an entry in inetd's configuration file, /etc/inetd.conf. Write debugging information to the syslog. Log each FTP session in the syslog. Set maximum timeout period in seconds. Default limit is 15 minutes. Set timeout period to timeout seconds. fuser [options] [files | filesystems] Identify processes that are using a file or filesystem. fuser outputs the process IDs of the processes that are using the files or local filesystems. Each process ID is followed by a letter code: c if the process is using the file as its current directory, e if as an executable being run, f if as an open file, and r if as its root directory. Silent. User login name, in parentheses, also follows process ID. Display version information. g++ [options] files Invoke gcc with the options necessary to make it recognize C++. g++ recognizes all the file extensions gcc does, in addition to C++ source files (.C, .cc, or .cxx files) and C++ preprocessed files (.ii files). See also gcc. gated [options] TCP/IP command. Gateway routing daemon, which handles multiple routing protocols. Parse configuration file for syntax errors, then exit gated, leaving a dump file in /usr/tmp/gated_dump.
Use alternate configuration file, config_file. Default is /etc/gated.conf. Do not modify kernel's routing table. Start gated with the specified tracing options enabled. If no flags are specified, assume general. The trace flags are: Management of policy blocks. Includes normal, policy, route, state, task, and timer. Includes normal and route. The kernel interface list. Normal protocol instances. Lexical analyzer and parser. Instances in which policy is applied to imported and exported routes. Any changes to routing table. State machine transitions. Symbols read from kernel -- note that they are read before the configuration file is parsed, so this option must be specified on the command line. System tasks and interfaces. Timer usage. Parse configuration file for errors and set exit code to indicate if there were any (1) or not (0), then exit. Do not daemonize. gawk [options] `script' [var=value...] [files] gawk [options] -f scriptfile [var=value...] [files] The GNU version of awk, a program that does pattern matching, record processing, and other forms of text manipulation. For more information, see Chapter 13, "The gawk Scripting Language". gcc [options] files Compile one or more C source files (file.c), assembler source files (file.s), or preprocessed C source files (file.i). If the file suffix is not recognizable, assume that the file is an object file or library. gcc automatically invokes the link editor ld (unless -c, -S, or -E is supplied). In some cases, gcc generates an object file having a .o suffix and a corresponding root name. By default, output is placed in a.out. gcc accepts many system-specific options not covered here. Note: gcc is the GNU form of cc; on most Linux systems, the command cc will invoke gcc. The command g++ will invoke gcc with the appropriate options for interpreting C++. Provide profile information for basic blocks. Enforce full ANSI conformance. Compile for use on machine type.
Create linkable object file for each source file, but do not call linker. Print #defines. Suppress normal output. Print series of #defines that are in effect at the end of preprocessing. Print #defines with macro names only, not arguments or values. Do not recognize asm, inline, or typeof as keywords. Implied by -ansi. Do not recognize built-in functions unless they begin with two underscores. Do not recognize classof, headof, signature, sigof, or typeof as keywords. Do not respond to #ident commands. Set default control of bitfields to signed or unsigned if not explicitly declared. Cause the type char to be signed. Check for syntax errors. Do not attempt to actually compile. Cause the type char to be unsigned. Include debugging information for use with gdb. Provide level amount of debugging information. level must be 1, 2, or 3, with 1 providing the least amount of information. The default is 2. Include dir in the list of directories to search when an include file is not found in the normal include path. Process file before proceeding to the normal input file. Process the macros in file before proceeding to the normal input file. Add dir to the list of directories to be searched when a system file cannot be found in the main include path. Append dir to the list of directories to be searched when a header file cannot be found in the main include path. If -iprefix has been set, prepend that prefix to the directory's name. Link to lib. Force linker to ignore standard system startup files. Suppress linking to standard library files. Specify output file as file. Default is a.out. Provide profile information for use with prof. Err in every case in which -pedantic would have produced a warning. Provide profile information for use with gprof. Transfer information between stages of compiler by pipes instead of temporary files. Remove all symbol table and relocation information from the executable. Save temporary files in the current directory when compiling. 
Suppress linking to shared libraries. Attempt to behave like a traditional C compiler. Cause the preprocessor to attempt to behave like a traditional C preprocessor. Include trigraph support. Force the linker to search libraries for a definition of symbol and to link to them, if found. Define only those constants required by the language standard, not system-specific constants like unix. Verbose mode. Display commands as they are executed, gcc version number, and preprocessor version number. Suppress warnings. Expect input file to be written in language, which may be c, objective-c, c-header, c++, cpp-output, assembler, or assembler-with-cpp. If none is specified as language, guess the language by filename extension. If the preprocessor encounters a conditional such as #if question, assert answer in response. To turn off standard assertions, use -A-. Specify the path directory in which the compiler files are located. Retain comments during preprocessing. Meaningful only with -E. Define name with value def as if by a #define. If no =def is given, name is defined with value 1. -D has lower precedence than -U. Preprocess the source files, but do not compile. Print result to standard output. Include dir in list of directories to search for include files. If dir is -, search those directories that were specified by -I before the -I- only when #include "file" is specified, not #include <file>. Search dir in addition to standard directories. Instead of compiling, print a rule suitable for inclusion in a makefile that describes dependencies of the source file based on its #include directives. Implies -E. Similar to -M, but sends dependency information to files ending in .d in addition to ordinary compilation. Used with -M or -MM. Suppress error messages if an included file does not exist; useful if the included file is automatically generated by a build. Similar to -MD, but record only user header file information, not system header file information. 
Similar to -M, but limit the rule to non-standard #include files; that is, only files declared through #include "file" and not those declared through #include <file>. Optimize. level should be 1, 2, 3, or 0. The default is 1. 0 turns off optimization; 3 optimizes the most. Preprocess input without producing line-control information used by next pass of C compiler. Meaningful only with -E. Compile source files into assembler code, but do not assemble. Remove any initial definition of name, where name is a reserved symbol predefined by the preprocessor or a name defined on a -D option. Names predefined by cpp are unix and i386. Attempt to run gcc version version. Warn more verbosely than normal. Invoke linker with option, which may be a comma-separated list. Call assembler with option, which may be a comma-separated list. Warn if any functions that return structures or unions are defined or called. Enable -W, -Wchar-subscripts, -Wcomment, -Wformat, -Wimplicit, -Wparentheses, -Wreturn-type, -Wswitch, -Wtemplate-debugging, -Wtrigraphs, -Wuninitialized, and -Wunused. Warn in particular cases of type conversions. Exit at the first error. Warn about inappropriately formatted printfs and scanfs. Warn when encountering implicit function or parameter declarations. Warn about illegal inline functions. Warn if a global function is defined without a previous declaration. Warn when encountering global function definitions without previous prototype declarations. Warn if an extern declaration is encountered within a function. Don't warn about use of #import. Pass options to the preprocessor. Multiple options are separated by commas. Not a warning parameter. Enable more verbose warnings about omitted parentheses. Warn when encountering code that attempts to determine the size of a function or void. Warn if anything is declared more than once in the same scope. Warn about functions defined without return types or with improper return types. Warn when a local variable shadows another local variable.
Insist that argument types be specified in function declarations and definitions. Warn about switches that skip the index for one of their enumerated types. Warn if debugging is not available for C++ templates. Warn when encountering code that produces different results in ANSI C and traditional C. Warn when encountering trigraphs. Warn when encountering uninitialized automatic variables. Warn about unused variables and functions. Pass an option to the linker. A linker option with an argument requires two -Xs, the first specifying the option and the second specifying the argument. The #pragma implementation directive causes full output to be generated for a header file with the same base name as the file containing the pragma directive. This information will be globally visible. Normally the specified header file contains a #pragma interface directive. gdb [options] [program [core|pid]] GDB (GNU DeBugger) allows you to step through C, C++, and Modula-2 programs in order to find the point at which they break. The program to be debugged is normally specified on the command line; you can also specify a core or, if you want to investigate a running program, a process ID. Consult file for symbol table. With -e, also uses file as the executable. Use file as executable, to be read in conjunction with source code. May be used in conjunction with -s to read symbol table from the executable. Consult file for information provided by a core dump. Read gdb commands from file. Include directory in path that is searched for source files. Ignore .gdbinit file. Suppress introductory and copyright messages. Exit after executing all the commands specified in .gdbinit and -x files. Print no startup messages. Use directory as gdb's working directory. Show full filename and line number for each stack frame. Set line speed of serial device used by GDB to bps. Set standard in and standard out to device.
These are just some of the more common gdb commands; there are too many commands to list all of them here: Print the current location within the program and a stack trace showing how the current location was reached. (where does the same thing). Set a breakpoint in the program. Change the current working directory. Delete the breakpoint where you just stopped. List commands to be executed when breakpoint is hit. Continue execution from a breakpoint. Delete a breakpoint or a watchpoint; also used in conjunction with other commands. Cause variables or expressions to be displayed when program stops. Move down one stack frame to make another function the current one. Select a frame for the next continue command. Show a variety of information about the program. For instance, info breakpoints shows all outstanding breakpoints and watchpoints. Start execution at another point in the source file. Abort the process running under gdb's control. List the contents of the source file corresponding to the program being executed. Execute the next source line, executing a function in its entirety. Print the value of a variable or expression. Show the current working directory. Show the contents of a datatype, such as a structure or C++ class. Exit gdb. Search backward for a regular expression in the source file. Execute the program. Assign a value to a variable. Send a signal to the running process. Execute the next source line, stepping into a function if necessary. Reverse the effect of the display command; keep expressions from being displayed. Finish the current loop. Move up one stack frame to make another function the current one. Set a watchpoint (i.e., a data breakpoint) in the program. Print the type of a variable or function. gdc [options] command TCP/IP command. Administer gated. Various commands start and stop the daemon, send signals to it, maintain the configuration files, and manage state and core dumps. Specify maximum core dump size.
Specify maximum file dump size. Specify maximum data segment size. Suppress editing of the kernel forwarding table. Quiet mode: suppress warnings and log errors to syslogd instead of standard error. Specify maximum stack size. Wait seconds seconds (default is 10) for gated to complete specified operations at start and stop time. Restore /etc/gated.conf from /etc/gated.conf-, whether or not the latter exists. Restore /etc/gated.conf from /etc/gated.conf-, assuming the latter exists. Report any syntax errors in /etc/gated.conf. Report any syntax errors in /etc/gated.conf+. Force gated to core dump and exit. Create an empty /etc/gated.conf+ if one does not already exist, and set it to mode 664, owner root, group gdmaint. Force gated to dump to /usr/tmp/gated_dump and then continue normal operation. Reload interface configuration. Terminate immediately (ungracefully). Set all configuration files to mode 664, owner root, group gdmaint. Make sure that /etc/gated.conf+ exists and move it to /etc/gated.conf. Save the old /etc/gated.conf as /etc/gated.conf-. Reload configuration file. Stop and restart gated. Remove any gated core files. Remove any gated state dump files. Remove any gated files that report on parse errors. These are generated by the checkconf and checknew commands. Exit with zero status if gated is running and nonzero if it is not. Start gated, unless it is already running, in which case return an error. Stop gated as gracefully as possible. Terminate gracefully. Toggle tracing. The test configuration file. Once you're satisfied that it works, you should run gated newconf to install it as /etc/gated.conf. A backup of the old configuration file. A backup of the backup of the old configuration file. The actual configuration file. gated's process ID. The state dump file. A list of the parse errors generated by reading the configuration file. getkeycodes Print the kernel's scancode-to-keycode mapping table. 
getty [options] port [speed [term [lined]]] System administration command. Set terminal type, modes, speed, and line discipline. Linux systems may use agetty instead, which uses a different syntax. getty is invoked by init. It is the second process in the series init-getty-login-shell, which ultimately connects a user with the Linux system. getty reads the user's login name and invokes the login command with the user's name as an argument. While reading the name, getty attempts to adapt the system to the speed and type of device being used. You must specify a port argument, which getty will use to attach itself to the device /dev/port. getty will then scan the defaults file, usually /etc/default/getty, for runtime values and parameters. These may also be specified, for the most part, on the command line, but the values in the defaults file take precedence. The speed argument is used to point to an entry in the file /etc/gettydefs, which contains the initial baud rate, tty settings, and login prompt and final speed and settings for the connection. The first entry is the default in /etc/gettydefs. term specifies the type of terminal, with lined the optional line discipline to use. Check the gettydefs file. file is the name of the gettydefs file. Produce the file's values and report parsing errors to standard output. Use a different default file. Do not force a hangup on the port when initializing. Wait for single character from port, then wait delay seconds before proceeding. If no username is accepted within timeout seconds, close connection. Wait for string characters from port before proceeding. gprof [options] [object_file] Display the profile data for an object file. The file's symbol table is compared with the call graph profile file gmon.out (previously created by compiling with gcc -pg). Do not display entries for routine and its descendants. Print only routine, but include time spent in all routines. Remove arcs between the routines from and to.
Summarize profile information in the file gmon.sum. Include zero-usage calls. Do not display entries for routine and its descendants or include time spent on them in calculations for total time. Print only information about routine. Do not include time spent in other routines. grep [options] pattern [files] Search one or more files for lines that match a regular expression pattern. Regular expressions are described in Chapter 9, "Pattern Matching". Treat directories like ordinary files (default). Skip directories. Recursively read all files under each directory. Same as -r. Print lines and their line numbers. Suppress normal output in favor of quiet mode. List files that contain no matching lines. List the number of users who use tcsh: grep -c /bin/tcsh /etc/passwd List header files that have at least one #include directive: grep -l '^#include' /usr/include/* List files that don't contain pattern: grep -c pattern files | grep :0 groff [options] [files] troff [options] [files] Frontend to the groff document-formatting system, which normally runs troff along with a postprocessor appropriate for the selected output device. Options without arguments can be grouped after a single dash (-). A filename of - denotes standard input. Generate an ASCII approximation of the typeset output. Print a backtrace. Enable compatibility mode. Define the character c or string name to be the string s. Preprocess with eqn. Don't print any error messages. Use fam as the default font family. Search dir for subdirectories with DESC and font files before the default /usr/lib/groff/font. Print a help message. Read standard input after all files have been processed. Send the output to a printer (as specified by the print command in the device description file). Pass arg to the spooler. Each argument should be passed with a separate -L option. Read the macro file tmac.name. Search directory dir for macro files before the default directory /usr/lib/groff/tmac. Set the first page number to num.
Don't allow newlines with eqn delimiters; equivalent to eqn's -N option. Output only pages specified in list, which is a comma-separated list of page ranges. Preprocess with pic. Pass arg to the postprocessor. Each argument should be passed with a separate -P option. Set the number register c or name to n. c is a single character and n is any troff numeric expression. Preprocess with refer. Preprocess with soelim. Use safer mode (i.e., pass the -S option to pic and use the -msafer macros with troff). Preprocess with tbl. Prepare output for device dev; the default is ps. Make programs run by groff print out their version number. Print the pipeline on stdout instead of executing it. Enable warning name. You can specify multiple -w options. See the troff manpage for a list of warnings. Disable warning name. You can specify multiple -W options. See the troff manpage for a list of warnings. Suppress troff output (except error messages). Do not postprocess troff output. Normally groff automatically runs the appropriate postprocessor. ascii: Typewriter-like device. dvi: TeX dvi format. latin1: Typewriter-like devices using the ISO Latin-1 character set. ps: PostScript. X75: 75-dpi X11 previewer. X100: 100-dpi X11 previewer. lj4: HP LaserJet4-compatible (or other PCL5-compatible) printer. If set to X, groff will run Xtroff instead of troff. Colon-separated list of directories in which to search for the devname directory. Colon-separated list of directories in which to search for the macro files. If set, temporary files will be created in this directory; otherwise, they will be created in TMPDIR (if set) or /tmp (if TMPDIR is not set). Default device. Search path for commands that groff executes. groupadd [options] group System administration command. Create new group account group. Assign numerical group ID. (By default, the first available number above 500 is used.) The value must be unique unless the -o option is used. Accept a nonunique gid with the -g option. groupdel group System administration command.
Remove group from system account files. You may still need to find and change permissions on files that belong to the removed group. groupmod [options] group System administration command. Modify group information for group. Change the numerical value of the group ID. Any files that have the old gid will have to be changed manually. The new gid must be unique unless the -o option is used. Change the group name to name. Override. Accept a nonunique gid. groups [options] [users] Show the groups that each user belongs to (default user is the owner of the current process). Groups are listed in /etc/passwd and /etc/group. Print help message. Print version information. grpck [option] [files] System administration command. Remove corrupt or duplicate entries in the /etc/group and /etc/gshadow files. Generate warnings for other errors found. grpck will prompt for a "yes" or "no" before deleting entries. If the user replies "no," the program will exit. If run in a noninteractive mode, the reply to all prompts is "no." Alternate group and gshadow files can be checked. If other errors are found, the user will be encouraged to run the groupmod command. Noninteractive mode. Success. Syntax error. One or more bad group entries found. Could not open group files. Could not lock group files. Could not write group files. grpconv grpunconv System administration commands. Convert between standard and shadow group files: grpconv creates /etc/gshadow from /etc/group, moving group passwords into it, while grpunconv merges the passwords back into /etc/group and removes /etc/gshadow. gs [options] [files] Ghostscript, an interpreter for the PostScript language. Adds the designated list of directories at the head of the search path for library files. Define a name in systemdict with a given string as value. Causes individual character outlines to be loaded from the disk the first time they are encountered. Disables the bind operator. Useful only for debugging. Disables character caching. Useful only for debugging. Suppresses the normal initialization of the output device. May be useful when debugging. Disables the prompt and pause at the end of each page. Disables the use of fonts supplied by the underlying platform (e.g., the X Window System).
Disables the deletefile and renamefile operators and the ability to open files in any mode other than read-only. Leaves systemdict writable. Selects an alternate initial output device. Selects an alternate output file (or pipe) for the initial output device. gunzip [options] [files] Uncompress files compressed by gzip. See gzip for a list of options. gzexe [option] [files] Compress executables. When run, these files automatically uncompress, thus trading time for space. gzexe creates backup files (filename~), which should be removed after testing the original. Decompress files. gzip [options] [files] zcat [options] [files] Compress specified files (or, with no files specified, data read from standard input). halt [options] System administration command. Insert a note in the file /var/log/wtmp; if the system is in runlevel 0 or 6, stop all processes; otherwise, call shutdown -nf. Suppress writing to /var/log/wtmp. Call halt even when shutdown -nf would normally be called (i.e., force a call to halt, even when not in runlevel 0 or 6). Suppress normal call to sync. Suppress normal execution; simply write to /var/log/wtmp. head [options] [files] Print the first few lines (default is 10) of one or more files. If files is missing or -, read from standard input. With more than one file, print a header for each file. Print first num bytes or, if num is followed by b, k, or m, first num 512-byte blocks, 1-kilobyte blocks, or 1-megabyte blocks. Display help and then exit. Print first num lines. Default is 10. Quiet mode; never print headers giving filenames. Print filename headers, even for only one file. Output version information and then exit. Display the first 20 lines of phone_list: head -20 phone_list Display the first 10 phone numbers having a 202 area code: grep '(202)' phone_list | head host [options] host [server] TCP/IP command. Print information about specified hosts or zones in DNS. Same as -t ANY. Search for resource records of the specified class (IN, CSNET, CH, CHAOS, HS, HESIOD, ANY, or *). Default is IN. Debugging mode. -dd is a more verbose version. Do not print information about domains outside of specified zone.
For hostname queries, do not print "additional information" or "authoritative nameserver." Output to file as well as standard out. Given an IP address, return the corresponding in-addr.arpa address, class (always PTR), and hostname. List all machines in zone. Print only MR, MG, and MB records; recursively expand MR (renamed mail box) and MG (mail group) records to MB (mail box) records. Do not print output to standard out. For use with -l. Query only the zone's primary nameserver (or server) for zone transfers, instead of those authoritative servers that respond. Useful for testing unregistered zones. Quiet. Suppress warning, but not error, messages. Do not ask contacted server to query other servers, but require only the information that it has cached. Look for type entries in the resource record. type may be A, NS, PTR, ANY, or * (all). Use TCP, not UDP. Verbose. Include all fields from resource record, even time-to-live and class, as well as "additional information" and "authoritative nameservers" (provided by the remote nameserver). Very verbose. Include information about host's defaults. Never give up on queried server. Allow multiple hosts or zones to be specified. If a server is also specified, the argument must be preceded by -X. For hostnames, look up the associated IP address, and then reverse look up the hostname, to see if a match occurs. For IP addresses, look up the associated hostname, and determine whether the host recognizes that address as its own. For zones, check IP addresses for all hosts. Exit silently if no incongruities are discovered. Similar to -l, but also check to see if the zone's name servers are really authoritative. The zone's SOA (start of authority) records specify authoritative name servers (in NS fields). Those servers are queried; if they do not have SOA records, host reports a lame delegation. Other checks are made as well. Similar to -H but include the names of hosts with more than one address per defined name. 
Similar to -H but do not treat extra-zone hosts as errors. Extra-zone hosts are hosts in an undefined subdomain. Redirect standard out to file, and print extra resource record output only on standard out. Similar to -H but include the names of gateway hosts. Print the number of unique hosts within zone. Do not include aliases. Also list all errors found (extra-zone names, duplicate hosts). Do not print warnings about domain names containing illegal characters chars, such as _. For use with -l. List all delegated zones within this zone, up to level deep, recursively. For use with -l. servers should be a comma-separated list. Specify preferred hosts for secondary servers to use when copying over zone data. Highest priority is given to those servers that match the most domain components in a given part of servers. Treat non-fully-qualified hostnames as BIND does, searching each component of the local domain. For use with -l. Print all hosts within the zone to standard out. Do not print hosts within subzones. Include class and IP address. Print warning messages (illegal names, lame delegations, missing records, etc.) to standard error. Print time-to-live values (how long information about each host will remain cached before the nameserver refreshes it). Specify a server to query, and allow multiple hosts or zones to be specified. When printing resource records, include trailing dot in domain names, and print time-to-live value and class name. hostid Print the ID number in hexadecimal of the current host. hostname [option] [nameofhost] Set or print name of current host system. A privileged user can set the hostname with the nameofhost argument. Display the alias name of the host (if used). Print DNS domain name. Print fully qualified domain name. Consult file for hostname. Print a help message and then exit. Display the IP address(es) of the host. Trim domain information from the printed name. Print version information and then exit. Display the NIS domain name.
A privileged user can set a new NIS domain name with nameofhost. hwclock [options] System administration command. Read or set the hardware clock. You may specify only one of the following options: Adjust the hardware clock based on information in /etc/adjtime and set the system clock to the new time. Adjust the hardware clock based on information in /etc/adjtime. Meaningful only with the --set option. date is a string appropriate for use with the date command. Print information about what hwclock is doing. Print the current time stored in the hardware clock. Set the system time in accordance with the hardware clock. Set the hardware clock according to the time given in the --date parameter. Do not actually change anything. This is good for checking syntax. The hardware clock is stored in Universal Coordinated Time. Set the hardware clock in accordance with the system time. icmpinfo [options] TCP/IP command. Intercept and interpret ICMP packets. Print the address and name of the message's sender, the source port, the destination port, the sequence, and the packet size. By default, provide information only about packets that are behaving oddly. Kill the syslogd process begun by -l. Record via syslogd. Only a privileged user may use this option. Use IP addresses instead of hostnames. Suppress decoding of port number: do not attempt to guess the name of the service that is listening at that port. Include IP address of interface that received the packet, in case there are several interfaces on the host machine. Verbose. Include information about normal ICMP packets. You may also specify -vv and -vvv for extra verbosity. id [options] [username] Display information about yourself or another user: user ID, group ID, effective user ID and group ID if relevant, and additional group IDs. Print group ID only. Print supplementary groups only. With -u, -g, or -G, print user or group name, not number. With -u, -g, or -G, print real, not effective, user ID or group ID. Print user ID only.
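The id selector options compose with the name/number switches; a short sketch (the exact values vary by account):

```shell
id          # full line: uid, gid, and supplementary groups
id -un      # effective username (-n prints names instead of numbers)
id -u       # numeric user ID only
id -G       # all supplementary group IDs, space-separated
```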
in.identd [options] [kernelfile [kmemfile]] TCP/IP command. Provide the name of the user whose process is running a specified TCP/IP connection. You may specify the kernel and its memory space. Bind to ip_address. Useful only with -b. By default, bind to the INADDR_ANY address. Run standalone; not for use with inetd. Allow debugging requests. Attempt to run in the group gid. Useful only with -b. Run as a daemon, one process per request. Log via syslogd. Allow multiple requests per session. Return user IDs instead of usernames. Do not provide a user's name or user ID if the file .noident exists in the user's home directory. When queried for the type of operating system, always return OTHER. Listen at port instead of the default, port 113. Exit if no new requests have been received before seconds seconds have passed. Note that, with -i or -w, the next new request will result in identd being restarted. Default is infinity (never exit). Attempt to run as uid. Useful only with -b. Run as a daemon, one process for all requests. ifconfig [interface] [address_family parameters addresses] TCP/IP command. Assign an address to a network interface and/or configure network interface parameters. interface is a string of the form name unit, for example, en0. The following parameters may be set with ifconfig: Enable/disable sending of incoming frames to the kernel's network layer. Enable/disable use of the Address Resolution Protocol in mapping between network-level addresses and link-level addresses. (inet only.) Specify address to use to represent broadcasts to the network. Default is the address with a host part of all 1s (i.e., x.y.z.255 for a class C network). Enable/disable driver-dependent debugging code. Specify the address of the correspondent on the other end of a point-to-point link. Mark an interface "down" (unresponsive). Set the interface's hardware class and address. class may be ether (Ethernet), ax25 (AX.25 Packet Radio), or ARCnet. Set the device's interrupt line. Set routing metric of the interface to n. Default is 0. Set the interface's Maximum Transfer Unit (MTU). Set the multicast flag.
Enable/disable point-to-point interfacing, so that the connection between the two machines is dedicated. Mark an interface "up" (ready to send and receive). Request/disable use of a "trailer" link-level encapsulation when sending. Either a hostname present in the hostname database (/etc/hosts), or an Internet address expressed in the Internet standard dot notation. imake options C preprocessor (cpp) interface to the make utility. imake (for include make) solves the portability problem of make by allowing machine dependencies to be kept in a central set of configuration files, separate from the descriptions of the various items to be built. The targets are contained in the Imakefile, a machine-independent description of the targets to be built, written as cpp macros. imake uses cpp to process the configuration files and the Imakefile, and to generate machine-specific Makefiles, which can then be used by make. One of the configuration files is a template file, a master file for imake. This template file (default is Imake.tmpl) #includes the other configuration files that contain machine dependencies such as variable assignments, site definitions, and cpp macros, and directs the order in which the files are processed. Each file affects the interpretation of later files and sections of Imake.tmpl. Comments may be included in imake configuration files, but the initial # needs to be preceded with an empty C comment: /**/# For more information, see cpp and make. Also check out the Nutshell Handbook Software Portability with imake, by Paul DuBois. Set directory-specific variables. This option is passed directly to cpp. Execute the generated Makefile. Default is to leave this to the user. Name of per-directory input file. Default is Imakefile. Directory in which imake template and configuration files may be found. This option is passed directly to cpp. Name of make description file to be generated. If filename is -, the output is written to stdout.
The default is to generate, but not execute, a Makefile. Name of master template file used by cpp. This file is usually located in the directory specified with the -I option. The default file is Imake.tmpl. Print the cpp command line used to generate the Makefile. Following is a list of tools used with imake: Create header file dependencies in Makefiles. makedepend reads the named input source files in sequence and parses them to process #include, #define, #undef, #ifdef, #ifndef, #endif, #if, and #else directives so it can tell which #include directives would be used in a compilation. makedepend determines the dependencies and writes them to the Makefile. make then knows which object files must be recompiled when a dependency has changed. makedepend has the following options: Ignore any unrecognized options following a double hyphen. A second double hyphen terminates this action. Recognized options between the hyphens are processed normally. Append dependencies to any existing ones instead of replacing existing ones. Write dependencies to filename instead of to Makefile. Print a warning when encountering a multiple inclusion. Use string as delimiter in file, instead of # DO NOT DELETE THIS LINE -- make depend depends on it. Verbose. List all files included by main source file. Define name with the given value (first form) or with value 1 (second form). Add directory dir to the list of directories searched. Search only dir for include files. Ignore standard include directories. Create directory dir and all missing parent directories during file installation operations. Bootstrap a Makefile from an Imakefile. topdir specifies the location of the project root directory. curdir (usually omitted) is specified as a relative pathname from the top of the build tree to the current directory.
The -a option is equivalent to the following command sequence:

% xmkmf
% make Makefiles
% make includes
% make depend

Following is a list of the imake configuration files: Master template for imake. Imake.tmpl includes all the other configuration files, plus the Imakefile in the current directory. Contains definitions that apply across sites and vendors. Contains cpp macro definitions that are configured for the current platform. The macro definitions are fed into imake, which runs cpp to process the macros. Newlines (line continuations) are indicated by the string @@\ (double at sign, backslash). Contains site-specific (as opposed to vendor-specific) information, such as installation directories, what set of programs to build, and any special versions of programs to use during the build. The site.def file changes from machine to machine. File containing X-specific variables. File containing library rules. File containing server-specific rules. The .cf files are the vendor-specific VendorFiles that live in Imake.vb. A .cf file contains platform-specific definitions, such as version numbers of the operating system and the compiler and workarounds for missing commands. The definitions in .cf files override the defaults, defined in Imake.params. The Imakefile is a per-directory file that indicates targets to be built and installed and rules to be applied. imake reads the Imakefile and expands the rules into Makefile target entries. An Imakefile may also include definitions of make variables and list the dependencies of the targets. The dependencies are expressed as cpp macros, defined in Imake.rules. Whenever you change an Imakefile, you need to rebuild the Makefile and regenerate header file dependencies. For more information on imake, see Software Portability with imake by Paul DuBois. imapd TCP/IP command. The Interactive Mail Access Protocol (IMAP) server daemon. imapd is invoked by inetd and listens on port 143 for requests from IMAP clients.
IMAP allows mail programs to access remote mailboxes as if they were local. IMAP is a richer protocol than POP because it allows a client to retrieve message-level information from a server mailbox instead of the entire mailbox. IMAP can be used for online and offline reading. The popular Pine mail client contains support for IMAP. inetd [option] [configuration_file] TCP/IP command. Internet services daemon. inetd listens on multiple ports for incoming connection requests. When it receives one, it spawns the appropriate server. When started, inetd reads its configuration information from either configuration_file, or from the default configuration file /etc/inetd.conf. It then issues a call to getservbyname, creates a socket for each server, and binds each socket to the port for that server. It does a listen on all connection-based sockets, then waits, using select for a connection or datagram. When a connection request is received on a listening socket, inetd does an accept, creating a new socket. It then forks, dups, and execs the appropriate server. The invoked server has I/O to stdin, stdout, and stderr done to the new socket, connecting the server to the client process. When there is data waiting on a datagram socket, inetd forks, dups, and execs the appropriate server, passing it any server program arguments. A datagram server has I/O to stdin, stdout, and stderr done to the original socket. If the datagram socket is marked as wait, the invoked server must process the message before inetd considers the socket available for new connections. If the socket is marked nowait, inetd continues to process incoming messages on that port. The following servers may be started by inetd: bootpd, bootpgw, fingerd, ftpd, imapd, popd, rexecd, rlogind, rshd, talkd, telnetd, and tftpd. Do not arrange for inetd to start named, routed, rwhod, sendmail, listen, or any NFS server. inetd rereads its configuration file when it receives a hangup signal, SIGHUP. 
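A line in the inetd configuration file names the service, socket type, protocol, wait/nowait flag, user to run as, server program, and server arguments. A typical entry might look like the following (the server path is illustrative and varies by system):

```
telnet  stream  tcp  nowait  root  /usr/sbin/in.telnetd  in.telnetd
```

With nowait, inetd continues accepting new connections on the port while existing ones are being served.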
Services may be added, deleted, or modified when the configuration file is reread. Turn on socket-level debugging and print debugging information to stdout. Default configuration file. inetd's process ID. info [options] [topics] GNU hypertext reader: display online documentation previously built from Texinfo input. Info files are arranged in a hierarchy and can contain menus for subtopics. When entered without options, the command displays the top-level info file (usually /usr/local/info/dir). When topics are specified, find a subtopic by choosing the first topic from the menu in the top-level info file, the next topic from the new menu specified by the first topic, and so on. The initial display can also be controlled by the -f and -n options. Search directories, a colon-separated list, for info files. If this option is not specified, use the INFOPATH environment variable or the default directory (usually /usr/local/info). Store each keystroke in file, which can be used in a future session with the --restore option to return to this place in info. Display specified info file. Display specified node in the info file. Copy output to file instead of displaying it at the screen. Display brief help. When starting, execute keystrokes in file. Display subtopics. Display version. Use vi-like key bindings. init [option] [runlevel] System administration command. When changing runlevels, send SIGKILL seconds after SIGTERM. Default is 20. init is the first process run by any Unix machine at boot time. It verifies the integrity of all filesystems and then creates other processes, using fork and exec, as specified by /etc/inittab. Which processes may be run are controlled by runlevel. All process terminations are recorded in /var/run/utmp and /var/log/wtmp. When the runlevel changes, init sends SIGTERM and then, after 20 seconds, SIGKILL to all processes that cannot be run in the new runlevel. The current runlevel may be changed by telinit, which is often just a link to init. 
The default runlevels vary from distribution to distribution, but these are standard: Halt the system. Single-user mode. Reboot the system. Reread /etc/inittab. Check the /etc/inittab file for runlevels on your system. insmod [options] file [symbol=value ...] System administration command. Load the module file into the kernel, changing any symbols that are defined on the command line. If the module file is named file.o or file.mod, the module will be named file. Force loading of module, even if some problems are encountered. Output a load map. Name module name instead of attempting to name it from the object file's name. Do not export: do not add any external symbols from the module to the kernel's symbol table. install [options] [file] directories System administration command. Used primarily in makefiles to update files. install copies files into user-specified directories. It will not overwrite a file. Similar to cp but attempts to set permission modes, owner, and group. Create any missing directories. Set group ID of new file to group (privileged users only). Set permissions of new file to mode (octal or symbolic). By default, the mode is 0755. Set ownership to owner or, if unspecified, to root (privileged users only). Strip symbol tables. ipchains command [options] System administration command. Edit IP firewall rules in the 2.2 Linux kernel. A 2.2 Linux kernel compiled with firewall support will examine the headers of all network packets and compare them to matching rules to see what it should do with the packet. A firewall rule consists of some matching criteria and a target, a result to be applied if the packet matches the criteria. The rules are organized into chains. You can use these rules to build a firewall or just reject certain kinds of network connections. Firewall rules are organized into chains, an ordered checklist that the kernel works through looking for matches. There are three built-in chains: input, output, and forward.
Packets entering the system are tested against the input chain. Those exiting the system are checked against the output chain. If an incoming packet is destined for some other system, it is checked against the forward chain. Each of these chains has a default target, a policy, in case no match is found. User-defined chains can be created and used as targets for packets, but they have no default policies. If no match can be found in a user-defined chain, the packet is returned to the chain from which it was called and tested against the next rule in that chain. ipchains only changes the rules in the running kernel. When the system is powered off, all those changes are lost. You can use the ipchains-save command to make a script you can later run with ipchains-restore to restore your firewall settings. Such a script is often called at boot up and many distributions have an ipchains initialization script that uses the output from ipchains-save. ipchains is always invoked with one of the following commands: Append new rules to chain. Insert rules into chain at the ordinal position given by number. Delete rules from chain. Rules can be specified by their ordinal number in the chain as well as by a general rule description. Replace a rule in chain. The rule to be replaced is specified by its ordinal number. Construct a network packet that matches the given rule and check how chain will handle it. The rule must describe the source, destination, protocol, and interface of the packet to be constructed. List the rules in chain. If no chain is specified, list the rules in all chains. List masquerading connections. Set timeout value in seconds for masquerading connections. -MS always takes three parameters specifying the timeout values for TCP sessions, TCP sessions that have received a FIN packet, and UDP packets. Remove all rules from chain. Reset the packet and byte counters in chain. If no chain is specified, all chains will be reset. 
When used without specifying a chain and combined with the -L command, it lists the current counter values before they are reset. Create a new chain. The chain's name must be unique. Delete chain. Only user-defined chains can be deleted, and there can be no references to the chain to be deleted. If no argument is given, all user-defined chains will be deleted. Set the policy for a built-in chain; the target itself cannot be a chain. Print a brief help message. If the option icmp is given, print a list of valid ICMP types. A target can be the name of a chain or one of the following special values: Let the packet through. Drop the packet. Masquerade the packet so it appears that it originated from the current system. Reverse packets from masqueraded connections are unmasqueraded automatically. This is a legal target for only the forward chain, or user-defined chains used in forwarding packets. To use this target, the kernel must be compiled with support for IP masquerading. Redirect incoming packets to a local port on which you are running a transparent proxy program. If the specified port is 0 or is not given, the destination port of the packet is used as the redirection port. REDIRECT is only a legal target for the input chain or user-defined chains used in handling incoming packets. The kernel must be compiled with support for transparent proxies. Drop the packet and send an ICMP message back to the sender indicating the packet was dropped. Return to the chain from which this chain was called and check the next rule. If RETURN is the target of a rule in a built-in chain, then the chain's default policy determines the fate of the packet. Specify the source address and port of the packet. The optional port specifies the TCP, UDP, or ICMP type that will match. You may supply a port specification only if you've supplied the -p parameter with one of the tcp, udp, or icmp protocols. A colon can be used to indicate an inclusive range of ports or ICMP values to be used (e.g., 20:25 for ports 20 through 25).
If the first port parameter is missing, the default value is 0. If the second is omitted, the default value is 65535. Match packets with the destination address. The syntax for this command's parameters is the same as for the -s option. Jump to a special target or a user-defined chain. If this option is not specified for a rule, matching the rule only increases the rule's counters and the packet is tested against the next rule. Match packets from interface name[+]. name is the network interface used by your system (e.g., eth0 or ppp0). A + can be used as a wildcard, so ppp+ would match any interface name beginning with ppp. The rule applies to everything but the first fragment of a fragmented packet. Match packets from the source port. The syntax for specifying ports can be found in the preceding description of the -s option. Match packets with the destination port. The syntax for specifying ports can be found in the preceding description of the -s option. Match packets with ICMP type name or number of type. Put rule in both the input and output chain so packets will be matched in both directions. Print all IP address and port numbers in numeric form. By default, names are displayed when possible. Log information for the matching packet to the system log. Change the Type of Service field in the packet's header. The TOS field is first ANDed with the 8-bit hexadecimal mask andmask, then XORed with the 8-bit hexadecimal mask xormask. Rules that would affect the least significant bit (LSB) portion of the TOS field are rejected. Expand all numbers in a listing (-L). Display the exact value of the packet and byte counters instead of rounded figures. Match only incoming TCP connection requests, those with the SYN bit set and the ACK and FIN bits cleared. This blocks incoming TCP connections but leaves outgoing connections unaffected. Used with the -L command. Add the line number to the beginning of each rule in a listing indicating its position in the chain. 
Disable all warnings. ipchains-restore [options] System administration command. Restore firewall rules. ipchains-restore takes commands generated by ipchains-save and uses them to restore the firewall rules for each chain. Often used by initialization scripts to restore firewall settings on boot. Force updates of existing chains without asking. Print rules as they are being restored. If a nonexisting chain is targeted by a rule, create it. ipchains-save [chain] [option] System administration command. Print the IP firewall rules currently stored in the kernel to stdout. If no chain is given, all chains will be printed. Output is usually redirected to a file, which can later be used by ipchains-restore to restore the firewall. Print out rules to stderr as well as stdout, making them easier to see when redirecting output. ipfwadm category command parameters [options] ipfwadm -M [ -l | -s ] [options] Administer a firewall and its rules, firewall accounting, and IP masquerading in the 2.0 Linux kernel. This command is replaced with ipchains in the 2.2 kernel, and ipchains is replaced by iptables in the 2.4 kernel. There are four categories of rules: IP packet accounting, IP input firewall, IP output firewall, and IP forwarding firewall. The rules are maintained in lists, with a separate list for each category. See the manpage for ipfw(4) for a more detailed description of how the lists work. Each ipfwadm command specifies only one category and one rule. To create a secure firewall, you issue multiple ipfwadm commands; the combination of their rules works together to ensure that your firewall operates as you intend it to. The second form of the command is for masquerading. The commands -l and -s described in the later list are the only ones that can be used with the masquerading category, -M. One of the following flags is required to indicate the category of rules to which the command that follows the category applies. IP accounting rules.
Optionally, a direction can be specified: Count only incoming packets. Count only outgoing packets. Count both incoming and outgoing packets; this is the default. IP forwarding firewall rules. IP input firewall rules. IP masquerading administration. Can be used only with the -l or -s command. IP output firewall rules. The category is followed by a command indicating the specific action to be taken. Unless otherwise specified, only one action can be given on a command line. For the commands that can include a policy, the valid policies are: Allow matching packets to be received, sent, or forwarded. Block matching packets from being received, sent, or forwarded. Block matching packets from being received, sent, or forwarded and also return an ICMP error message to the sending host. The commands are: Append one or more rules to the end of the rules for the category. No policy is specified for accounting rules. For firewall rules, a policy is required. When the source and/or destination names resolve to more than one address, a rule is added for each possible address combination. Check whether this IP packet would be accepted, denied, or rejected by the type of firewall represented by this category. Valid only when the category is -I, -O, or -F. Requires the -V parameter to be specified (see "Parameters," later). Delete one or more entries from the list of rules for the category. No policy is specified for accounting rules. The parameters specified with this command must exactly match the parameters from an append or insert command, or no match will be found and the rule will not be removed. Only the first matching rule in the list of rules is deleted. Remove (flush) all rules for the category. Display a help message with a brief description of the command syntax. Specified with no category: % ipfwadm -h Insert a new rule at the beginning of the selected list for the category. No policy is specified for accounting rules. For firewall rules, a policy is required. 
When the source and/or destination names resolve to more than one address, a rule is added for each possible address combination. List all rules for the category. This option may be combined with the -z option to reset the packet and byte counters after listing their current values. Unless the -x option is also specified, the packet and byte counters are shown as numberK or numberM, rounded to the nearest integer. See also the -e option described under "Options" later. Change the default policy for the selected type of firewall to policy. The default policy is used when no matching rule is found. Valid only with -I, -O, or -F. Set the masquerading timeout values; valid only with -M. The three parameters are required and represent the timeout value in seconds for TCP sessions, TCP sessions after receiving a FIN packet, and UDP packets, respectively. A timeout value of 0 preserves the current timeout value of the corresponding entry. Reset the packet and byte counters for all rules in the category. This command may be combined with the -l command. The following parameters can be specified with the -a, -i, -d, or -c commands, except as noted. Multiple parameters can be specified on a single ipfwadm command line. The destination specification (optional). See the description of -S for the syntax, default values, and other requirements. ICMP types cannot be specified with -D. The protocol of the rule or packet; possible values are tcp, udp, icmp, or all. Defaults to all, which matches all protocols. -P cannot be specified with the -c command. The source IP address, specified as a hostname, a network name, or an IP address. The source address and mask default to 0.0.0.0/0. If -S is specified, -P must also be specified. The optional mask is specified as a network mask or as the number of 1s on the left of the network mask (e.g., a mask of 24 is equivalent to 255.255.255.0). The mask defaults to 32. 
One or more values of port may optionally be specified, indicating what ports or ICMP types the rule applies to. The default is all. Ports may be specified by their /etc/services entry. The syntax for indicating a range of ports is:

lowport:highport

For example:

-S 172.29.16.1/24 ftp:ftp-data

The address of the network interface the packet is received from (if category is -I) or is being sent to (if category is -O). address can be a hostname or an IP address, and defaults to 0.0.0.0, which matches any interface address. -V is required with the -c command:

-V 172.29.16.1

Identical to -V but takes a device name instead of its address:

-W ppp0

Bidirectional mode. The rule matches IP packets in both directions. This option is valid only with the -a, -i, and -d commands. Extended output. Used with the -l command to also show the interface address and any rule options. When listing firewall rules, also shows the packet and byte counters and the TOS (Type of Service) masks. When used with -M, also shows information related to delta sequence numbers. Match TCP acknowledgment packets (i.e., only TCP packets with the ACK bit set). This option is ignored for all other protocols and is valid only with the -a, -i, and -d commands. Accept masquerade packets for forwarding, making them appear to have originated from the local host. Recognizes reverse packets and automatically demasquerades them, bypassing the forwarding firewall. This option is valid only in forwarding firewall rules with policy accept. The kernel must have been compiled with CONFIG_IP_MASQUERADE defined. Numeric output. Print IP addresses and port numbers in numeric format. Log packets that match this rule to the kernel log. This option is valid only with the -a, -i, and -d commands. The kernel must have been compiled with CONFIG_IP_FIREWALL_VERBOSE defined. Redirect packets to a local socket, even if they were sent to a remote host. If port is 0 (the default), the packet's destination port is used.
This option is valid only in input firewall rules with policy accept. The kernel must have been compiled with CONFIG_IP_TRANSPARENT_PROXY defined. Specify masks used for modifying the TOS field in the IP header. When a packet is accepted (with or without masquerading) by a firewall rule, its TOS field is bitwise ANDed with andmask, and the result is bitwise XORed with xormask. The masks are specified as 8-bit hexadecimal values. This option is valid only with the -a, -i, and -d commands and has no effect when used with accounting rules or with firewall rules for rejecting or denying a packet. Verbose output. Print detailed information about the rule or packet to be added, deleted, or checked. This option is valid only with the -a, -i, -d, and -c commands. Expand numbers. Display the exact value of the packet and byte counters, instead of a rounded value. This option is valid only when the counters are being listed anyway (see also the -e option). Match TCP packets with the SYN bit set and the ACK bit cleared. This option is ignored for packets of other protocols and is valid only with the -a, -i, and -d commands. iptables command [options] System administration command. Configure netfilter filtering rules. In the 2.4 kernel, the ipchains firewall capabilities are replaced with the netfilter kernel module. netfilter can be configured to work just like ipchains, but it also comes with the module iptables, which is similar to ipchains but extensible. iptables rules consist of some matching criteria and a target, a result to be applied if the packet matches the criteria. The rules are organized into chains. You can use these rules to build a firewall, masquerade your local area network, or just reject certain kinds of network connections. There are three built-in tables for iptables, one for network filtering (filter), one for Network Address Translation (nat), and the last for specialized packet alterations (mangle).
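Each table is selected with the -t option (filter is the default). As a sketch, the following dry run prints the command that would list each table's chains; remove the echo (and run as root) to actually execute them:

```shell
# Dry run: "echo" prints each command instead of running it.
# Listing live rules requires root privileges.
for table in filter nat mangle; do
    echo iptables -t "$table" -L -n
done
```

The -n flag keeps the listing numeric, avoiding slow reverse DNS lookups.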
Firewall rules are organized into chains, ordered checklists of rules that the kernel works through looking for matches. The filter table has three built-in chains: INPUT, OUTPUT, and FORWARD. The INPUT and OUTPUT chains handle packets originating from or destined for the host system. The FORWARD chain handles packets being routed through the host system. iptables only changes the rules in the running kernel. When the system is powered off, all those changes are lost. You can use the iptables-save command to make a script you can later run with iptables-restore to restore your firewall settings. Such a script is often called at boot up, and many distributions will have an iptables initialization script that uses the output from iptables-save. iptables is always invoked with one of the following commands: Append new rules to chain. Insert rules into chain at the ordinal position given by number. Replace a rule in chain. The rule to be replaced is specified by its ordinal number. Delete rules from chain. Rules can be specified by their ordinal number in the chain as well as by a general rule description. Check how chain will handle a network packet that matches the given rule. The rule must describe the source, destination, protocol, and interface of the packet to be constructed. List the rules in chain or all chains if chain is not specified. Remove all rules from chain or from all chains if chain is not specified. Zero the packet and byte counters in chain. If no chain is specified, all chains will be reset. When used without specifying a chain and combined with the -L command, it lists the current counter values before they are reset. Create a new chain. The chain's name must be unique. This is how user-defined chains are created. Delete the specified user-defined chain or all user-defined chains if no chain is specified. Set the default policy for a built-in chain; the target itself cannot be a chain. Rename old-chain to new-chain. Print a brief help message. If the option icmp is given, print a list of valid ICMP types. A target may be the name of a chain or one of the special values ACCEPT (let the packet through), DROP (drop the packet), QUEUE (pass the packet to a userspace program), or RETURN (return to the calling chain, or apply the default policy in a built-in chain). Match packets from the source address. The address may be supplied as a hostname, a network name, or an IP address. The optional mask is the netmask to use and may be supplied either in the traditional form (e.g., /255.255.255.0) or in the modern form (e.g., /24). Match packets from the destination address. See the description of -s for the syntax of this option.
Jump to a special target or a user-defined chain. If this option is not specified for a rule, matching the rule only increases the rule's counters, and the packet is tested against the next rule. Match packets being received from interface name. name is the network interface used by your system (e.g., eth0 or ppp0). A + can be used as a wildcard, so ppp+ would match any interface name beginning with ppp. Match packets being sent from interface name. See the description of -i for the syntax for name. The rule applies only to the second or further fragments of a fragmented packet. Print all IP address and port numbers in numeric form. By default, text names are displayed when possible. Expand all numbers in a listing (-L). Display the exact value of the packet and byte counters instead of rounded figures. Explicitly load matching rule extensions associated with module. See the following section, "Match Extensions." Several kernel modules come with netfilter to extend matching capabilities of rules. Those associated with particular protocols are loaded automatically when the -p option is used to specify the protocol. Others need to be loaded explicitly with the -m option. Loaded when -p tcp is the only protocol specified. Match the specified source ports. A port may be given as a name or a number; an inclusive range may be given as port:port. Match the specified destination ports. The syntax is the same as for --source-port. Match packets with the SYN bit set and the ACK and FIN bits cleared. These are packets that request TCP connections; blocking them prevents incoming connections. Shorthand for --tcp-flags SYN,RST,ACK SYN. Loaded when -p udp is the only protocol specified. Match the specified source ports. The syntax is the same as for the --source-port option of the TCP extension. Match the specified destination ports. The syntax is the same as for the --source-port option of the TCP extension. Loaded when -p icmp is the only protocol specified. Match the specified ICMP type. type may be a numeric ICMP type or one of the ICMP type names shown by the command iptables -p icmp -h.
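A sketch combining the protocol match extensions above, shown as a dry run with echo; eth0 is a hypothetical interface name, and applying real rules requires root:

```shell
# Dry run: "echo" prints the rules instead of installing them.
# Drop incoming TCP connection requests (SYN set, ACK/FIN clear) on eth0:
echo iptables -A INPUT -i eth0 -p tcp --syn -j DROP
# Drop inbound ICMP echo-requests (pings):
echo iptables -A INPUT -p icmp --icmp-type echo-request -j DROP
```

Remove the echo to append the rules to the INPUT chain of the default filter table.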
Loaded explicitly with the -m option. Match the source address that transmitted the packet. address must be given in colon-separated hexbyte notation (for example, --mac-source 00:60:08:91:CC:B7). Loaded explicitly with the -m option. The limit extensions are used to limit the number of packets matched. This is useful when combined with the LOG target. Rules using this extension match until the specified limit is reached. Match packets at the given rate. rate is specified as a number with an optional /second, /minute, /hour, or /day suffix. When this option is not set, the default is '3/hour'. Set the maximum number of packets to match in a burst. Once the number has been reached, no more packets are matched for this rule until the number has recharged. It recharges at the rate set by the --limit option. When not specified, the default is 5. Loaded explicitly with the -m option. The multiport extensions match sets of source or destination ports. These rules can be used only in conjunction with -p tcp and -p udp. Up to 15 ports can be specified in a comma-separated list. Match the given source ports. Match the given destination ports. Match if the packet has the same source and destination port and that port is one of the given ports. Loaded explicitly with the -m option. This module works with the MARK extension target: Match the given unsigned mark value. If a mask is specified, it is logically ANDed with the mark before comparison. Loaded explicitly with the -m option. The owner extensions match a local packet's creator's user, group, process, and session IDs. This makes sense only as a part of the OUTPUT chain. Match packets created by a process owned by userid. Match packets created by a process owned by groupid. Match packets created by process ID processid. Match packets created by a process in the session sessionid. Loaded explicitly with the -m option. This module matches the connection state of a packet.
Match the packet if it has one of the states in the comma-separated list states. Valid states are INVALID, ESTABLISHED, NEW, and RELATED. Loaded explicitly with the -m option. This module matches the Type of Service field in a packet's header. Match the packet if it has a TOS of value. value can be a numeric value or a Type of Service name. iptables -m tos -h will give you a list of valid TOS values. Extension targets are optional additional targets supported by separate kernel modules. They have their own associated options. Log the packet's information in the system log. Set the syslog level by name or number (as defined by syslog.conf). Begin each log entry with the string prefix. The prefix string may be up to 30 characters long. Log the TCP sequence numbers. This is a security risk if your log is readable by users. Log options from the TCP packet header. Log options from the IP packet header. Used to mark packets with an unsigned integer value you can use later with the mark matching extension. Valid only with the mangle table. Mark the packet with value. Send the specified ICMP message type. Valid values are icmp-net-unreachable, icmp-host-unreachable, icmp-port-unreachable, or icmp-proto-unreachable. If the packet was an ICMP ping packet, type may also be echo-reply. Set the Type of Service field in the IP header. TOS is a valid target only for rules in the mangle table. Set the TOS field to value. You can specify this as an 8-bit value or as a TOS name. You can get a list of valid names using iptables -j TOS -h. Modify the source address of the packet and all future packets in the current connection. SNAT is valid only as a part of the POSTROUTING chain in the nat table. Modify the destination address of the packet and all future packets in the current connection. DNAT is valid only as a part of the PREROUTING and OUTPUT chains in the nat table. Specify the new destination address or range of addresses.
The arguments for this option are the same as the --to-source argument for the SNAT extension target. Specify the port or range of ports to use when masquerading. This option is only valid if a tcp or udp protocol has been specified with the -p option. If this option is not used, the masqueraded packet's port will not be changed. Redirect the packet to a local port. This is useful for creating transparent proxies. Specify the port or range of ports on the local system to which the packet should be redirected. This option is valid only if a tcp or udp protocol has been specified with the -p option. If this option is not used, the redirected packet's port will not be changed. iptables-restore [file] System administration command. Restore firewall rules. iptables-restore takes commands generated by iptables-save and uses them to restore the firewall rules for each chain. Often used by initialization scripts to restore firewall settings on boot. file is the name of a file whose contents were generated by iptables-save. If not specified, the command takes its input from stdin. This command was not completed at the time this book went to print. There may be options not listed here. iptables-save [chain] System administration command. Print the IP firewall rules currently stored in the kernel to stdout. If no chain is given, all chains will be printed. Output may be redirected to a file that can later be used by iptables-restore to restore the firewall. This command was not completed at the time this book went to print. There may be options not listed here. ispell [options] [files] Compare the words of one or more named files with the system dictionary. Display unrecognized words at the top of the screen, accompanied by possible correct spellings, and allow editing via a series of commands. Back up original file in filename.bak. Search file instead of standard dictionary file. Suggest different root/affix combinations. Expect nroff or troff input file.
Search file instead of personal dictionary file. Expect TeX or LaTeX input file. Consider chars to be legal, in addition to a-z and A-Z. Do not back up original file. each is correct. Expect all files to be formatted by type. Never consider words that are n characters or less to be misspelled. Use hat notation (^L) to display control characters and M- to display characters with the high bit set. Display help screen. Accept the word in this instance. Replace with suggested word that corresponds to number. Invoke shell and execute command in it. Prompt before exiting. Accept word as correctly spelled, but do not add it to personal dictionary. Accept word and add it (capitalized, if so in file) to personal dictionary. Search system dictionary for words. Exit without saving. Replace word. Accept word and add lowercase version of it to personal dictionary. Redraw screen. Suspend ispell. join [options] file1 file2 Join lines of two sorted files by matching on a common field. If either file1 or file2 is -, read from standard input. Print a line for each unpairable line in file filenum, in addition to the normal output. Replace missing input fields with string. Ignore case differences when comparing keys. Join field in file1 is fieldnum1. Default is the first field. Join field in file2 is fieldnum2. Default is the first field. Order the output fields according to fieldlist, where each entry in the list is in the form filenum.fieldnum. Entries are separated by commas or blanks. Specify the field-separator character (default is whitespace). Print only unpairable lines from file filenum. kbd_mode [option] Print or set the current keyboard mode, which may be RAW, MEDIUMRAW, or XLATE. Set mode to XLATE (ASCII mode). Set mode to MEDIUMRAW (keycode mode). Set mode to RAW (scancode mode). Set mode to UNICODE (UTF-8 mode). kbdrate [options] System administration command. Control the rate at which the keyboard repeats characters, as well as its delay time.
Using this command without options sets a repeat rate of 10.9 characters per second; the default delay is 250 milliseconds. When Linux boots, however, it sets the keyboard rate to 30 characters per second. Suppress printing of messages. Specify the repeat rate, which must be one of the fixed set of rates supported by the keyboard hardware, ranging from 2.0 to 30.0 characters per second. Specify the delay, which must be one of the following (in milliseconds): 250, 500, 750, or 1000. kerneld System administration command. kerneld automatically loads kernel modules when they are needed, thereby reducing kernel memory usage from unused loaded modules and replacing manual loading of modules with modprobe or insmod. If a module has not been used for more than one minute, kerneld automatically removes it. kerneld comes with the modules-utilities package and is set up during kernel configuration; its functionality is provided by interactions between that package and the kernel. kerneld is aware of most common types of modules. When more than one possible module can be used for a device (such as a network driver), kerneld uses the configuration file /etc/conf.modules, which contains path information and aliases for all loadable modules, to determine the correct module choice. kerneld can also be used to implement dial-on-demand networking, such as SLIP or PPP connections. The network connection request can be processed by kerneld to load the proper modules and set up the connection to the server. kill [option] IDs This is the /bin/kill command; there is also a shell command of the same name. Send a signal to terminate one or more process IDs. You must own the process or be a privileged user. If no signal is specified, TERM is sent. With a signal number of 9 (KILL), the kill cannot be caught by the process; use this to kill a process that a plain kill doesn't terminate. The default is TERM. killall [options] names Kill processes by command name. If more than one process is running the specified command, kill all of them.
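The kill entry above can be tried safely against a disposable background job; a minimal sketch:

```shell
# Start a throwaway background process, terminate it with the default
# TERM signal, then confirm it is gone using the null signal (-0).
sleep 30 &
pid=$!
kill "$pid"                     # same as kill -TERM "$pid"
wait "$pid" 2>/dev/null || true # reap the terminated process
# prints: gone
if kill -0 "$pid" 2>/dev/null; then echo alive; else echo gone; fi
```

kill -0 sends no signal at all; it only checks whether the process still exists, which makes it a convenient existence test in scripts.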
Treat command names that contain a / as files; kill all processes that are executing that file. Send signal to process (default is TERM). signal may be a name or number. Wait for all killed processes to die. Note that killall may wait forever if the signal was ignored or had no effect, or if the process stays in zombie state. killall5 The System V equivalent of killall, this command kills all processes except those on which it depends. klogd [options] System administration command. Kernel logging daemon: control which of the 8 levels of kernel messages, from 0 (KERN_EMERG) to 7 (KERN_DEBUG), are displayed on the console, and log them through syslogd. Print all messages of a higher priority (lower number) than level to the console. Print all messages to file; suppress normal logging. Use file as source of kernel symbols. Avoid autobackgrounding. This is needed when klogd is started from init. One-shot mode. Prioritize and log all current messages, then immediately exit. Suppress reading of messages from the /proc filesystem. Sources for definitions of each logging level A file examined by klogd for messages klogd's process ID ksyms [options] Print a list of all exported kernel symbols (name, address, and defining module, if applicable). Include symbols from unloaded modules. Suppress header message. Include starting address and size. Useful only for symbols in loaded modules. Another source of the same information lastlog [options] System administration command. Print the last login times for system accounts. Login information is read from the file /var/log/lastlog. Print only logins more recent than n days ago. Print only login information for user name. ld [options] objfiles Combine several objfiles, in the specified order, into a single executable object module (a.out by default). ld is the link editor and is often invoked automatically by compiler commands. Consult file for commands. Force the assignment of space to common symbols. Create the global symbol with the value expression. Set symbol as the address of the output file's entry point. Produce a linkable output file; attempt to set its magic number to OMAGIC.
Include the archive file arch in the list of files to link. Emulate linker. Make text read-only; attempt to set NMAGIC. Produce output file even if errors are encountered. Place output in output, instead of a.out. Specify output format. Do not include any symbol information in output. Create a shared library. Do not sort global common symbols by size. Announce each input file's name as it is processed. Force symbol to be undefined. Show version number. Print information about ld; print the names of input files while attempting to open them. Warn when encountering common symbols combined with other constructs. Provide only one warning per undefined symbol. With -s or -S, delete all local symbols that begin with L. Search directory dir before standard search directories (this option must precede the -l option that searches that directory). Display a link map on standard output. Print a link map to file. Allow reading of and writing to both data and text; mark output if it supports Unix magic numbers; do not page-align data. Obtain symbol names and addresses from file, but suppress relocation of file and its inclusion in output. Do not include debugger symbol information in output. ldconfig [options] directories System administration command. Examine the libraries in the given directories, /etc/ld.so.conf, /usr/lib, and /lib; update links and cache where necessary. Usually run in startup files or after the installation of new shared libraries. Debug. Suppress all normal operations. Library mode. Expect libraries as arguments, not directories. Manually link specified libraries. Suppress examination of /usr/lib and /lib and reading of /etc/ld.so.conf; do not cache. Do not cache; only link. Print all directories and candidate libraries in the cache. Expects no arguments. Verbose. Include version number, and announce each directory as it is scanned and links as they are created. Do not link; only rebuild cache. Linker and loader. List of directories that contain libraries.
List of the libraries found in those libraries mentioned in /etc/ld.so.conf. ldd [options] programs Display a list of the shared libraries each program requires. Display ldd's version. Display the linker's version. less [options] [filename] less is a program for paging through files or other output. It was written in reaction to the perceived primitiveness of more (hence its name). Some commands may be preceded by a number. Set number of lines to scroll to num. Default is one screenful. A negative num sets the number to num lines less than the current number. Run command on startup. If command is a number, jump to that line. The option ++ applies this command to each file in the command-line list. Print help screen. Ignore all other options; do not page through file. When searching, begin after last line displayed. (Default is to search from second line displayed.) Use buffers buffers for each file (default is 10). Buffers are 1 kilobyte in size. Redraw screen from top, not bottom. Suppress dumb-terminal error messages. Automatically exit after reaching EOF twice. Force opening of directories and devices; do not print warning when opening binaries. Highlight only string found by past search command, not all matching strings. Never scroll backward more than num lines at once. Make searches case-insensitive, unless the search string contains uppercase letters. Position target line on line num of screen. Target line can be the result of a search or a jump. Count lines beginning from 1 (top line). A negative num is counted back from bottom of screen. Read file to define special key bindings. Display more-like prompt, including percent of file read. Do not calculate line numbers. Affects -m and -M options and = and v commands (disables passing of line number to editor). When input is from a pipe, copy output to file as well as to screen. (Prompt for overwrite authority if file exists.) At startup, search for first occurrence of pattern. 
Set medium prompt (specified by -m). Set long prompt (specified by -M). Set message printed by = command. Disable ringing of bell on attempts to scroll past EOF or before beginning of file. Attempt to use visual bell instead. Display "raw" control characters, instead of using ^x notation. Sometimes leads to display problems. Print successive blank lines as one line. Edit file containing tag. Consult ./tags (constructed by ctags). Treat backspaces and carriage returns as printable input. Print lines after EOF as blanks instead of tildes (~). Set tab stops to every n characters. Default is 8. Never scroll forward more than n lines at once. Do not automatically allocate buffers for data read from a pipe. If -b specifies a number of buffers, allocate that many. If necessary, allow information from previous screens to be lost. Redraw screen by clearing it and then redrawing from top. Automatically exit after reaching EOF once. Never highlight matching search strings. Make searches case-insensitive, even when the search string contains uppercase letters. Prompt more verbosely than with -m, including percentage, line number, and total lines. Print line number before each line. Similar to -o but does not prompt when overwriting file. Set prompt (as defined by -m, -M, or =). Default is short prompt (-m). Never ring terminal bell. Cut, do not fold, long lines. With the -t option or :t command, read file instead of ./tags. Treat backspaces and carriage returns as control characters. Do not send initialization and deinitialization strings from termcap to terminal. Many commands can be preceded by a numeric argument, referred to as number in the command descriptions. Scroll forward the default number of lines (usually one windowful). Similar to SPACE but allows the number of lines to be specified, in which case it resets the default to that number. Scroll forward. Default is one line. Display all lines, even if the default is more lines than the screen size. Scroll forward. 
Default is one-half the screen size. The number of lines may be specified, in which case the default is reset. Scroll backward. Default is one windowful. Like b but allows the number of lines to be specified, in which case it resets the default to that number. Scroll backward. Default is one line. Display all lines, even if the default is more lines than the screen size. Scroll backward. Default is one-half the screen size. The number of lines may be specified, in which case the default is reset. Like r but discard buffered input. Scroll forward. When an EOF is reached, continue trying to find more output, behaving similarly to tail -f. Go to a position number percent of the way into the file. Behave like { but prompt for two characters, which it substitutes for { and } in its search. Behave like } but prompt for two characters, which it substitutes for { and } in its search. Prompt for a lowercase letter and then use that letter to mark the current position. Prompt for a lowercase letter and then go to the position marked by that letter. There are some special characters: Return to position before last "large movement." Beginning of file. End of file. Same as '. Search backward, beginning at the line before the top line. Treats !, *, and @ as special characters when they begin pattern, as / does. Same as /*. Same as ?*. Repeat last pattern search. Repeat last pattern search, in the reverse direction. Repeat previous search command but as though it were prefaced by *. Repeat previous search command but as though it were prefaced by * and in the opposite direction. Toggle search highlighting. Read in filename and insert it into the command-line list of filenames. Without filename, reread the current file. filename may contain special characters: Name of current file Name of previous file Same as :e. Read in next file in command-line list. Read in previous file in command-line list. Read in first file in command-line list.
Print filename, position in command-line list, line number on top of window, total lines, byte number, and total bytes. Expects to be followed by a command-line option letter. Toggles the value of that option or, if appropriate, prompts for its new value. Expects to be followed by a command-line option letter. Resets that option to its default. Expects to be followed by a command-line option letter. Resets that option to the opposite of its default, where the opposite can be determined. Expects to be followed by a command-line option letter. Display that option's current setting. Execute command each time a new file is read in. Exit. Not valid for all versions. Invoke editor specified by $VISUAL or $EDITOR, or vi if neither is set. Not valid for all versions. Invoke $SHELL or sh. If command is given, run it and then exit. Special characters: Last shell command Not valid for all versions. Pipe fragment of file (from first line on screen to mark-letter) to command. mark-letter may also be ^ (beginning of file) or $ (end of file). ln [options] sourcename [destname] ln [options] sourcenames destdirectory Create pseudonyms (links) for files, allowing them to be accessed by different names. In the first form, link sourcename to destname, where destname is usually a new filename, or (by default) the current directory. If destname is an existing file, it is overwritten; if destname is an existing directory, a link named sourcename is created in that directory. In the second form, create links in destdirectory, each link having the same name as the file specified. Back up files before removing the originals. Allow hard links to directories. Available to privileged users. Force the link (don't prompt for overwrite permission). Append a suffix to backup files instead of the default ~. Control the types of backups made. The acceptable values for version-control are: Numbered. Simple (~) unless a numbered backup exists; then make a numbered backup. Simple. locate [options] pattern Search database(s) of filenames and print matches.
*, ?, [, and ] are treated specially; / and . are not. Matches include all files that contain pattern, unless pattern includes metacharacters, in which case locate requires an exact match. Search databases in path. path must be a colon-separated list. lockfile [options] filenames Create semaphore file(s), used to limit access to a file. When lockfile fails to create some of the specified files, it pauses for 8 seconds and retries the last one on which it failed. The command processes flags as they are encountered (i.e., a flag that is specified after a file will not affect that file). Time lockfile waits before retrying after a failed creation attempt. Default is 8 seconds. Invert return value. Useful in shell scripts. Time (in seconds) after a lockfile was last modified at which it will be removed by force. See also -s. If the permissions on the system mail spool directory allow it or if lockfile is suitably setgid, it can lock and unlock your system mailbox with the options -ml and -mu, respectively. Stop trying to create files after retries retries. The default is -1 (never stop trying). When giving up, remove all created files. After a lockfile has been removed by force (see -l), a suspension of 16 seconds takes place by default. (This is intended to prevent the inadvertent immediate removal of any lockfile newly created by another program.) Use -s to change the default 16 seconds. logger [options] [message...] TCP/IP command. Add entries to the system log (via syslogd). A message can be given on the command line, or standard input is logged. Read message from file. Include the process ID of the logger process. Enter message with the specified priority pri. Default is user.notice. Mark every line in the log with the specified tag. login [name | option] Log in to the system. Suppress second login authentication. Specify name of remote host. Normally used by servers, not humans; may be used only by root. Preserve previous environment.
logname [option] Consult /var/run/utmp for user's login name. If found, print it; otherwise, exit with an error message. logrotate [options] config_files System administration command. Manipulate log files according to commands given in config_files. Debug mode. No changes will be made to log files. Save state information in file. The default is /var/lib/logrotate.status. Display usage, version, and copyright information. Compress old versions of log files with gzip. Copy log file, then truncate it in place. For use with programs whose logging cannot be temporarily halted. After rotation, re-create log file with the specified permissions, owner, and group. permissions must be in octal. If any of these parameters is missing, the log file's original attributes will be used. Rotate log files every day. Don't compress log file until the next rotation. Mail any errors to the given address. End a postrotate or prerotate script. Rotate log file even if it is empty. Overrides the default notifempty option. Read the file into the current file. If file is a directory, read all files in that directory into the current file. Mail any deleted logs to address. Rotate log files only the first time logrotate is run in a month. Override compress. Override copytruncate. Override create. Override delaycompress. Override olddir. Override ifempty. Move logs into directory for rotation. directory must be on the same physical device as the original log files. Begin a script of directives to apply after the log file is rotated. The script ends when the endscript directive is read. Begin a script of directives to apply before a log file is rotated. The script ends when the endscript directive is read. The number of times to rotate a log file before removing it. Rotate log file when it is greater than n bytes. n can optionally be followed by k for kilobytes or M for megabytes. look [options] string [file] Display lines in the sorted file (/usr/dict/words by default) that begin with string. Use alternate dictionary /usr/dict/web2.
Compare only alphanumeric characters. Search is not case-sensitive. Stop checking after the first occurrence of character. lpc [command] System administration command. Control line printer. If executed without a command, lpc will accept commands from standard input. Get a list of commands or help on specific commands. Terminate current printer daemon and disable printing for the specified printer. Remove files that cannot be printed from the specified printer queues. Disable specified printer queues. Disable specified printer queues and put message in the printer status file. Enable the specified printer queues. Exit lpc. Try to restart printer daemons for the specified printers. Enable the printer queues and start printing daemons for the specified printers. Return the status of the specified printers. Disable the specified printer daemons after any current jobs are completed. Put the specified jobs at the top of the printer's queue in the order the jobs are listed. Enable print queues and restart daemons for the specified printers. lpd [option] [port] TCP/IP command. Line printer daemon. lpd is usually invoked at boot time from the rc2 file. It makes a single pass through the printer configuration file (traditionally /etc/printcap) to find out about the existing printers and prints any files left after a crash. It then accepts requests to print files in a queue, transfer files to a spooling area, display a queue's status, or remove jobs from a queue. In each case, it forks a child process for each request, then continues to listen for subsequent requests. If port is specified, lpd listens on that port; otherwise, it uses the getservbyname call to ascertain the correct port. The file lock in each spool directory prevents multiple daemons from becoming active simultaneously. After the daemon has set the lock, it scans the directory for files beginning with cf.
Each line begins with a key character, which specifies information about the print job or what to do with the remainder of the line. Key characters are: Classification -- string to be used for the classification line on the burst page. cifplot file. Formatted file -- name of a file to print that is already formatted. Graph file. Hostname -- name of machine where lpd was invoked. Job name -- string to be used for the jobname on the burst page. Literal -- this line contains identification information from the password file and causes the banner page to be printed. Formatted file, but suppress page breaks and printing of control characters. Mail -- send mail to the specified user when the current print job completes. ditroff file. Person -- login name of person who invoked lpd. DVI file. Title -- string to be used as the title for pr. troff file. Unlink -- name of file to remove upon completion of printing. Enable logging of all valid requests. Printer description file Spool directories Minimum free space to leave Printer devices Machine names allowed printer access Machine names allowed printer access, but not under same administrative control lpq [options] [user] Check the print spool queue for status of print jobs. For each job, display username, rank in the queue, filenames, job number, and total file size (in bytes). If user is specified, display information only for that user. Print information about each file comprising a job. Specify which printer to query. Without this option, lpq uses the printer set in the PRINTER environment variable or the default system printer. Check status for job number num. lpr [options] files Send files to the printer spool queue. Expect data produced by cifplot. Expect data produced by TeX in the DVI (device- independent) format. Use a filter that interprets the first character of each line as a standard carriage control character. Expect standard plot data as produced by the plot routines. 
Use a filter that allows control characters to be printed and suppresses page breaks. Expect data from ditroff (device-independent troff). Use pr to format the files. Expect data from troff (phototypesetter commands). Expect a raster image for devices like the Benson Varian. Output to printer instead of the printer specified in the PRINTER environment variable or the system default. Do not print the burst page. Send mail to notify of completion. Remove the file upon completion of spooling. Cannot be used with the -s option. Use symbolic links instead of copying files to the spool directory. This can save time and disk space for large files. Files should not be modified or removed until they have been printed. Print num copies of each listed file. Replace system name on the burst page with string. Replace the job name on the burst page with name. If omitted, uses the first file's name. Use title as the title when using pr. Indent the output. Default is 8 columns. Specify number of columns to indent with the cols argument. Set num characters as the page width for pr. lprm [options] [jobnum] [user] Remove a print job from the print spool queue. You must specify a job number or numbers, which can be obtained from lpq. A privileged user may use the user parameter to remove all files belonging to a particular user or users. Specify printer name. Normally, the default printer or printer specified in the PRINTER environment variable is used. Remove all jobs in the spool owned by user. lpstat [options] Show the status of the print queue. With options that take a list argument, omitting the list produces all information for that option. list can be separated by commas or, if enclosed in double quotes, by spaces. Show whether the list of printer or class names is accepting requests. Show information about printer classes named in list. Show the default printer destination. Verify that the list of forms is known to lp. 
Use after -f to describe available forms, after -p to show printer configurations, or after -s to describe printers appropriate for the specified character set or print wheel. Show the status of output requests. list contains printer names, class names, or request IDs. Show the status of printers named in list. Show whether the print scheduler is on or off. Show the job's position in the print queue. Summarize the print status (shows almost everything). Show all status information (reports everything). Show request status for users on list. list can be all to show information on all users. Show device associated with each printer named in list. lptest [length] [count] Generate a lineprinter test pattern on standard output. Prints a standard ripple pattern of all printable ASCII characters, offset by one position on each succeeding line. Specify the output line length (default is 79). Specify the number of lines to print (default is 200). ls [options] [names] List the contents of directories. If no names are given, list the files in the current directory. Disables colorization. This is the default. Provided to override a previous color option. Same as --color, but only if standard output is a terminal. Very useful for shell scripts and command aliases, especially if your favorite pager does not support color control codes. Report only on the directory, not its contents. Print directory contents in exactly the order in which they are stored, without attempting to sort them. List times in full, rather than use the standard abbreviations. List the inode for each file. Mark directories by appending / to them. Show nonprinting characters as ?. List files in reverse order (by name or by time). Print size of the files in blocks. Sort files according to modification time (newest first). Sort files according to the file access time. Print version information on standard output, then exit. List files in rows going across the screen. List all files, including the normally hidden files whose names begin with a period. Does not include the . and .. directories.
Do not list files ending in ~, unless given as arguments. List files in columns (the default format). Flag filenames by appending / to directories, * to executable files, @ to symbolic links, | to FIFOs, and = to sockets. In long format, do not display group name. Do not list files whose names match the shell pattern pattern, unless they are given on the command line. List the file or directory referenced by a symbolic link rather than the link itself. Do not list filenames. Quote filenames with "; quote nongraphic characters with alphabetic and octal backslash sequences. Recursively list subdirectories as well as the specified (or current) directory. Sort by file size, largest to smallest. Assume that each tabstop is n_cols columns wide. The default is 8. Do not sort files. Similar to -f but display in long format. Sort by file extension. List all files in the current directory and their sizes; use multiple columns and mark special files: ls -asCF List the status of directories /bin and /etc: ls -ld /bin /etc List C source files in the current directory, the oldest first: ls -rt *.c Count the nonhidden files in the current directory: ls | wc -l lsattr [options] [files] Print attributes of files on a Linux Second Extended File System. See also chattr. List all files in specified directories. List directories' attributes, not the attributes of the contents. List directories and their contents recursively. List version of files. List version and then exit. lsmod System administration command. List all loaded modules: their name, size (in 4KB units) and, if appropriate, a list of referring modules. Source of the same information. m4 [options] [macros] [files] Macro processor for C and other files. Operate interactively, ignoring interrupts. Specify flag-level debugging. Specify the length of debugging output. Place output in file. Despite the name, print error messages on standard error. Prepend m4_ to all built-in macro names. 
Insert #line directives for the C preprocessor. Set the size of the push-back and argument collection buffers to n (default is 4096). Define name as value or, if value is not specified, define name as null. Consider all warnings to be fatal, and exit after the first of them. Record m4's frozen state in file, for later reloading. Behave like traditional m4, ignoring GNU extensions. Set symbol-table hash array to n (default is 509). Search directory for include files. Load state from file before starting execution. Undefine name. mail [options] [users] Read mail or send mail to other users; alternatives include pine and elm, which are much easier to use. This section presents mail commands, options, and files. To get you started, here are two of the most basic commands. To enter interactive mail-reading mode, type: mail To begin writing a message to user, type: mail user Enter the text of the message, one line at a time, pressing Enter at the end of each line. To end the message, enter a single period (.) in the first column of a new line, and press Enter. Do not consult /etc/mail.rc when starting up. Read mail in POP mode. Set subject to subject. Process contents of /var/spool/mail/$user. Default. Verbose. Print information about mail delivery to standard output. Interactive -- even when standard input has been redirected from the terminal. When printing a mail message or entering a mail folder, do not display message headers. Disable POP mode. Execute a shell escape from compose mode. List compose mode escapes. Add names to or edit the Bcc: header. Add names to or edit the Cc: header. Read in the dead.letter file. Invoke text editor. Insert messages into message being composed. Similar to ~f, but include message headers. Add to or change all the headers interactively. Similar to ~f, but indent with a tab. Similar to ~m, but include message headers. Print message header fields and message being sent. Abort current message composition. Append file to the message. Print numth previous message; defaults to immediately previous. 
Specify remote accounts on remote machines that are yours. Tell mail not to reply to them. Similar to save, but do not mark message for deletion. Delete current message and display next one. Read messages saved in a file. Files can be: Compose message to user. Default. Move specified messages to mbox on exiting. Type next message or next message that matches argument. Always include this list of header fields when printing messages. With no arguments, list retained fields. Override saveignore to retain specified fields. Print first few lines of each specified message. Edit message with editor specified by the VISUAL environment variable. Move mail's attention to next windowful of text. Use z- to move it back. These options are used inside of the .mailrc file. The syntax is set option or unset option. Append (do not prepend) messages to mbox. Prompt for subject. Prompt for blind carbon copy recipients. Prompt for carbon copy recipients. Prompt for Subject line. Print next message after a delete. Display messages in chronological order, most recent last. Same as -d on command line. Interpret a solitary . as an EOF. Define directory to hold mail folders. Keep message in system mailbox upon quitting. Ignore interrupt signals from terminal. Print them as @. Do not treat ^D as an EOF. Do not remove sender from groups when mailing to them. Same as -N on command line. Retrieve POP mail via POP3, not KPOP, protocol. Do not save aborted letters to dead.letter. Retrieve mail with POP3 protocol, and save it in mbox.pop. Set prompt to a different string. Switch roles of Reply and reply. Do not print version at startup. When given the specifier /x:y, expand all messages that contain the string y in the x header field. Same as -v on command line. Display status while retrieving POP mail. Contains reminders that the operating system mails to you. Mail delivery configuration file. Mail configuration file. Keeps track of your automatic response recipients. 
Contains the automatic response message. mailq [option] System administration command. List all messages in the sendmail mail queue. Equivalent to sendmail -bp. mailstats [options] System administration command. Display a formatted report of the current sendmail mail statistics. Use sendmail configuration file file instead of the default sendmail.cf file. Use sendmail statistics file file instead of the file specified in the sendmail configuration file. Don't show the name of the mailer in the report. make [options] [targets] [macro definitions] Update one or more targets according to dependency instructions in a description file in the current directory. By default, this file is called makefile or Makefile. Options, targets, and macro definitions can be in any order. Macro definitions are typed as: name=string For more information on make, see Managing Projects with make by Andrew Oram and Steve Talbott. Print detailed debugging information. Override makefile macro definitions with environment variables. Use makefile as the description file; a filename of - denotes standard input. Print options to make command. Ignore command error codes (same as .IGNORE). Attempt to execute jobs jobs simultaneously, or, if no number is specified, as many jobs as possible. Abandon the current target when it fails, but keep working with unrelated targets. Attempt to keep load below load, which should be a floating-point number. Used with -j. Print commands but don't execute (used for testing). Never remake file or cause other files to be remade on account of it. Print rules and variables in addition to normal execution. Query; return 0 if file is up-to-date; nonzero otherwise. Do not use default rules. Do not display command lines (same as .SILENT). Touch the target files, without remaking them. Show version of make. Display the current working directory before and after execution. Print warning if a macro is used without being defined. cd to directory before beginning make operations. 
A subsequent -C directive will cause make to attempt to cd into a directory relative to the current working directory. Include directory in list of directories that contain included files. Cancel previous -k options. Useful in recursive makes. Behave as though file has been recently updated. Instructions in the description file are interpreted as single lines. If an instruction must span more than one input line, use a backslash (\) at the end of the line so that the next line is considered as a continuation. The description file may contain any of the following types of lines: Blank lines are ignored. A pound sign (#) can be used at the beginning of a line or anywhere in the middle. make ignores everything after the #. Depending on one or more targets, certain commands that follow will be executed. Possible formats include: targets : dependencies targets : dependencies ; command Subsequent commands are executed if dependency files (the names of which may contain wildcards) do not exist or are newer than a target. If no prerequisites are supplied, then subsequent commands are always executed (whenever any of the targets are specified). No tab should precede any targets. These specify that files ending with the first suffix can be prerequisites for files ending with the second suffix (assuming the root filenames are the same). Either of these formats can be used: .suffix.suffix: .suffix: The second form means that the root filename depends on the filename with the corresponding suffix. Commands are grouped below the dependency line and are typed on lines that begin with a tab. If a command is preceded by a hyphen (-), make ignores any error returned. If a command is preceded by an at sign (@), the command line won't echo on the display (unless make is called with -n). These have the following form: name = string or define name string endef Blank space is optional around the =. 
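The description-file conventions above (a macro definition, a dependency line, and tab-indented commands) can be exercised with a small self-contained sketch. The filenames and the MSG macro here are hypothetical, and GNU make is assumed to be installed:

```shell
# Sketch of a minimal description file, built and run from the shell.
# All filenames and the MSG macro are hypothetical examples.
dir=$(mktemp -d) && cd "$dir"
echo 'source data' > input.txt

# printf is used so the command line gets the literal tab make requires.
{
  printf 'MSG = processed\n'               # macro definition (name = string)
  printf 'output.txt : input.txt\n'        # target : dependency
  printf '\t@echo $(MSG) > output.txt\n'   # tab-indented command
} > Makefile

make -s output.txt     # runs the command: the target does not yet exist
cat output.txt         # -> processed
```

Running make a second time does nothing, because output.txt is now newer than its dependency input.txt.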
Similar to the C include directive, these have the form: include files The list of prerequisites that have been changed more recently than the current target. Can be used only in normal description file entries -- not suffix rules. The name of the current target, except in description file entries for making libraries, where it becomes the library name. Can be used both in normal description file entries and in suffix rules. The name of the current prerequisite that has been modified more recently than the current target. The name -- without the suffix -- of the current prerequisite that has been modified more recently than the current target. Can be used only in suffix rules. The name of the corresponding .o file when the current target is a library module. Can be used both in normal description file entries and in suffix rules. A space-separated list of all dependencies, with no duplications. A space-separated list of all dependencies, including duplications. These are a more general application of the idea behind suffix rules. If a target and a dependency both contain %, GNU make will substitute any part of an existing filename. For instance, the standard suffix rule: .c.o : $(cc) -o $@ $< can be written as the following pattern rule: %.o : %.c $(cc) -o $@ $< The directory portion of any internal macro name except $?. Valid uses are: $(*D) $(@D) $(?D) $(<D) $(%D) $(^D) The file portion of any internal macro name except $?. Valid uses are: $(*F) $(@F) $(?F) $(<F) $(%F) $(^F) Replace all occurrences of from with to in string. Similar to subst, but treat % as a wildcard within pattern. Substitute to for any word in string that matches pattern. Remove all extraneous whitespace. Return substring if it exists within mainstring; otherwise, return null. Return those words in string that match at least one word in pattern. patterns may include the wildcard %. Remove those words in string that match at least one word in pattern. patterns may include the wildcard %. 
Return list, sorted in lexical order. Return the directory part (everything up to the last slash) of each filename in list. Return the nondirectory part (everything after the last slash) of each filename in list. Return the suffix part (everything after the last period) of each filename in list. Return everything but the suffix part (everything up to the last period) of each filename in list. Return each filename given in list with suffix appended. Return each filename given in list with prefix prepended. Return a list formed by concatenating the two arguments, word by word (e.g., $(join a b,.c .o) becomes a.c b.o). Return the nth word of string. Return the number of words in string. Return the first word in the list list. Return a list of existing files in the current directory that match pattern. Return one of the following strings that describes how variable was defined: undefined, default, environment, environment override, file, command line, override, or automatic. Return the results of command. Any newlines in the result are converted to spaces. This function works similarly to backquotes in most shells. Evaluates to the current definition of $(macro), after substituting the string s2 for every occurrence of s1 that occurs either immediately before a blank or tab or at the end of the macro definition. Commands associated with this target are executed if make can't find any description file entries or suffix rules with which to build a requested target. If this target exists, export all macros to all child processes. Ignore error codes. Same as the -i option. Always execute commands under a target, even if it is an existing, up-to-date file. Files you specify for this target are not removed when you send a signal (such as an interrupt) that aborts make or when a command line in your description file returns an error. Execute commands, but do not echo them. Same as the -s option. Suffixes associated with this target are meaningful in suffix rules. 
If no suffixes are listed, the existing list of suffix rules is effectively "turned off." makedbm [options] infile outfile NFS/NIS command. Make NIS dbm file. makedbm takes infile and converts it to a pair of files in ndbm format, namely outfile.pag and outfile.dir. Each line of the input file is converted to a single dbm record. All characters up to the first TAB or SPACE form the key, and the rest of the line is the data. If line ends with \&, the data for that record is continued on to the next line. It is left for the NIS clients to interpret #; makedbm does not treat it as a comment character. infile can be -, in which case the standard input is read. makedbm generates a special entry with the key yp_last_modified, which is the date of infile (or the current time, if infile is -). Interdomain. Propagate a map to all servers using the interdomain name server named. Create a special entry with the key yp_domain_name. Create a special entry with the key yp_input_file. Convert keys of the given map to lowercase. Create a special entry with the key yp_master_name. If no master hostname is specified, yp_master_name is set to the local hostname. Create a special entry with the key yp_output_name. Secure map. Accept connections from secure NIS networks only. Undo a dbm file -- print out a dbm file, one entry per line, with a single space separating keys from values. For example, a simple script can convert /etc/passwd into input suitable for makedbm to make the NIS file passwd.byname. That is, the key is a username and the value is the remaining line in the /etc/passwd file. makemap [options] type name System administration command. Transfer from standard input to sendmail's database maps. Input should be formatted as: key value You may comment lines with #, may substitute parameters with %n, and must escape literal % by entering it as %%. The type must be dbm, btree, or hash. The name is a filename to which makemap appends standard suffixes. Allow duplicate entries. Valid only with btree type maps. 
Suppress conversion of uppercase to lowercase. Append a zero byte to each key. Append to existing file instead of replacing it. If some keys already exist, replace them. (By default, makemap will exit when encountering a duplicated key.) Ignore safety checks. man [options] [section] [title] Display information from the online reference manuals. man locates and prints the named title from the designated reference section. Expect a pure ASCII file, and format it for a 7-bit terminal or terminal emulator. Show all pages matching title. Leave blank lines in output. Display debugging information. Suppress actual printing of manual pages. Same as whatis command. Same as apropos command. Search local files, not system files, for manual pages. If i is given as filename, search standard input. Search systems' manual pages. systems should be a comma-separated list. Preprocess manual pages with preprocessors before turning them over to nroff, troff, or groff. Always runs soelim first. Set prompt if less is used as pager. Format the manual page with /usr/bin/groff -Tgv -mandoc. Implied by -T and -Z. Perform a consistency check between manual page cache and filesystem. Print pathnames of entries on standard output. Reset all options to their defaults. Assume current locale to be locale; do not consult the setlocale() function. Select paging program pager to display the entry. Format groff or troff output for device, such as dvi, latin1, X75, and X100. Do not allow postprocessing of manual page after groff has finished formatting it. 
Manual pages are divided into sections, depending on their intended audience: Executable programs or shell commands System calls (functions provided by the kernel) Library calls (functions within system libraries) Special files (usually found in /dev) File formats and conventions (e.g., /etc/passwd) Games Macro packages and conventions System administration commands (usually only for a privileged user) Kernel routines (nonstandard) manpath [options] Attempt to determine path to manual pages. Check $MANPATH first; if that is not set, consult /etc/man.conf, user environment variables, and the current working directory. The manpath command is a symbolic link to man, but most of the options are ignored for manpath. merge [options] file1 file2 file3 Perform a three-way file merge. merge incorporates all changes that lead from file2 to file3 and puts the results into file1. merge is useful for combining separate changes to an original. If the changes from file2 and file3 overlap, merge puts brackets around the conflict, with lines preceded by <<<<<<< and >>>>>>>. A typical conflict looks like this: <<<<<<< file1 relevant lines from file1 ======= relevant lines from file3 >>>>>>> file3 If there are conflicts, the user should edit the result and delete one of the alternatives. Don't warn about conflicts. Send results to standard output instead of overwriting file1. Quiet; do not warn about conflicts. Output conflicts using the -A style of diff3. This merges all changes leading from file2 to file3 into file1 and generates the most verbose output. Output conflict information in a less verbose style than -A; this is the default. Specify up to three labels to be used in place of the corresponding filenames in conflict reports. That is: merge -L x -L y -L z file_a file_b file_c generates output that looks as if it came from x, y, and z instead of from file_a, file_b, and file_c. mesg [option] Change the ability of other users to send write messages to your terminal. With no options, display the permission status. 
Forbid write messages. Allow write messages (the default). mimencode [options] [filename] [-o output_file] Translate to and from MIME encoding formats, the proposed standard for Internet multimedia mail formats. By default, mimencode reads standard input and sends a base64-encoded version of the input to standard output. Use the (default) base64 encoding. Send output to the named file rather than to standard output. Translate decoded CRLF sequences into the local newline convention during decoding and do the reverse during encoding; meaningful only when the default base64 encoding is in effect. Use the quoted-printable encoding instead of base64. Decode the standard input rather than encode it. mkdir [options] directories Create one or more directories. You must have write permission in the parent directory in order to create a directory. See also rmdir. The default mode of the new directory is 0777, modified by the system or user's umask. Set the access mode for new directories. See chmod for an explanation of acceptable formats for mode. Create intervening parent directories if they don't exist. Print a message for each directory created. Create a read-only directory named personal: mkdir -m 444 personal The following sequence: mkdir work; cd work mkdir junk; cd junk mkdir questions; cd ../.. can be accomplished by typing this: mkdir -p work/junk/questions mke2fs [options] device [blocks] mkfs.ext2 [options] device [blocks] System administration command. Format device as a Linux Second Extended Filesystem. You may specify the number of blocks on the device or allow mke2fs to guess. Specify block size in bytes. Scan device for bad blocks before execution. Specify fragment size in bytes. Create an inode for each bytes-per-inode of space. bytes-per-inode must be 1024 or greater; it is 4096 by default. Consult filename for a list of bad blocks. Reserve percentage percent of the blocks for use by privileged users. 
Write only superblock and group descriptors; suppress writing of inode table and block and inode bitmaps. Useful only when attempting to salvage damaged systems. mkfs [options] [fs-options] filesys [blocks] System administration command. Construct a filesystem on a device (such as a hard disk partition). filesys is either the name of the device or the mountpoint. mkfs is actually a frontend that invokes the appropriate version of mkfs according to a filesystem type specified by the -t option. For example, a Linux Second Extended Filesystem uses mkfs.ext2 (which is the same as mke2fs); MS-DOS filesystems use mkfs.msdos. fs-options are options specific to the filesystem type. blocks is the size of the filesystem in 1024-byte blocks. Produce verbose output, including all commands executed to create the specific filesystem. Tells mkfs what type of filesystem to construct. These options must follow generic options and not be combined with them. Most filesystem builders support these three options: Check for bad blocks on the device before building the filesystem. Read the file file for the list of bad blocks on the device. Produce verbose output. mkfs.minix [options] device size System administration command. Creates a MINIX filesystem. See mkfs. mklost+found System administration command. Create a lost+found directory in the current working directory. Intended for Linux Second Extended Filesystems. mkraid [options] devices System administration command. Set up RAID array devices as defined in the /etc/raidtab configuration file. mkraid can be used to initialize a new array or upgrade older RAID device arrays for the new kernel. Initialization will destroy any data on the disk devices used to create the array. Use file instead of /etc/raidtab. Initialize the devices used to create the RAID array even if they currently have data. Print a usage message and then exit. Upgrade an older array to the current kernel's RAID version. Preserve data on the old array. 
mkswap [option] device [size] System administration command. Create swap space on device. You may specify its size in blocks; each block is a page of about 4KB. Check for bad blocks before creating the swap space. modprobe [options] [modules] System administration command. With no options, attempt to load the specified module, as well as all modules on which it depends. If more than one module is specified, attempt to load further modules only if the previous module failed to load. Load all listed modules, not just the first one. List all existing modules. This option may be combined with -t to specify a type of module, or you may include a pattern to search for. Remove the specified modules, as well as the modules on which they depend. Load only a specific type of module. Consult /etc/conf.modules for the directories in which all modules of that type reside. Information about modules: which ones depend on others, which directories correspond to particular types of modules. Programs that modprobe relies on. more [options] [files] Display the named files on a terminal, one screenful at a time. See less for an alternative to more. Some commands can be preceded by a number. Begin displaying at line number num. Set screen size to number lines. Begin displaying two lines before pattern. Repaint screen from top instead of scrolling. Display the prompt "Hit space to continue, Del to abort" in response to illegal commands; disable bell. Count logical rather than screen lines. Useful when long lines wrap past the width of the screen. Ignore form-feed (Ctrl-L) characters. Page through the file by clearing each window instead of scrolling. This is sometimes faster. Force display of control characters, in the form ^x. Squeeze; display multiple blank lines as one. Suppress underline characters. All commands in more are based on vi commands. An argument can precede many commands. Display next screen of text. Display next lines of text, and redefine a screenful to lines lines. 
Default is one screenful. Display next lines of text, and redefine a screenful to lines lines. Default is one line. Scroll lines of text, and redefine scroll size to lines lines. Default is one line. Quit. Skip forward one line of text. Skip forward one screen of text. Skip backward one screen of text. Return to point where previous search began. Print number of current line. Search forward for pattern, skipping to the numth occurrence if an argument is specified. Repeat last search, skipping to numth occurrence if an argument is specified. Invoke shell and execute cmd in it. Invoke vi editor on the file, at the current line. Print current filename and line number. Reexecute previous command. Page through file in "clear" mode, and display prompts: more -cd file Format doc to the screen, removing underlines: nroff doc | more -u View the manpage for the grep command; begin near the word "BUGS" and compress extra whitespace: man grep | more +/BUGS -s mount [options] [special-device] [directory] System administration command. Mount a file structure. mount announces to the system that a removable file structure is present on special-device. The file structure is mounted on directory, which must already exist and should be empty; it then becomes the name of the root of the newly mounted file structure. If mount is invoked with no arguments, it displays the name of each mounted device, the directory on which it is mounted, whether the file structure is read-only, and the date it was mounted. 
Allow mounting with the -a option. Use all options' default values (async, auto, dev, exec, nouser, rw, suid). Interpret any special devices that exist on the filesystem. Allow binaries to be executed. Do not allow mounting via the -a option. Do not interpret any special devices that exist on the filesystem. Do not allow the execution of binaries on the filesystem. Do not acknowledge any suid or sgid bits. Only privileged users will have access to the filesystem. Expect the filesystem to have already been mounted, and remount it. Allow read-only access to the filesystem. Allow read/write access to the filesystem. Acknowledge suid and sgid bits. Read input and output to the device synchronously. Allow unprivileged users to mount the filesystem. Note that the defaults on such a system will be nodev, noexec, and nosuid, unless otherwise specified. Specify how strictly to regulate the integration of an MS-DOS filesystem when mounting it. Specify method by which to convert files on MS-DOS and ISO 9660 filesystems. Turn debugging on for MS-DOS and ext2fs filesystems. Specify action to take when encountering an error. ext2fs filesystems only. Mount filesystem read-only. Specify the filesystem type. Possible values are: minix, ext, ext2, xiafs, hpfs, msdos, umsdos, vfat, proc, nfs, iso9660, smbfs, ncpfs, affs, ufs, romfs, sysv, xenix, and coherent. Note that ext and xiafs are valid only for kernels older than 2.1.21 and that sysv should be used instead of xenix and coherent. Display mount information verbosely. Mount filesystem read/write. This is the default. List of filesystems to be mounted and options to use when mounting them. List of filesystems that are currently mounted and the options with which they were mounted. rpc.mountd [options] NFS/NIS command. NFS mount request server. mountd reads the file /etc/exports to determine which filesystems are available for mounting by which machines. 
It also provides information as to what filesystems are mounted by which clients. See also nfsd. Debug mode. Output all debugging information via syslogd. Read the export permissions from file instead of /etc/exports. Accept even those mount requests that enter via a non-reserved port. Accept requests from any host that sends them. Allow re-exportation of imported filesystems. Information about mount permissions. mv [option] sources target Move or rename files and directories. The source (first column) and target (second column) determine the result (third column): File name (nonexistent) Rename file to name. File Existing file Overwrite existing file with source file. Directory name (nonexistent) Rename directory to name. Directory Existing directory Move directory to be a subdirectory of existing directory. One or more files Existing directory Move files to directory. Back up files before removing. Force the move, even if target file exists; suppress messages about restricted access modes. Query user before removing files. Do not remove a file or link if its modification date is the same as or newer than that of its replacement. Print the name of each file before moving it. Override the SIMPLE_BACKUP_SUFFIX environment variable, which determines the suffix used for making simple backup files. If the suffix is not set either way, the default is a tilde (~). Override the VERSION_CONTROL environment variable, which determines the type of backups made. The acceptable values for version control are: Make numbered backups of files that already have them, simple backups of the others. The default. named [options] TCP/IP command. Internet domain name server. named is used by resolver libraries to provide access to the Internet distributed naming database. With no arguments, named reads /etc/named.boot for any initial data and listens for queries on a privileged port. See RFC 1034 and RFC 1035 for more details. There are several named binaries available at different Linux archives, displaying various behaviors. 
If your version doesn't behave like the one described here, never fear -- it should have come with documentation. Print debugging information. debuglevel is a number indicating the level of messages printed. Use port as the port number. Default is 53. File to use instead of named.boot. The -b is optional and allows you to specify a filename that begins with a leading dash. Read when named starts up. namei [options] pathname [pathname . . .] Follow a pathname until a terminal point is found (e.g., a file, directory, char device, etc.). If namei finds a symbolic link, it shows the link and starts following it, indenting the output to show the context. namei prints an informative message when the maximum number of symbolic links this system can have has been exceeded. Show the mode bits of each file type in the style of ls; for example: "rwxr-xr-x". Show mountpoint directories with a D, rather than a d. For each line of output, namei prints the following characters to identify the file types found: A regular file An error of some kind A block device A character device A directory The pathname namei is currently trying to resolve A symbolic link (both the link and its contents are output) A socket netdate [options] [protocol] hostname... TCP/IP command. Set the system time according to the time provided by one of the hosts in the list hostname. netdate tries to ascertain which host is the most reliable source. When run by an unprivileged user, netdate reports the current time, without attempting to set the system clock. You may specify the protocol -- udp (the default) or tcp -- once, or several times for various hosts. The most reliable host is chosen from the list by sorting the hosts into groups based on the times they return when questioned. The first host from the largest group is then polled a second time. The differences between its time and the local host's time on each poll are recorded. These two differences are then compared. 
If the gap between them is greater than time (the default is five seconds), the host is rejected as inaccurate. Display the groups into which hosts are sorted. netstat [options] TCP/IP command. Show network status. For all active sockets, print the protocol, the number of bytes waiting to be received, the number of bytes to be sent, the port number, the remote address and port, and the state of the socket. Show the state of all sockets, not just active ones. Display information continuously, refreshing once every second. Include statistics for network devices. Show network addresses as numbers. Include additional information such as username. Show routing tables. List only TCP sockets. List only UDP sockets. List only raw sockets. List only Unix domain sockets. newgrp [group] Change user's group identification to the specified group. If no group is specified, change to the user's login group. The new group is then used for checking permissions. newusers file System administration command. Create new user accounts or update existing ones in a batch, reading lines in /etc/passwd format from file. rpc.nfsd [options] System administration command. Daemon that starts the NFS server daemons that handle client filesystem requests. These daemons are user-level processes. The options are exactly the same as in mountd. nice [option] [command [arguments]] Execute a command (with its arguments) with lower priority (i.e., be "nice" to other users). Run command with niceness incremented by adjustment (1-19); default is 10. A privileged user can raise priority by specifying a negative adjustment (e.g., -5). nm [options] [objfiles] Print the symbol table (name list) in alphabetical order for one or more object files. If no object files are specified, perform operations on a.out. Output includes each symbol's value, type, size, name, and so on. A key letter categorizing the symbol can also be displayed. If no object file is given, use a.out. Print debugger symbols. Specify output format (bsd, sysv, or posix). Default is bsd. Print external symbols only. Sort the external symbols by address. Don't sort the symbols at all. Sort in reverse, alphabetically or numerically. Sort by size. 
Report only the undefined symbols. Print input filenames before each symbol. Translate low-level symbol names into readable versions. Print dynamic, not normal, symbols. Useful only when working with dynamic objects (some kinds of shared libraries, for example). Same as -f posix. Print nm's version number on standard error. nohup command [arguments] Run the named command with its optional command arguments, continuing to run it even after you log out (make command immune to hangups; i.e., no hangup). TTY output is appended to the file nohup.out by default. Modern shells preserve background commands by default; this command is necessary only in the original Bourne shell. nslookup [-option...] [host_to_find | - [server ]] TCP/IP command. Query Internet domain name servers. nslookup has two modes: interactive and noninteractive. Interactive mode allows the user to query name servers for information about various hosts and domains or to print a list of hosts in a domain. It is entered either when no arguments are given (default name server will be used) or when the first argument is a hyphen and the second argument is the hostname or Internet address of a name server. Noninteractive mode is used to print just the name and requested information for a host or domain. It is used when the name of the host to be looked up is given as the first argument. Any of the keyword=value pairs listed under the interactive set command can be used as an option on the command line by prefacing the keyword with a -. The optional second argument specifies a name server. All of the options under the set interactive command can be entered on the command line, with the syntax -keyword[=value]. Exit nslookup. Connect with finger server on current host, optionally creating or appending to filename. Print a brief summary of commands. Look up information for host using the current default server or using server if specified. 
List information available for domain, optionally creating or appending to filename. The -a option lists aliases of hosts in the domain. -h lists CPU and operating system information for the domain. -d lists all contents of a zone transfer. Change the default server to domain. Use the initial server to look up information about domain. Change default server to the server for the root of the domain namespace. Change the default server to domain. Use the current default server to look up information about domain. Change state information affecting the lookups. Valid keywords are: Print the current values of the frequently used options to set. Set query class to IN (Internet), CHAOS, HESIOD, or ANY. Default is IN. Change default domain name to name. Turn debugging mode on or off. Turn exhaustive debugging mode on or off. Append default domain name to every lookup. Ignore truncate error. Tell name server to query or not query other servers if it does not have the information. With defname, search for each name in parent domains of current domain. Always use a virtual circuit when sending requests to the server. Connect to name server using port. See type=value. Set number of retries to number. Change name of root server to host. Set search list to domain. Change timeout interval for waiting for a reply to number seconds. Change type of information returned from a query to one of: Sort and list output of previous ls command(s) with more. passwd [user] Create or change a password associated with a user name. Only the owner or a privileged user may change a password. Owners need not specify their user name. paste [options] files Merge corresponding lines of one or more files into tab- separated vertical columns. See also cut, join, and pr. Replace a filename with the standard input. Separate columns with char instead of a tab. Note: you can separate columns with different characters by supplying more than one char. Merge lines from one file at a time. 
Create a three-column file from files x, y, and z: paste x y z > file List users in two columns: who | paste - - Merge each pair of lines into one line: paste -s -d"\t\n" list patch [options] [original [patchfile]] Apply the patches specified in patchfile to original. Replace the original with the new, patched version; move the original to original.orig or original~. Apply patches again, with different options or a different original file. Back up the original file. Back up the original file in original.suffix. Prepend prefix to the backup filename. Interpret patchfile as a context diff. cd to directory before beginning patch operations. Mark all changes with: #ifdef string #endif Treat the contents of patchfile as ed commands. If patch creates any empty files, delete them. Force all changes, even those that look incorrect. Skip patches if the original file does not exist; force patches for files with the wrong version specified; assume patches are never reversed. Read patch from file instead of stdin. Skip patches if the original file does not exist. Specify the maximum number of lines that may be ignored (fuzzed over) when deciding where to install a hunk of code. The default is 2. Meaningful only with context diffs. Ignore whitespace while pattern matching. Interpret patch file as a normal diff. Ignore patches that appear to be reversed or to have already been applied. Print output to file. Specify how much of preceding pathname to strip. A num of 0 strips everything, leaving just the filename. 1 strips the leading /; each higher number after that strips another directory from the left. Place rejects (hunks of the patch file that patch fails to place within the original file) in file. Default is original.rej. Do a reverse patch: attempt to undo the damage done by patching with the old and new files reversed. Suppress commentary. Interpret patch file as a unified context diff. Specify method for creating backup files (overridden by -B): Make numbered backups. 
Back up files according to preexisting backup schemes, with simple backups as the default. This is patch's default behavior. Make simple backups. Specify the directory for temporary files, /tmp by default. Suffix to append to backup files instead of .orig or ~. Specify what method to use in naming backups (see -V). pathchk [option] filenames Determine validity and portability of filenames. Specifically, determine if all directories within the path are searchable and if the length of the filenames is acceptable. Check portability for all POSIX systems. /usr/sbin/rpc.pcnfsd NFS/NIS command. NFS authentication and print request server. pcnfsd is an RPC server that supports ONC clients on PC systems. pcnfsd reads the configuration file /etc/pcnfsd.conf, if present, then services RPC requests directed to program number 150001. The current release of the pcnfsd daemon (as of this printing) supports both Version 1 and Version 2 of the pcnfsd protocol. Requests serviced by pcnfsd fall into three categories: authentication, printing, and other. Only the authentication and printing services have administrative significance. When pcnfsd receives a PCNFSD_AUTH or PCNFSD2_AUTH request, it will log in the user by validating the username and password, returning the corresponding user ID, group IDs, home directory, and umask. At this time, pcnfsd will also append a record to the wtmp database. If you do not want to record PC logins in this way, add the line: wtmp off to the /etc/pcnfsd.conf file. To handle print requests, pcnfsd uses a spool directory that is exported by NFS. pcnfsd creates a subdirectory for each of its clients; the parent directory is normally /usr/spool/pcnfs and the subdirectory is the hostname of the client system. If you want to use a different parent directory, add the line: spooldir path to the /etc/pcnfsd.conf file. Once a client has mounted the spool directory and has transferred print data to a file in this directory, pcnfsd will issue a PCNFSD_PR_START or PCNFSD2_PR_START request.
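The wtmp and spooldir directives described above can be collected in one file; the following /etc/pcnfsd.conf fragment is purely illustrative (the spool path is an example, not a required value):

```
# Illustrative /etc/pcnfsd.conf fragment.
# Do not record PC logins in the wtmp database:
wtmp off
# Use a non-default parent directory for per-client
# print spool subdirectories (path is an example only):
spooldir /export/pcnfs
```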
pcnfsd constructs a command based on the printing services of the server operating system and executes the command using the identity of the PC user. Every print request includes the name of the printer to be used. pcnfsd interprets a printer as either a destination serviced by the system print spooler or as a virtual printer. Virtual printers are defined by the following line in the /etc/pcnfsd.conf file: printer name alias-for command where name is the name of the printer you want to define, alias-for is the name of a real printer that corresponds to this printer, and command is a command that will be executed whenever a file is printed on name. perl A powerful text-processing language that combines many of the most useful features of shell programs, C, awk, and sed, as well as adding extended features of its own. For more information, see Learning Perl by Randal L. Schwartz and Programming Perl, 2d ed., by Larry Wall, Tom Christiansen, and Randal L. Schwartz. pidof [options] programs Display the process IDs of the listed program or programs. pidof is actually a symbolic link to killall5. Omit all processes with the specified process ID. You may list several process IDs. Return a single process ID. Also return process IDs of shells running the named scripts. ping [options] host TCP/IP command. Confirm that a remote host is online and responding. Stop after sending (and receiving) count ECHO_RESPONSE packets. Set SO_DEBUG option on socket being used. Flood ping: output packets as fast as they come back or 100 times per second, whichever is more. This can be very hard on a network and should be used with caution; only a privileged user may use this option. Wait wait seconds between sending each packet. Default is to wait 1 second between each packet. This option is incompatible with the -f option. Send preload number of packets as fast as possible before falling into normal mode of behavior. Bypass the normal routing tables and send directly to a host on an attached network. Specify number of data bytes to be sent.
Default is 56, which translates into 64 ICMP data bytes when combined with the 8 bytes of ICMP header data. Verbose -- list ICMP packets received other than ECHO_RESPONSE. in.pop2d System administration command. Allow users to connect to port 109 and request the contents of their mailbox in /var/spool/mail. pop2d requires a username and password before providing mail and can serve individual messages. See also pop3d. Each command must be entered on a separate line. Prompt for username and password. Open /var/spool/mail/$USER. Open /var/spool/pop/$USER. Read a message. Retrieve a message. Save the last message retrieved and move to next message. Delete the last message retrieved and move to next message. Save the last message retrieved and expect to resend it. in.pop3d System administration command. pop3d is a more recent version of pop2d. It behaves similarly but accepts a slightly different list of commands. Prompt for name. Prompt for password. Display the number of messages in the mailbox and its total size. Display individual messages' sizes. Delete a message. Perform a null operation. Print the number of the most recently received message that has been read. Reset: clear all deletion marks. Print the first part of a message. rpc.portmap [option] NFS/NIS command. RPC program number to IP port mapper. portmap is a server that converts RPC program numbers to IP port numbers. It must be running in order to make RPC calls. When an RPC server is started, it tells portmap what port number it is listening to and what RPC program numbers it is prepared to serve. When a client wishes to make an RPC call to a given program number, it first contacts portmap on the server machine to determine the port number where RPC packets should be sent. portmap must be the first RPC server started. Run portmap in debugging mode. Does not allow portmap to run as a daemon. powerd device System administration command.
Monitor the connection to an uninterruptible power supply, which the user must specify via device. When power goes low, signal init to run its powerwait and powerfail entries; when full power is restored, signal init to run its powerokwait entries. pppd [options] [tty] [speed] System administration command. The Point-to-Point Protocol daemon; manage a PPP network connection over the given serial line at the given speed. Connect as specified by command, which may be a binary or shell command. Increment the debugging level. Specify a domain name of d. Escape all characters in character-list, which should be a comma-separated list of hex numbers. You cannot escape 0x20-0x3f or 0x5e. Consult file for options. Allow only pppd to access the device. Refuse packets of more than bytes bytes. Specify a machine name for the local system. Specify netmask (for example, 255.255.255.0). Do not exit if peer does not respond to attempts to initiate a connection. Instead, wait for a valid packet from the peer. Send no packets until after receiving one. Specify the local and/or remote interface IP addresses, as hostnames or numeric addresses. pr [files] Convert a text file or files to a paginated, columned version, with headers. Convert tabs (or tab-chars) to spaces. If width is specified, convert tabs to width characters (default is 8). Separate pages with form feeds, not newlines. Use header for the header instead of the filename. Replace spaces with tabs on output. Can specify alternative tab character (default is tab) and width (default is 8). Merge full lines; ignore -W if set. Set page length to lines (default 66). If lines is less than 10, omit headers and footers. Print all files, one file per column. Number columns, or, with the -m option, number lines. Append delimiter to each number (default is a tab) and limit the size of numbers to digits (default is 5). Set left margin to width. Continue silently when unable to open an input file. Separate columns with char (default is tab). Set the page width for multi-column output. Default is 72. Set the page width to always be page_width characters. Default is 72. praliases [option] System administration command.
praliases prints the current sendmail mail aliases. (Usually defined in the /etc/aliases or /etc/aliases.db file.) Read the aliases from the specified file instead of sendmail's default alias files. printenv [variables] Print values of all environment variables or, optionally, only the specified variables. printf formats [strings] Print strings using the specified formats. formats can be ordinary text characters, C-language escape characters, or more commonly, a set of conversion arguments listed here. Print the next string. Print the nth string. Print the next string, using a field that is m characters wide. Optionally, limit the field to print only the first n characters of string. Strings are right-adjusted unless the left-adjustment flag, -, is specified. printf '%s %s\n' "My files are in" $HOME printf '%-25.15s %s\n' "My files are in" $HOME ps [options] Report on active processes. Note that you do not need to include a - before options. In options, list arguments should either be separated by commas or be put in double quotes. In comparing the amount of output produced, note that e prints more than a and l prints more than f. Include only specified processes, which are given in a comma-delimited list. List all processes. Consult task_struct for command name. Include environment. "Forest" family tree format. Suppress header. Jobs format. Produce a long listing. Memory format. Print user IDs and WCHAN numerically. Exclude processes that are not running. Signal format. Similar to O, but designed to protect multiletter sort keys. See the later list, "Sort keys". Display only processes running on tty. Include username and start time. vm format. Wide format. Don't truncate long lines. Include processes without an associated terminal. Sort processes. (See the following list, "Sort keys.") Return key to default direction. Reverse default direction on key. Include child processes' CPU time and page faults. Name of executable. Whole command line. Flags. 
Group ID of process. Group ID of associated tty. Cumulative user time. Cumulative system time. User time. System time. Number of minor page faults. Number of major page faults. Total minor page faults. Total major page faults. Session ID. Process ID. Parent's process ID. Resident set size. Resident pages. Kilobytes of memory used. Number of shared pages. tty. Process's start time. User ID. User's name. Bytes of VM used. Kernel's scheduling priority. Process's scheduling priority. A higher number indicates lower priority. Process's nice value. A higher number indicates less CPU time. Size of virtual image. Resident set size (amount of physical memory), in kilobytes. Kernel function in which process resides. Status: Runnable Stopped Asleep and not interruptible Asleep Zombie No resident pages (second field) Positive nice value (third field) Associated tty. Number of major page faults. Size of resident text. Amount of swap used, in kilobytes. Shared memory. psupdate [mapfile] System administration command. Update the ps database (on some systems /boot/psdatabase; on others, /etc/psdatabase), which contains information about the kernel image system map file. If no mapfile is specified, psupdate uses the default (which is either /usr/src/linux/vmlinux or /usr/src/linux/tools/zSystem, depending on the distribution). pwck [option] [files] System administration command. Remove corrupt or duplicate entries in the /etc/passwd and /etc/shadow files. pwck will prompt for a "yes" or "no" before deleting entries. If the user replies "no," the program will exit. Alternate passwd and shadow files can be checked. If correctable errors are found, the user will be encouraged to run the usermod command. Noninteractive mode. Don't prompt for input, and delete no entries. Return appropriate exit status. One or more bad password entries found. Could not open password files. Could not lock password files. Could not write password files. pwconv pwunconv System administration command.
Convert unshadowed entries in /etc/passwd into shadowed entries in the /etc/shadow file. Replace the encrypted password in /etc/passwd with an x. Shadowing passwords keeps them safe from password cracking programs. pwconv creates additional expiration information for the /etc/shadow file from entries in your /etc/login.defs file. If you add new entries to the /etc/passwd file, you can run pwconv again to transfer the new information to /etc/shadow. Already shadowed entries are ignored. pwunconv restores the encrypted passwords to your /etc/passwd file and removes the /etc/shadow file. Some expiration information is lost in the conversion. pwd Print the full pathname of the current working directory. See also the dirs shell command, built in to both bash and csh/tcsh. quota [options] [user|group] Display disk usage and total space allowed for a designated user or group. With no argument, the quota for the current user is displayed. This command reports quotas for all filesystems listed in /etc/fstab. Given with a user argument, display the quotas for the groups of which the user is a member, instead of the user's quotas. Display information only for filesystems in which the user is over quota. The default behavior. When used with -g, display both user and group quota information. Display quotas for filesystems even if no storage is currently allocated. raidstart [options] [devices] System administration command. Start existing RAID devices. Apply command to all devices defined in the RAID configuration file. Print usage message and exit. ramsize [option] [image [size [offset]]] System administration command. If no options are specified, print usage information for the RAM disk. The pair of bytes at offset 504 in the kernel image normally specify the RAM size; with a kernel image argument, print the information found at that offset. To change that information, specify a new size (in kilobytes). You may also specify a different offset. Note that rdev -r is the same as ramsize. Same as specifying an offset as an argument.
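The pwd entry above is easy to demonstrate; this sketch runs cd in a subshell so the caller's working directory is untouched:

```shell
# Change directory inside a subshell and print the resulting
# working directory; the parent shell's directory is unaffected.
(
  cd / || exit 1
  pwd            # prints: /
)
```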
ranlib filename Generate an index for archive file filename. Same as running ar -s. rarp [options] System administration command. Administer the Reverse Address Resolution Protocol table (usually /proc/net/rarp). Show all entries. If hostname is specified, show only the entries relevant to hostname, which may be a list. Remove the entries relevant to hostname, which may be a list. Add a new entry for hostname, with the hardware address hw_addr. Check only for type entries when consulting or changing the table. type may be ether (the default) or ax25. rcp [options] file1 file2 rcp [options] file ... directory Copy files between two machines. Each file or directory is either a remote filename of the form rname@rhost:path or a local filename. Attempt to get tickets for remote host; query krb_realmofhost to determine realm. Preserve modification times and modes of the source files. If any of the source files are directories, rcp copies each subtree rooted at that name. The destination must be a directory. Turns on DES encryption for all data passed by rcp. rdate [options] [host...] TCP/IP command. Retrieve the date and time from a host or hosts on the network and optionally set the local system time. Print the retrieved dates. Set the local system time from the host; must be specified by root. rdev [options] [image [value [offset]]] System administration command. If no arguments are specified, display a line, in /etc/mtab syntax, that describes the root filesystem. Otherwise, change the values of the bytes in the kernel image that describe the RAM disk size (by default located at decimal byte offset 504 in the kernel), VGA mode (default 506), and root device (default 508). You must specify the kernel image to change and may specify a new value and a different offset. Same as specifying an offset as an argument. The offset is given in decimal. Behave like ramsize. Behave like swapdev. Behave like vidmode. Behave like rootflags. 
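Since the ranlib entry above notes that it is the same as ar -s, the mechanics can be sketched with a throwaway archive (the member here is plain text purely for illustration; real archives normally hold .o object files):

```shell
# Create a one-member archive, index it with ranlib,
# then list its contents.
tmp=$(mktemp -d)
cd "$tmp"
echo 'hello' > member.txt
ar rc libdemo.a member.txt    # build the archive
ranlib libdemo.a              # generate the index (equivalent to ar -s)
ar t libdemo.a                # lists: member.txt
cd / && rm -rf "$tmp"
```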
rdist [options] [names] System administration command. Remote file distribution client program. rdist maintains identical copies of files over multiple hosts. It reads commands from a file named distfile to direct the updating of files and/or directories. An alternative distfile can be specified with the -f option or the -c option. Do not update filesystems with fewer than num bytes free. Interpret the arguments as a small distfile, where login is the user to log in as, host is the destination host, name is the local file to transfer, and dest is the remote name where the file should be installed. Define var to have value. This option defines or overrides variable definitions in the distfile. Set the variable var to value. Read input from file (by default, distfile). If file is -, read from standard input. Specify logging options on the local machine. Update only machine. May be specified multiple times for multiple machines. Suppress normal execution. Instead, print the commands that would have been executed. Specify one or more options, which must be comma-separated. Suppress operations on files that reside on NFS filesystems. Check filesystem to be sure it is not read-only before attempting to perform updates. Do not update files that exist on the local host but are symbolic links on the remote host. Compare files; use this comparison rather than age as the criteria for determining which files should be updated. Interpret symbolic links, copying the file to which the link points instead of creating a link on the remote machine. Ignore links that appear to be unresolvable. Do not update a file's group ownership unless the entire file needs updating. Do not update file mode unless the entire file needs updating. Do not update file ownership unless the entire file needs updating. Suppress recursive descent into directories. Suppress rdist of executables that are in a.out format. Check group ownership by group ID instead of by name. 
Check file ownership by user ID instead of by name. Quiet mode; do not print commands as they execute. Remove files that exist on the remote host but not the local host. Save updated files in name.old. Print a list of all files on the remote machine that are out of date, but do not update them. Preserve directory structure by creating subdirectories on the remote machine. For example, if you rdist the file /foo/bar into the directory /baz, it would produce the file /baz/foo/bar, instead of the default, /baz/bar. Do not update files that are younger than the master files. Specify the path to search for rdistd on the remote machine. Specify the timeout period (default 900 seconds) after which rdist will sever the connection if the remote server has not yet responded. Specify the minimum number of inodes that rdist requires. Execute all commands sequentially, without forking. Specify logging options on the remote machine. Do not allow more than num child rdist processes to run simultaneously. Default is 4. Specify path to rsh on the local machine. rdistd options System administration command. Start the rdist server. Note that you must specify the -S option, unless you are simply querying for version information with -V. Start the server. Display the version number and exit immediately. reboot [options] System administration command. Close out filesystems, shut down the system, then reboot the system. Because this command immediately stops all processes, it should be run only in single-user mode. If the system is not in runlevel 0 or 6, reboot calls shutdown -nf. Call reboot even when shutdown would normally be called. renice [priority] [options] [target] Control the scheduling priority of various processes as they run. May be applied to a process, process group, or user (target). A privileged user may alter the priority of other users' processes. 
priority must, for ordinary users, lie between 0 and the system limit PRIO_MAX (normally 20), with a higher number indicating increased niceness. A privileged user may set a negative priority, as low as PRIO_MIN, to speed up processes. Specify number by which to increase current priority of process, rather than an absolute priority number. Specify number by which to decrease current priority of process, rather than an absolute priority number. Interpret target parameters as process group IDs. Interpret target parameters as process IDs (default). Interpret target parameters as usernames. reset Clear screen (reset terminal). rev [file] Reverse the lines of a file onto standard output. The order of characters on each line is also reversed. If no file is specified, rev reads from standard input. rexecd command-line TCP/IP command. Server for the rexec routine, providing remote execution facilities with authentication based on usernames and passwords. rexecd is started by inetd and must have an entry in inetd's configuration file, /etc/inetd.conf. When rexecd receives a service request, the following protocol is initiated: The server reads characters from the socket up to a null byte. The resulting string is interpreted as an ASCII number, base 10. If the number received in step 1 is nonzero, it is interpreted as the port number of a secondary stream to be used for stderr. A second connection is then created to the specified port on the client's machine. A null-terminated username of at most 16 characters is retrieved on the initial socket. A null-terminated, unencrypted password of at most 16 characters is retrieved on the initial socket. A null byte is returned on the connection associated with stderr, and the command line is passed to the normal login shell of the user. The shell inherits the network connections established by rexecd. Possible error messages include: Name is longer than 16 characters. Password is longer than 16 characters. Command passed is too long. No password file entry for the username exists. Wrong password was supplied.
chdir to home directory failed. fork by server failed. User's login shell could not be started. rlogin rhost [options] Remote login. rlogin connects the terminal on the current local host system to the remote host system rhost. The remote terminal type is the same as your local terminal type. The terminal or window size is also copied to the remote system if the server supports it. Allow an 8-bit input data path at all times. Specify escape character c (default is ~). Attempt to get tickets from remote host, requesting them in the realm as determined by krb_realmofhost. Specify a different username for the remote login. Default is the same as your local username. Turns on DES encryption for all data passed via the rlogin session. Do not interpret any character as an escape character. Suppress all Kerberos authentication. Allow rlogin session to be run without any output postprocessing (i.e., run in litout mode). rlogind [options] TCP/IP command. Server for the rlogin program, providing a remote login facility, with authentication based on privileged port numbers from trusted hosts. rlogind is invoked by inetd when a remote login connection is requested and executes the following protocol: The server checks the client's source port. If the port is not in the range 512-1023, the server aborts the connection. The server checks the client's source address and requests the corresponding hostname. If the hostname cannot be determined, the dot-notation representation of the host address is used. The login process propagates the client terminal's baud rate and terminal type, as found in the environment variable, TERM. Verify hostname. Do not authenticate hosts via a nonroot .rhosts file. Suppress keep-alive messages. rm [options] files Delete one or more files. To remove a file, you must have write permission in the directory that contains the file, but you need not have permission on the file itself.
If you do not have write permission on the file, you will be prompted (y or n) to override. Remove directories, even if they are not empty. Available only to a privileged user. Remove write-protected files without prompting. Prompt for y (remove the file) or n (do not remove the file). If file is a directory, remove the entire directory and all its contents, including subdirectories. Be forewarned: use of this option can be dangerous. Turn on verbose mode. (rm prints the name of each file before removing it.) Mark the end of options. Use this when you need to supply a filename beginning with -. rmail user... TCP/IP command. Handle remote mail received via uucp, collapsing From lines in the form generated by mail into a single line of the form return-path!sender and passing the processed mail onto sendmail. rmail is explicitly designed for use with uucp and sendmail. rmdir [options] directories Delete the named directories (not the contents). directories are deleted from the parent directory and must be empty (if not, rm -r can be used instead). See also mkdir. Ignore failure to remove directories that are not empty. Remove directories and any intervening parent directories that become empty as a result; useful for removing subdirectory trees. Turn on verbose mode; print message for each directory as it is processed. rmmod [option] modules System administration command. Unload a module or list of modules from the kernel. This command is successful only if the specified modules are not in use and no other modules are dependent on them. Recursively remove stacked modules (all modules that use the specified module). rootflags [option] image [flags [offset]] System administration command. Sets flags for a kernel image. If no arguments are specified, print flags for the kernel image. flags is a 2-byte integer located at offset 498 in a kernel image. Currently the only effect of flags is to mount the root filesystem in read-only mode if flags is non-zero. 
You may change flags by specifying the kernel image to change, the new flags, and the byte-offset at which to place the new information (the default is 498). Note that rdev -R is a synonym for rootflags. If LILO is used, rootflags is not needed. flags can be set from the LILO prompt during a boot. route [option] [command] TCP/IP command. Manually manipulate the routing tables normally maintained by routed. route accepts two commands: add, to add a route, and del, to delete a route. The two commands have the following syntax: add [-net | -host] address [gw gateway] [netmask mask] [mss tcp-mss] [dev device] del address address is treated as a plain route unless -net is specified or address is found in /etc/networks. -host can be used to specify that address is a plain route whether or not it is found in /etc/networks. The keyword default means to use this route for all requests if no other route is known. You can specify the gateway through which to route packets headed for that address, its netmask, TCP mss, and the device with which to associate the route. Only a privileged user may modify the routing tables. If no command is specified, route prints the routing tables. Show numerical addresses; do not look up hostnames. (Useful if DNS is not functioning properly.) routed [options] [logfile] TCP/IP command. Network routing daemon; maintain the kernel's routing tables using the Routing Information Protocol. Debugging mode. Log additional information to the logfile. Offer a route to the default destination. Opposite of -s option. Force routed to supply routing information, whether it is acting as an internetwork router or not. Stop routed from going into background and releasing itself from the controlling terminal, so that interrupts from the keyboard will kill the process. rpcgen [options] file Parse file, which should be written in the RPC language. Produce all files (client and server). Produce SunOS 4.1-compatible code. Produce SVR4-compatible code. Create XDR routines. Cannot be used with other options. Produce ANSI C code (default). Specify the number of seconds of inactivity after which server code exits. A secs of -1 prevents the program from ever exiting. Produce client code.
Cannot be used with other options. Produce server code only, suppressing creation of a "main" routine. Cannot be used with other options. New style. Allow multiple arguments for procedures. Not necessarily backward compatible. Print output to file or standard output. Create skeleton server code only. Create RPC dispatch table. Cannot be used with other options. Include support for RPC dispatch tables. rpcinfo [options] [host] [program] [version] TCP/IP command. Report RPC information. Make an RPC broadcast to the specified program and version, using UDP, and report all hosts that respond. Delete the specified version of program's registration. Can be executed only by the user who added the registration or a privileged user. Use portnum as the port number for the -t and -u options, instead of the port number given by the portmapper. Probe the portmapper on host and print a list of all registered RPC programs. If host is not specified, it defaults to the value returned by hostname. Make an RPC call to program on the specified host, using TCP, and report whether a response was received. Make an RPC call to program on the specified host, using UDP, and report whether a response was received. For example, to find hosts running the Network Information Service (NIS), use: $ rpcinfo -b ypserv version | uniq where version is the current NIS version obtained from the results of the -p switch earlier in this list. rpm [options] The Red Hat Package Manager. A freely available packaging system for software distribution and installation. RPM packages are built, installed, and queried with the rpm command. For detailed information on rpm, see Chapter 5, "Red Hat and Debian Package Managers". rsh [options] host [command] Execute command on remote host, or, if no command is specified, begin an interactive shell on the remote host using rlogin. Enable socket debugging. Cause rsh to obtain tickets for the remote host in realm instead of the remote host's realm as determined by krb_realmofhost(3). Attempt to log in as username. By default, the name of the user executing rsh is used.
Redirects the input to rsh from the special device /dev/null. (This should be done when backgrounding rsh from a shell prompt, to direct the input away from the terminal.) Turns on DES encryption for all data exchange. Suppress Kerberos authentication. rshd [options] TCP/IP command. Remote shell server for programs such as rcmd and rcp, which need to execute a noninteractive shell on remote machines. rshd is started by inetd and must have an entry in inetd's configuration file, /etc/inetd.conf. All options are exactly the same as those in rlogind, except for -L, which is unique to rshd. Log all successful connections and failed attempts via syslogd. rstat host TCP/IP command. Summarize host's system status: the current time, uptime, and load averages -- the average number of jobs in the run queue. Queries the remote host's rstat_svc daemon. run-parts [options] [directory] System administration command. Run, in lexical order, all scripts found in directory. Exclude scripts whose filenames include nonalphanumeric characters (besides underscores and hyphens). Interpret all subsequent arguments as filenames, not options. Print information listing which scripts would be run, but suppress actual execution of them. Specify umask. The default is 022. runlevel System administration command. Display the previous and current system runlevels. ruptime [options] TCP/IP command. Provide information on how long each machine on the local network has been up and which users are logged in to each. If a machine has not reported in for 11 minutes, assume it is down. The listing is sorted by hostname. Include users who have been idle for more than one hour. Sort machines by load average. Reverse the normal sort order. Sort machines by uptime. Sort machines by the number of users logged in. rusers [options] [host] TCP/IP command. List the users logged on to host, or to all local machines, in who format (hostname, usernames). Include machines with no users logged in. 
Include more information: tty, date, time, idle time, remote host. rwall host [file] TCP/IP command. Print a message to all users logged on to host. If file is specified, read the message from it; otherwise, read from standard input. rwho [option] Report who is logged on for all machines on the local network (similar to who). List users even if they've been idle for more than one hour. rwhod TCP/IP command. System status server that maintains the database used by rwho and ruptime, broadcasting this host's status and collecting the status of other hosts on the local network every 3 minutes. script [option] [file] Fork the current shell and make a typescript of a terminal session. The typescript is written to file. If no file is given, the typescript is saved in the file typescript. The script ends when the forked shell exits, usually with Ctrl-D or exit. Append to file or typescript instead of overwriting the previous contents. sed [options] [command] [files] Stream editor -- edit one or more files without user interaction. See Chapter 12, "The sed Editor", for more information. sendmail [flags] [address ...] System administration command. Send a message to one or more recipients, routing the message over whatever networks are necessary. Set operation mode to x. Operation modes are: Run in ARPAnet mode. Run as a daemon. Initialize the alias database. Deliver mail (default). Print the mail queue. Speak SMTP on input side. Run in test mode. Verify addresses; do not collect or deliver. Use configuration file file. Set debugging level. Set full name of user to name. Sender's name is name. Set hop count (number of times message has been processed by sendmail) to cnt. Do not alias or forward. Set option x to value value. Options are described below.
If the D option is set, wait min minutes for the aliases file to be rebuilt before returning an alias database out-of-date warning. Use alternate alias file. Require at least minblocks to be free, and optionally set the maximum message size to maxsize. If maxsize is omitted, the slash is optional. Set unquoted space replacement character. On mailers that are considered "expensive" to connect to, don't initiate immediate connection. Checkpoint the queue when mailing to multiple recipients. sendmail will rewrite the list of recipients after each group of num recipients has been processed. Set the delivery mode to x. Delivery modes are d for deferred delivery, i for interactive (synchronous) delivery, b for background (asynchronous) delivery, and q for queue only -- i.e., deliver the next time the queue is run. Try to automatically rebuild the alias database if necessary. Set the error message header. text is either text to add to an error message or the name of a file. A filename must include its full path and begin with a /. Save Unix-style From lines at the front of messages. Set default file permissions for temporary files. If this option is missing, default permissions are 0644. Compare local mail names to the GECOS section in the password file. Default group ID to use when calling mailers. SMTP help file. Allow a maximum of num hops per message. Do not take dots on a line by themselves as a message terminator. Use DNS lookups and tune them. Queue messages on connection refused. The arg arguments are identical to resolver flags without the RES_ prefix. Each flag can be preceded by a plus or minus to enable or disable the corresponding name server option. There must be a whitespace between the I and the first flag. Use MIME format for error messages. Set an alternative .forward search path. Specify size of the connection cache. Time out connections after time. Do not ignore Errors-To header. Specify log level. Send to me (the sender) also if I am in an alias expansion.
Define a macro's value in command line. Assign value to macro X. When running newaliases, validate the right side of aliases. If set, this message may have old-style headers. If not set, this message is guaranteed to have new-style headers (i.e., commas instead of spaces between addresses). Tune how private you want the SMTP daemon. The what arguments should be separated from one another by commas. The what arguments may be any of the following: Make SMTP fully public (default). Require site to send HELO or ELHO before sending mail. Require site to send HELO or ELHO before answering an address expansion request. Like preceding argument but for verification requests. Deny all expansion requests. Deny all verification requests. Insert special headers in mail messages advising recipients that the message may not be authentic. Set all of the previous arguments (except public). Allow only users of the same group as the owner of the queue directory to examine the mail queue. Limit queue processing to root and the owner of the queue directory. Send copies of all failed mail to user (usually postmaster). Multiplier (factor) for high-load queuing. Select the directory in which to queue messages. Don't prune route addresses. Save statistics in the named file. Always instantiate the queue file, even under circumstances in which it is not strictly necessary. Set the timeout on undelivered messages in the queue to the specified time. Set name of the time zone. Consult the user database database for forwarding information. Set default user ID for mailers. Run in verbose mode. Fall-back MX host. host should be the fully qualified domain name of the fallback host. Use a record for an ambiguous MX. Queues messages when load level is higher than load. Refuse SMTP connections when load is higher than load. Penalize large recipient lists by factor. Deliver each job that is run from the queue in a separate process. 
This helps limit the size of running processes on systems with very low amounts of memory. Multiplier for priority increments. This determines how much weight to give to a message's precedence header. sendmail's default is 1800. Increment priority of items remaining in queue by inc after each job is processed. sendmail uses 90,000 by default. Binary of sendmail. Link to /usr/lib/sendmail; causes the alias database to be rebuilt. Prints a listing of the mail queue. Configuration file, in text form. Statistics file. Doesn't need to be present. Alias file, in text form. Alias file in dbm format. Directory in which the mail queue and temporary files reside. Control (queue) files for messages. Data files. Lockfiles. Temporary versions of af files, used during queue-file rebuild. Used when creating a unique ID. Transcript of current session. setfdprm [options] device [name] Load disk parameters used when autoconfiguring floppy devices. Clear parameters of device. Disable format-detection messages for device. Permanently reset parameters for device. You can use name to specify a configuration, or you can specify individual parameters. The parameters that can be specified are dev, size, sect, heads, tracks, stretch, gap, rate, spec1, or fmt_gap. Consult /etc/fdprm for the original values. Enable format-detection messages for device. setsid command [arguments] System administration command. Execute the named command and optional command arguments in a new session. sh The standard Unix shell, a command interpreter into which all other commands are entered. On Linux, this is just another name for the bash shell. For more information, see Chapter 7, "bash: The Bourne-Again Shell". shar [options] files shar -S [options] Create shell archives (or shar files) that are in text format and can be mailed. These files may be unpacked later by executing them with /bin/sh. Other commands may be required on the recipient's system, such as compress, gzip, and uudecode.
The resulting archive is sent to standard output, unless the -o option is given. Allows automatic generation of headers. The -n option is required if the -a option is used. Use -b bits as a parameter to compress (when doing compression). Default value is 12. The -b option automatically turns on -Z. Start the shar file with a line that says "Cut here." Use delimiter for the files in the shar instead of SHAR_EOF. Causes only simple filenames to be used when restoring, which is useful when building a shar from several directories or another directory. (If a directory name is passed to shar, the substructure of that directory will be restored whether or not -f is used.) Use -level as a parameter to gzip (when doing compression). Default is 9. The -g option turns on the -z option by default. Print a help summary on standard output, then exit. Limit the output file size to nn kilobytes but don't split input files. Requires use of -o. Don't generate touch commands to restore the file modification dates when unpacking files from the archive. Name of archive to be included in the header of the shar files. Required if the -a option is used. Do not produce internationalized shell archives; use default English messages. By default, shar produces archives that will try to output messages in the unpacker's preferred language (as determined by LANG/LC_MESSAGES). Save the archive to files prefix.01 through prefix.nn (instead of sending it to standard output). This option must be used when either -l or -L is used. Allow positional parameter options. The options -B, -T, -z, and -Z may be embedded, and files to the right of the option will be processed in the specified mode. Print the directory shar looks in to find messages files for different languages, then immediately exit. Turn off verbose mode. Supply submitter name and address, instead of allowing shar to determine it automatically. Print the version number of the program on standard output, then exit. 
Do not check each file with wc -c after unpacking. The default is to check. Overwrite existing files without checking. Default is to check and not overwrite existing files. If -c is passed as a parameter to the script when unpacking (sh archive -c), existing files will be overwritten unconditionally. See also -X. gzip and uuencode all files prior to packing. Must be unpacked with uudecode and gunzip (or zcat). Treat all files as binary; use uuencode prior to packing. This increases the size of the archive, and it must be unpacked with uudecode. Do not use md5sum digest to verify the unpacked files. The default is to check. Force the prefix character to be prepended to every line even if not required. May slightly increase the size of the archive, especially if -B or -Z is used. Limit output file size to nn kilobytes and split files if necessary. The archive parts created with this option must be unpacked in correct order. Requires use of -o. Pack files in mixed mode (the default). Distinguishes files as either text or binary; binaries are uuencoded prior to packing. Use temporary files instead of pipes in the shar file. Disable verbose mode. Read list of files to be packed from standard input rather than from the command line. Input must be in a form similar to that generated by the find command, with one filename per line. Treat all files as text. Produce shars that rely only upon the existence of sed and echo in the unsharing environment. Prompt user to ask if files should be overwritten when unpacking. Compress and uuencode all files prior to packing. shutdown [options] when [message] System administration command. Terminate all processing and bring the system down. when may be a specific time, a number of minutes to wait, or now. Broadcast messages, default or defined, are displayed at regular intervals during the grace period; the closer the shutdown time, the more frequent the message. Cancel a shutdown that is in progress. Reboot fast, by suppressing the normal call to fsck when rebooting. Halt the system when shutdown is complete. Print the warning message, but suppress actual shutdown.
Perform shutdown without a call to init. Reboot the system when shutdown is complete. Ensure a sec-second delay between killing processes and changing the runlevel. size [options] [objfile...] Print the number of bytes of each section of objfile and its total size. If objfile is not specified, a.out is used. Display the size in decimal and hexadecimal. Imitate the size command from either System V (--format sysv) or BSD (--format berkeley). Display the size in octal and hexadecimal. Specify how to display the size: in hexadecimal and decimal (if num is 10 or 16) or hexadecimal and octal (if num is 8). Display the size in hexadecimal and decimal. Imitate System V's size command. Imitate BSD's size command. slattach [options] [tty] TCP/IP command. Attach serial lines as network interfaces, thereby preparing them for use as point-to-point connections. Only a privileged user may attach or detach a network interface. Run command when the connection is severed. Exit immediately after initializing the line. Exit when the connection is severed. Create UUCP-style lockfile in /var/spool/uucp. Enable 3-wire operation. Suppress initialization of the line to 8 bits raw mode. Similar to mesg -n. Specify protocol, which may be slip, adaptive, ppp, or kiss. Quiet mode; suppress messages. Specify line speed. sleep amount[units] Wait a specified amount of time before executing another command. The default for units is seconds. seconds minutes hours days sort [options] [files] Sort the lines of the named files. Compare specified fields for each pair of lines, or, if no fields are specified, compare them by byte, in machine collating sequence. See also uniq, comm, and join. Ignore leading spaces and tabs. Check whether files are already sorted, and, if so, produce no output. Sort in dictionary order. Fold -- ignore uppercase/lowercase differences. Ignore nonprinting characters (those outside ASCII range 040-176). Merge (i.e., sort as a group) input files. Sort in arithmetic order. 
Put output in file. Reverse the order of the sort. Separate fields with c (default is a tab). Identical lines in input file appear only one (unique) time in output. Provide recsz bytes for any one line in the file. This option prevents abnormal termination of sort in certain cases. Skip n fields before sorting, and sort up to field position m. If m is missing, sort to end of line. Positions take the form a.b, which means character b of field a. If .b is missing, sort at the first character of the field. Similar to +. Skip n-1 fields and stop at m-1 fields (i.e., start sorting at the nth field, where the fields are numbered beginning with 1). Attempt to treat the first three characters as a month designation (JAN, FEB, etc.). In comparisons, treat JAN < FEB and any valid month as less than an invalid name for a month. Directory pathname to be used for temporary files. List files by decreasing number of lines: wc -l * | sort -r Alphabetize a list of words, remove duplicates, and print the frequency of each word: sort -fd wordlist | uniq -c Sort the password file numerically by the third field (user ID): sort +2n -t: /etc/passwd split [option] [infile] [outfile] Split infile into equal-sized segments. infile remains unchanged, and the results are written to outfileaa, outfileab, and so on. (default is xaa, xab, etc.). If infile is - (or missing), standard input is read. See also csplit. Split infile into n-line segments (default is 1000). Split infile into n-byte segments. Alternate blocksizes may be specified: 512 bytes 1 kilobyte 1 megabyte Put a maximum of bytes into file; insist on adding complete lines. Print a message for each output file. Take input from the standard input. Break bigfile into 1000-line segments: split bigfile Join four files, then split them into 10-line files named new.aa, new.ab, and so on. Note that without the -, new. would be treated as a nonexistent input file: cat list[1-4] | split -10 - new. stat filename [filenames . . . 
] Print out the contents of an inode as they appear to the stat system call in a human-readable format. The error messages "Can't stat file" or "Can't lstat file" usually mean the file doesn't exist. "Can't readlink file" generally indicates that something is wrong with a symbolic link. Sample output from the command stat /:
  File: "/"
  Size: 1024   Filetype: Directory
  Mode: (0755/drwxr-xr-x)   Uid: ( 0/ root)   Gid: ( 0/ system)
Device: 3,3   Inode: 2   Links: 21
Access: Tue Apr 11 04:02:01 2000(00000.11:47:35)
Modify: Wed Nov 17 11:46:38 1999(00146.03:02:58)
Change: Wed Nov 17 11:46:38 1999(00146.03:02:58)
strace [options] command [arguments] Trace the system calls and signals for command and arguments. strace shows you how data is passed between the program and the system kernel. With no options, strace prints a line to stderr for each system call. It shows the call name, arguments given, the return value, and any error messages generated. A signal is printed with both its signal symbol and a descriptive string. Align the return values in column n. Count all calls and signals and create a summary report when the program has ended. Debug mode. Print debugging information for strace on stderr. Pass an expression to strace to limit the types of calls or signals that are traced or change how they are displayed. The values for these expressions can be given as a comma-separated list. Preceding the list with an exclamation mark (!) negates the list. The special values of all and none are valid, as are the values listed with the following keywords. Abbreviate output from large structures for system calls listed in names. Print all data read from the given file descriptors. Trace the listed signal symbols (for example, signal=SIGIO,SIGHUP). Trace the listed values. values may be a list of system call names or one of the following sets of system calls: Unabbreviate structures for the given system call names. Default is none. Print all data written to the given file descriptors.
Trace forked processes. Write system calls for forked processes to separate files named filename.pid when using the -o option. Print help and exit. Print instruction pointer with each system call. Write output to filename instead of stderr. If filename starts with the pipe symbol |, treat the rest of the name as a command to which output should be piped. Override strace's built-in timing estimates, and just subtract n microseconds from the timing of each system call to adjust for the time it takes to measure the call. Attach to the given process ID and begin tracking. strace can track more than one process if more than one option -p is given. Type Ctrl-c to end the trace. Quiet mode. Suppress attach and detach messages from strace. Relative timestamp. Print time in microseconds between system calls. Print only the first n characters of a string. Default value is 32. Sort output of -c option by the given value. value may be calls, name, time, or nothing. By default it is sorted by time. Print time spent in each system call. Print time of day on each line of output. Print time of day with microseconds on each line of output. Print timestamp on each line as number of seconds since the Epoch. Run command as username. Needed when tracing setuid and setgid programs. Verbose. Do not abbreviate structure information. Print all non-ASCII strings in hexadecimal. Print all strings in hexadecimal. strfile [options] infile [outfile] unstr [-c delim] infile[.dat] [outfile] Create a random-access data file describing the strings in infile, which should contain groups of lines separated by lines holding a single percent sign; the data file (by default infile.dat) records where each string begins. unstr undoes the work of strfile, printing the strings in infile in the order that they are listed in the header file data. If no output file is specified, unstr prints to standard output; otherwise, it prints to the file specified. unstr can also globally change the delimiter character in a strings file. Of the following options, only -c can be used with unstr. All other options apply to strfile alone. Change the delimiting character from the percent sign to delimiter. Valid for both strfile and unstr. Ignore case when ordering the strings. Order the strings alphabetically. Randomize access to the strings.
Run silently; don't give a summary message when finished. Set the STR_ROTATED bit in the header str_flags field. strings [options] files Search each file specified and print any printable character strings found that are at least four characters long and followed by an unprintable character. Scan entire object files; default is to scan only the initialized and loaded sections for object files. Print the name of the file before each string. Print only strings that are at least min-len characters. Print the offset within the file before each string, in the format specified by base: Decimal Specify an alternative object code format to the system default. Same as -t o. strip [options] files Remove symbols from object files, thereby reducing file sizes and freeing disk space. Expect the input file to be in the format format. Write output file in format. Delete section. Strip all symbols. Strip debugging symbols. Strip nonglobal symbols. Strip local symbols that were generated by the compiler. stty [options] [modes] Set terminal I/O options for the current standard input device. Without options, stty reports the terminal settings that differ from those set by running stty sane, where a ^ indicates the Ctrl key and ^` indicates a null value. Most modes can be negated using an optional - (shown in brackets). The corresponding description is also shown in brackets. Some arguments use non-POSIX extensions; these are marked with a *. Report all option settings. Report settings in hex. Set terminal baud rate to n (e.g., 2400). [Enable] disable modem control. [Disable] enable the receiver. Set character size to bits, which must be 5, 6, 7, or 8. [1] 2 stop bits per character. [Do not] hang up connection on last close. Same as previous. Set terminal input baud rate to n. Set terminal output baud rate to n. [Disable] enable parity generation and detection. Use [even] odd parity. [Disable]enable RTS/CTS handshaking. 
The following flow control modes are available by combining the ortsfl, ctsflow, and rtsflow flags: [Do not] signal INTR on break. [Do not] map CR to NL on input. [Do not] ignore break on input. [Do not] ignore CR on input. [Do not] ignore parity errors. [Do not] map NL to CR on input. [Disable] enable input parity checking. [Do not] strip input characters to 7 bits. [Do not] map uppercase to lowercase on input. Allow [XON] any character to restart output. [Do not] send START/STOP characters when queue is nearly empty/full. [Disable] enable START/STOP output control. [Do not] mark parity errors. When input buffer is too full to accept a new character, [flush the input buffer] beep without flushing the input buffer. have not been read when -icanon is set. Set line discipline to i (1-126). Same as -raw. Same as [-]parenb and cs[8]7. Reset ERASE and KILL characters to Ctrl-h and Ctrl-u, their defaults. [Un]set xcase, iuclc, and olcuc. Same as [-]lcase. [Expand to spaces] preserve output tabs. Same as -icanon. Same as -parenb -istrip cs8. Same as -ixany. Same as echoe echoctl echoke. Same as echoe echoctl echoke -ixany. Additionally, set INTERRUPT to ^C, ERASE to DEL, and KILL to ^U. Specify input speed. Specify output speed. Specify number of rows. Specify number of columns. Display current row and column settings. Specify line discipline. Display terminal speed. su [option] [user] [shell_args] Create a shell with the effective user-ID user. If no user is specified, create a shell for a privileged user (that is, become a superuser). Enter EOF to terminate. You can run the shell with particular options by passing them as shell_args (e.g., if the shell runs sh, you can specify -c command to execute command via sh or -r to create a restricted shell). Go through the entire login sequence (i.e., change to user's environment). Execute command in the new shell and then exit immediately.
If command is more than one word, it should be enclosed in quotes -- for example: su -c 'find / -name \*.c -print' nobody Start shell with -f option. In csh and tcsh, this suppresses the reading of the .cshrc file. In bash, this suppresses filename pattern expansion. Do not reset environment variables. Execute shell, not the shell specified in /etc/passwd, unless shell is restricted. sum [options] files Calculate and print a checksum and the number of (1KB) blocks for file. Useful for verifying data transmission. The default setting. Use the BSD checksum algorithm. Use alternate checksum algorithm as used on System V. The blocksize is 512 bytes. swapdev [option] [image [swapdevice [offset]]] System administration command. If no arguments are given, display usage information about the swap device. If just the location of the kernel image is specified, print the information found there. To change that information, specify the new swapdevice. You may also specify the offset in the kernel image to change. Note that rdev -s is a synonym for swapdev. Synonymous to specifying an offset as an argument. swapoff -a | device ... System administration command. Stop making the listed devices available for swapping and paging. Consult /etc/fstab for devices marked sw. Use those in place of the device argument. swapon [options] device ... System administration command. Make the listed devices available for swapping and paging. Specify a priority for the swap area. Higher priority areas will be used up before lower priority areas are used. sync System administration command. Write filesystem buffers to disk. sync executes the sync() system call. If the system is to be stopped, sync must be called to ensure filesystem integrity. Note that shutdown automatically calls sync before shutting down the system. sync may take several seconds to complete, so the system should be told to sleep briefly if you are about to manually call halt or reboot. 
Note that shutdown is the preferred way to halt or reboot your system, since it takes care of sync-ing and other housekeeping for you. sysklogd System administration command. sysklogd, the Linux program that provides syslogd functionality, behaves exactly like the BSD version of syslogd. The difference should be completely transparent to the user. However, sysklogd is coded very differently and supports a slightly extended syntax. It is invoked as syslogd. See also klogd. Turn on debugging. Specify alternate configuration file. Forward messages from remote hosts to forwarding hosts. Specify hostnames that should be logged with just their hostname, not their fully qualified domain name. Multiple hosts should be separated with a colon (:). Select number of minutes between mark messages. Avoid auto-backgrounding. syslogd TCP/IP command. Log system messages into a set of files described by the configuration file /etc/syslog.conf. Each message is one line. A message can contain a priority code, marked by a number in angle braces at the beginning of the line. Priorities are defined in <sys/syslog.h>. syslogd reads from an Internet domain socket specified in /etc/services. To bring syslogd down, send it a terminate signal. systat [options] host System administration command. Get information about the network or system status of a remote host by querying its netstat, systat, or daytime service. Specifically query the host's netstat service. Specify port to query. Specifically query the host's systat service. Specifically query the host's daytime service. tac [options] [file] Named for the common command cat, tac prints files in reverse. Without a filename or with -, it reads from standard input. By default, it reverses the order of the lines, printing the last line first. Print separator (by default a newline) before string that it delimits. Expect separator to be a regular expression. Specify alternate separator (default is newline).
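The default tac behavior described above (last line printed first) can be seen with a small scratch file; the filename is arbitrary:

```shell
# Create a three-line scratch file, then print it in reverse line order.
tmp=$(mktemp)
printf 'one\ntwo\nthree\n' > "$tmp"
tac "$tmp"      # prints: three, two, one
rm -f "$tmp"
```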
tail [options] [file] Print the last 10 lines of the named file (or standard input if - is specified) on standard output. Begin printing at nth item from end-of-file. k specifies the item to count: l (lines, the default), b (blocks), or c (characters). Same as -n, but use the default count of 10. Like -n, but start at nth item from beginning of file. Like -k, but count from beginning of file. Print last num bytes. An alternate blocksize may be specified: Don't quit at the end of file; "follow" file as it grows. End when user presses Ctrl-C. Print last num lines. Suppress filename headers. Show the last 20 lines containing instances of .Ah: grep '\.Ah' file | tail -20 Show the last 10 characters of variable name: echo "$name" | tail -c10 Print the last two blocks of bigfile: tail -2b bigfile talk person [ttyname] Talk to another user. person is either the login name of someone on your own machine or user@host on another host. To talk to a user who is logged in more than once, use ttyname to indicate the appropriate terminal name. Once communication has been established, the two parties may type simultaneously, with their output appearing in separate windows. To redraw the screen, type Ctrl-L. To exit, type your interrupt character; talk then moves the cursor to the bottom of the screen and restores the terminal. talkd [option] TCP/IP command. Server for the talk program; notifies a user that another user wishes to initiate a conversation. Write debugging information to the syslogd log file. tar [options] [tarfile] [other-files] Copy files to or restore files from an archive (tarfile may be an ordinary file or a tape device). GNU tar allows functions and options to be given in traditional or long-option (--) form. In that case, the exact syntax is: tar --long-option -function-options files For example: tar --modification-time -xvf tarfile.tar You must use exactly one of these, and it must come before any other options: Create a new archive. Compare the files stored in tarfile with other-files. Report any differences: missing files, different sizes, different file attributes (such as permissions or modification time). Append other-files to the end of an existing archive.
Print the names of other-files if they are stored on the archive (if other-files are not specified, print names of all files). Add files if not in the archive or if modified. Extract other-files from an archive (if other-files are not specified, extract all files). Concatenate a second tar file on to the end of the first. Select device n, where n is 0,...,9999. The default is found in /etc/default/tar. Set drive (0-7) and storage density (l, m, or h, corresponding to low, medium, or high). Preserve original access time on extracted files. Set block size to n × 512 bytes. List directory names encountered. Remove file from any list of files. Store files in or extract files from archive arch. Note that filename may take the form hostname:filename. Interpret filenames in the form hostname:filename as local files. Create new-style incremental backup. Dereference symbolic links. Ignore zero-sized blocks (i.e., EOFs). Ignore unreadable files to be archived. Default behavior is to exit when encountering these. When extracting files, do not overwrite files with similar names. Instead, print an error message. Do not archive files from other file systems. Do not restore file modification times; update them to the time of extraction. Allow filenames to be null-terminated with -T. Override -C. Equivalent to invoking both the -p and -s options. Keep ownership of extracted files same as that of original permissions. Remove originals after inclusion in archive. Do not connect to remote host with rsh; instead, use command. When extracting, sort filenames to correspond to the order in the archive. Print byte totals. Compress archived files with program, or uncompress extracted files with program. Verbose. Print filenames as they are added or extracted. Wait for user confirmation (y) before taking any actions. Compress files with gzip before archiving them, or uncompress them with gunzip before extracting them. cd to directory before beginning tar operation. 
Implies -M (multiple archive files). Run script at the end of each file. Create old-style incremental backup. Begin tar operation at file file in archive. Write a maximum of length × 1024 bytes to each tape. Expect archive to be multivolume. With -c, create such an archive. Ignore files older than date. Print extracted files on standard output. Do not remove initial slashes (/) from input filenames. Display archive's record number. Treat short file specially and more efficiently. Consult filename for files to extract or create. Name this volume name. Check archive for corruption after creation. Consult file for list of files to exclude. Compress files with compress before archiving them, or uncompress them with uncompress before extracting them. tcpd TCP/IP command. Monitor incoming TCP/IP requests (such as those for telnet, ftp, finger, exec, rlogin). Provide checking and logging services; then pass the request to the appropriate daemon. tcpdchk [options] TCP/IP command. Consult the TCP wrapper configuration (in /etc/hosts.allow and /etc/hosts.deny); display a list of all possible problems with it; attempt to suggest possible fixes. Include a list of rules; do not require an ALLOW keyword before allowing sites to access the local host. Consult ./hosts.allow and ./hosts.deny instead of /etc/hosts.allow and /etc/hosts.deny. Specify location of inetd.conf or tlid.conf file. These are files that tcpdchk automatically uses in its evaluation of TCP wrapper files. tcpdmatch [options] daemon client TCP/IP command. Predict the TCP wrapper's response to a specific request. You must specify which daemon the request is made to (the syntax may be daemon@host for requests to remote machines) and the client from which the request originates (the syntax may be user@client for a specific user or a wildcard). Consult /etc/hosts.allow and /etc/hosts.deny to determine the TCP wrapper's actions. Specify location of inetd.conf or tlid.conf file.
These are files that tcpdmatch automatically uses in its evaluation of TCP wrapper files. tcsh [options] [file [arguments]] An extended version of the C shell, a command interpreter into which all other commands are entered. For more information, see Chapter 8, "csh and tcsh". tee [options] files Accept output from another command and send it both to the standard output and to files (like a T or fork in a road). Append to files; do not overwrite. Ignore interrupt signals. View listing and save for later: ls -l | tee savefile telinit [option] [runlevel] System administration command. Signal init to change the system's runlevel. telinit is actually just a link to init, the ancestor of all processes. Send SIGKILL seconds after SIGTERM. Default is 20. The default runlevels vary from distribution to distribution, but these are standard: Single user. Process only entries in /etc/inittab that are marked with run level a, b, or c. telnet [options] [host [port ]] Access a remote system using the Telnet protocol. Automatically log in to the remote system. Turn on socket-level debugging. Set initial telnet escape character to escape_char. If escape_char is omitted, there will be no predefined escape character. When connecting to remote system and if remote system understands ENVIRON, send user to the remote system as the value for variable USER. Open tracefile for recording the trace information. Emulate rlogin: the default escape character is a tilde (~). Request 8-bit operation. Disable the escape character functionality. Specify an 8-bit data path on output. Set the IP type-of-service (TOS) option for the Telnet connection to the value tos. Suspend telnet. Execute a single command in a subshell on the local system. If command is omitted, an interactive subshell will be invoked. Get help. With no arguments, print a help summary. If a command is specified, print the help information for just that command. Close a Telnet session and return to command mode. Display all, or some, of the set and toggle values.
Manipulate variables that may be sent through the TELNET ENVIRON option. Valid arguments for environ are: Get help for the environ command. Define variable to have a value of value. Remove variable from the list of environment variables. Mark variable to have its value exported to the remote side. Mark variable to not be exported unless explicitly requested by the remote side. Display current variable values. If the remote host supports the logout command, close the telnet session. Depending on the state of the Telnet session, type is one of several options: Print out help information for the mode command. Disable TELNET LINEMODE option, or, if remote side does not understand the option, enter "character-at-a-time" mode. Attempt to [disable] enable the EDIT mode of the TELNET LINEMODE option. Attempt to [disable] enable the TRAPSIG mode of the LINEMODE option. Enable LINEMODE option, or, if remote side does not understand the option, attempt to enter "old line-by-line" mode. Attempt to [disable] enable the SOFT_TAB mode of the LINEMODE option. [Disable] enable LIT_ECHO mode. Open a connection to the named host. If no port number is specified, attempt to contact a Telnet server at the default port. Close any open Telnet session and then exit telnet. Show current status of telnet. This includes the peer one is connected to as well as the current mode. Send one or more special character sequences to the remote host. Following are the arguments that may be specified: Print out help information for the send command. Send Telnet ABORT sequence. Send Telnet AO sequence, which should cause the remote system to flush all output from the remote system to the user's terminal. Send Telnet AYT (Are You There) sequence. Send Telnet BRK (Break) sequence. Send Telnet DO cmd sequence, where cmd is a number between 0 and 255 or a symbolic name for a specific telnet command. If cmd is ? or help, this command prints out help (including a list of symbolic names).
Send Telnet EC (Erase Character) sequence, which causes the remote system to erase the last character entered. Send Telnet EL (Erase Line) sequence, which causes the remote system to erase the last line entered. Send Telnet EOF (End Of File) sequence. Send Telnet EOR (End Of Record) sequence. Send current Telnet escape character (initially ^). Send Telnet GA (Go Ahead) sequence. If the remote side supports the Telnet STATUS command, getstatus sends the subnegotiation request that the server send its current option status. Send Telnet IP (Interrupt process) sequence, which causes the remote system to abort the currently running process. Send Telnet NOP (No operation) sequence. Send Telnet SUSP (Suspend process) sequence. Send Telnet SYNCH sequence, which causes the remote system to discard all previously typed (but not read) input. Set any one of a number of telnet variables to a specific value or to TRUE. The special value off disables the function associated with the variable. unset disables any of the specified functions. The values of variables may be interrogated with the aid of the display command. The variables that may be specified are: Display legal set and unset commands. If telnet is in LOCALCHARS mode, this character is taken to be the alternate AYT character. This is the value (initially ^E) which, when in "line-by-line" mode, toggles between doing local echoing of entered characters and suppressing echoing of entered characters. If telnet is operating in LINEMODE or in the old "line-by-line" mode, entering this character as the first character on a line will cause the character to be sent to the remote system. If telnet is in LOCALCHARS mode and operating in the "character-at-a-time" mode, then when this character is entered, a Telnet EC sequence will be sent to the remote system. This is the Telnet escape character (initially ^[), which causes entry into the Telnet command mode when connected to a remote system. 
If telnet is in LOCALCHARS mode and the flushoutput character is entered, a Telnet AO sequence is sent to the remote host. If telnet is in LOCALCHARS mode, this character is taken to be an alternate end-of-line character. If telnet is in LOCALCHARS mode and the interrupt character is entered, a Telnet IP sequence is sent to the remote host. If telnet is in LOCALCHARS mode and operating in the "character-at-a-time" mode, then when this character is entered, a Telnet EL sequence is sent to the remote system. If telnet is in LINEMODE or in the old "line-by-line" mode, then this character is taken to be the terminal's lnext character. If telnet is in LOCALCHARS mode and the quit character is entered, a Telnet BRK sequence is sent to the remote host. If telnet is in LINEMODE or in the old "line-by-line" mode, this character is taken to be the terminal's reprint character. Enable rlogin mode. Same as using the -r command-line option. If the Telnet TOGGLE-FLOW-CONTROL option has been enabled, this character is taken to be the terminal's start character. If the Telnet TOGGLE-FLOW-CONTROL option has been enabled, this character is taken to be the terminal's stop character. If telnet is in LOCALCHARS mode, or if LINEMODE is enabled, and the suspend character is entered, a Telnet SUSP sequence is sent to the remote host. File to which output generated by netdata is written. If telnet is in LINEMODE or in the old "line-by-line" mode, this character is taken to be the terminal's worderase character. Defaults for these are the terminal's defaults. Set state of special characters when Telnet LINEMODE option has been enabled. List help on the slc command. Verify current settings for current special characters. If discrepancies are discovered, convert local settings to match remote ones. Switch to local defaults for the special characters. Switch to remote defaults for the special characters. Toggle various flags that control how Telnet responds to events.
The flags may be set explicitly to true or false using the set and unset commands listed previously. The valid arguments are: Display legal toggle commands. If autoflush and LOCALCHARS are both true, then when the ao or quit characters are recognized, Telnet refuses to display any data on the user's terminal until the remote system acknowledges that it has processed those Telnet sequences. If autosynch and LOCALCHARS are both true, then when the intr or quit character is entered, the resulting Telnet sequence sent is followed by the Telnet SYNCH sequence. Initial value for this toggle is false. Enable or disable the Telnet BINARY option on both the input and the output. Enable or disable the Telnet BINARY option on the input. Enable or disable the Telnet BINARY option on the output. If this toggle value is true, carriage returns are sent as CR-LF. If false, carriage returns are sent as CR-NUL. Initial value is false. Toggle carriage return mode. Initial value is false. Toggle socket level debugging mode. Initial value is false. If the value is true, flush, interrupt, quit, erase, and kill characters are recognized locally, then transformed into appropriate Telnet control sequences. Initial value is true. Toggle display of all network data. Initial value is false. Toggle display of some internal telnet protocol processing pertaining to Telnet options. Initial value is false. When netdata is enabled, and if prettydump is enabled, the output from the netdata command is reorganized into a more user-friendly format, spaces are put between each character in the output, and an asterisk precedes any Telnet escape sequence. Toggle whether to process ~/.telnetrc file. Initial value is false, meaning the file is processed. Toggle printing of hexadecimal terminal data. Initial value is false. Suspend telnet; works only for the csh. telnetd [options] TCP/IP command. Telnet protocol server. 
telnetd is invoked by the Internet server for requests to connect to the Telnet port (port 23 by default). telnetd allocates a pseudoterminal device for a client, thereby creating a login process that has the slave side of the pseudoterminal serving as stdin, stdout, and stderr. telnetd manipulates the master side of the pseudoterminal by implementing the Telnet protocol and by passing characters between the remote client and the login process. Start telnetd manually instead of through inetd. port may be specified as an alternate TCP port number on which to run telnetd. Debugging mode. This allows telnetd to print out debugging information to the connection, enabling the user to see what telnetd is doing. Several modifiers are available for the debugging mode: Has not been implemented yet. Display data stream received by telnetd. Print information about the negotiation of the Telnet options. Display data written to the pseudoterminal device. Print options information, as well as some additional information about what processing is going on. test expression [expression] Also exists as a built-in in most shells. Evaluate an expression and, if its value is true, return a zero exit status; otherwise, return a nonzero exit status. In shell scripts, you can use the alternate form [ expression ]. This command is generally used with conditional constructs in shell programs. The syntax for all of these options is test option file. If the specified file does not exist, they return false. Otherwise, they test the stated attribute of the file. Is file1 newer than file2? Check modification, not creation, date. Is file1 older than file2? Check modification, not creation, date. Do the files have identical device and inode numbers? The syntax for string tests is test option string. Is the string 0 characters long? Is the string at least 1 character long? Are the two strings equal? Are the strings unequal? Note that an expression can consist of any of the previous tests.
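The file and string tests above can be sketched in a short script, using both spellings of the command (the file and strings here are illustrative, not from the book's examples):

```shell
#!/bin/sh
f=$(mktemp)                       # create a temporary regular file
if test -f "$f"; then             # file test: test option file
  echo "regular file"
fi
if [ -n "hello" ]; then           # string test: nonzero length
  echo "nonempty"
fi
[ "abc" = "abc" ] && echo "equal" # string equality test
rm -f "$f"
```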
tftp [host [port]] User interface to the TFTP (Trivial File Transfer Protocol), which allows users to transfer files to and from a remote machine. The remote host may be specified, in which case tftp uses host as the default host for future transfers. Once tftp is running, it issues the prompt: tftp> and recognizes the following commands: Print help information. Shorthand for mode ASCII. Shorthand for mode binary. Set the hostname, and optionally the port, for transfers. Get a file or set of files from the specified remote sources. Set the mode for transfers. transfer-mode may be ASCII or binary. The default is ASCII. Transfer a file or set of files to the specified remote file or directory. tftpd [homedir] TCP/IP command. Trivial File Transfer Protocol server. tftpd is normally started by inetd and operates at the port indicated in the tftp Internet service description in the /etc/inetd.conf file. By default, the entry for tftpd in /etc/inetd.conf is commented out; the comment character must be deleted to make tftpd operational. Before responding to a request, the server attempts to change its current directory to homedir; the default value is tftpboot. tload [options] [tty] Display system load average in graph format. If tty is specified, print it to that tty. Specify the delay, in seconds, between updates. Specify scale (number of characters between each graph tick). A smaller number results in a larger scale. top [options] Provide information (frequently refreshed) about the most CPU-intensive processes currently running. See ps for explanations of the field descriptors. Run in batch mode; don't accept command-line input. Useful for sending output to another command or to a file. Show command line in display instead of just command name. Specify delay between refreshes. Suppress display of idle and zombie processes. Update display num times, then exit. Monitor only processes with the specified process ID. Refresh without any delay.
If user is privileged, run with highest priority. Secure mode. Disable some (dangerous) interactive commands. Cumulative mode. Print total CPU time of each process, including dead child processes. Update display immediately. Toggle display of command name or full command line. Add fields to display or remove fields from the display. Display help about commands and the status of secure and cumulative modes. Prompt for process ID to kill and signal to send (default is 15) to kill it. Toggle suppression of idle and zombie processes. Toggle display of load average and uptime information. Toggle display of memory information. Prompt for number of processes to show. If 0 is entered, show as many as will fit on the screen (default). Change order of displayed fields. Apply renice to a process. Prompt for PID and renice value. Suppressed in secure mode. Change delay between refreshes. Prompt for new delay time, which should be in seconds. Suppressed in secure mode. Toggle display of processes and CPU states information. Sort by age, with newest first. Sort tasks by resident memory usage. Sort numerically by process ID. Sort tasks by CPU usage (default). Toggle cumulative mode. (See the -S option.) Sort tasks by time/cumulative time. Write current setup to ~/.toprc. This is the recommended way to write a top configuration file. touch [options] files For one or more files, update the access time and modification time (and dates) to the current time and date. touch is useful in forcing other commands to handle files a certain way; e.g., the operation of make, and sometimes find, relies on a file's access and modification time. If a file doesn't exist, touch creates it with a filesize of 0. Update only the access time. Do not create any file that doesn't already exist. Change the time value to the specified time instead of the current time. time can use several formats and may contain month names, time zones, a.m. and p.m. strings, as well as others. 
Update only the modification time. Change times to be the same as those of the specified file, instead of the current time. Use the time specified in time instead of the current time. This argument must be of the format: [[cc]yy]mmddhhmm[.ss], indicating optional century and year, month, date, hours, minutes, and optional seconds. tr [options] [string1 [string2]] Translate characters -- copy standard input to standard output, substituting characters from string1 to string2 or deleting characters in string1. Complement characters in string1 with respect to ASCII 001-377. Delete characters in string1 from output. Squeeze out repeated output characters in string2. Truncate string1 to the length of string2 before translating. Include brackets ([]) where shown. ^G (bell) ^H (backspace) ^L (form feed) ^J (newline) ^M (carriage return) ^I (tab) ^K (vertical tab) Character with octal value nnn. Literal backslash. All characters in the range char1 through char2. If char1 does not sort before char2, produce an error. Same as char1-char2 if both strings use this. In string2, expand char to the length of string1. Expand char to number occurrences. [x*4] expands to xxxx, for instance. Expand to all characters in class, where class can be: Letters and digits Letters Whitespace Control characters Digits Printable characters except space Lowercase letters Printable characters Punctuation Whitespace (horizontal or vertical) Uppercase letters Hexadecimal digits The class of characters in which char belongs. Change uppercase to lowercase in a file: cat file | tr '[A-Z]' '[a-z]' Turn spaces into newlines (ASCII code 012): tr ' ' '\012' < file Strip blank lines from file and save in new.file (or use 011 to change successive tabs into one tab): tr -s '\012' < file > new.file Delete colons from file; save result in new.file: tr -d : < file > new.file traceroute [options] host [packetsize] TCP/IP command. Trace the route that packets take to reach host. packetsize is the packet size in bytes of the probe datagram. Default is 38 bytes.
Enable the IP LSRR (Loose Source Record Route) option in addition to the TTL tests, to ask how someone at IP address addr can reach a particular target. Include the time-to-live value for each packet received. Set maximum time-to-live used in outgoing probe packets to max-ttl hops. Default is 30 hops. Set the time to wait for a response to a probe (default is 3 seconds). troff See groff. true A null command that returns a successful (0) exit status. See also false. tune2fs [options] device System administration command. Adjust tunable parameters of a Linux Second Extended Filesystem. You must specify the device on which the filesystem resides; it must not be mounted read/write when you change its parameters. Specify the maximum number of mount counts between two checks on the filesystem. Specify the kernel's behavior when encountering errors. behavior must be one of: Continue as usual. Remount the offending filesystem in read-only mode. Cause a kernel panic. Allow group (a group ID or name) to use reserved blocks. Specify the maximum interval between filesystem checks. Units may be in days (d), weeks (w), or months (m). If interval is 0, checking will not be time-dependent. Display a list of the superblock's contents. Specify the percentage of blocks that will be reserved for use by privileged users. Specify the number of blocks that will be reserved for use by privileged users. Allow user (a user ID or name) to use reserved blocks. tunelp device [options] System administration command. Control a lineprinter's device parameters. Without options, print information about device(s). Specify a delay of time in jiffies to sleep before resending a strobe signal. Specify whether to be extremely careful in checking for printer error. ul [options] [names] Translate underscores to underlining. The correct sequence with which to do this will vary by terminal type. Some terminals are unable to handle underlining.
Translate -, when on a separate line, to underline, instead of translating underscores. Specify terminal type. By default, TERM is consulted. umount [options] [special-device/directory] System administration command. Unmount a filesystem. umount announces to the system that the removable file structure previously mounted on device special-device is to be removed. umount also works by specifying the directory. Any pending I/O for the filesystem is completed, and the file structure is flagged as clean. Unmount all filesystems that are listed in /etc/mtab. Unmount, but do not record changes in /etc/mtab. Unmount only filesystems of type type. uname [options] Print information about the machine and operating system. Without options, print the name of the operating system (Linux). Combine all the system information from the other options. Print the hardware the system is running on. Print the machine's hostname. Print the release number of the kernel. Print the name of the operating system (Linux). Print the type of processor (not available on all versions). Print build information about the kernel. Display a help message and then exit. uncompress [options] files Uncompress files that were compressed (i.e., whose names end in .Z). See compress for the available options; uncompress takes all the same options except -r and -b. unexpand [options] [files] Convert strings of initial whitespace, consisting of at least two spaces and/or tabs to tabs. Read from standard input if given no file or a file named -. Convert all, not just initial, strings of spaces and tabs. nums is a comma-separated list of integers that specify the placement of tab stops. If a single integer is provided, the tab stops are set to every integer spaces. By default, tab stops are 8 spaces apart. With -t and --tabs, the list may be separated by whitespace instead of commas. This option implies -a. 
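The unexpand behavior described above can be sketched with a quick example (this assumes GNU coreutils unexpand; the sample text is illustrative):

```shell
# An initial run of eight spaces becomes one tab (default tab stops every 8)
printf '        indented\n' | unexpand

# With -a, convert all space runs, here using tab stops every 4 columns
printf 'a    b\n' | unexpand -a -t 4
```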
uniq [options] [file1 [file2]] Remove duplicate adjacent lines from sorted file1, sending one copy of each line to file2 (or to standard output). Often used as a filter. Specify only one of -d or -u. See also comm and sort. Ignore first n fields of a line. Fields are separated by spaces or by tabs. Ignore first n characters of a field. Print each line once, prefixing number of instances. Print duplicate lines once but no unique lines. Ignore case differences when checking for duplicates. Print only unique lines (no copy of duplicate entries is kept). Compare only first n characters per line (beginning after skipped fields and characters). Send one copy of each line from list to output file list.new: uniq list list.new Show which names appear more than once: sort names | uniq -d unshar [options] [files] Unpack a shell archive (shar file). unshar scans mail messages looking for the start of a shell archive. It then passes the archive through a copy of the shell to unpack it. unshar accepts multiple files. If no files are given, standard input is used. Overwrite existing files. Change to directory before unpacking any files. Sequentially unpack multiple archives stored in same file; uses clue that many shar files are terminated by an exit 0 at the beginning of a line. (Equivalent to -E "exit 0".) Like -e, but allows you to specify the string that separates archives. Same as -c. update [options] System administration command. update is a daemon that controls how often the kernel's disk buffers are flushed to disk. update is also known as bdflush. The daemon forks a couple of processes to call system functions flush() and sync(). When called by an unprivileged user, no daemon is created. Instead, update calls sync() and then exits. By default, update will wake up every 5 seconds and flush() some dirty buffers. If that doesn't work, it will try waking up every 30 seconds to sync() the buffers to disk. 
Not all of the listed options are available in every version of update. Display the kernel parameters. This does not start the update daemon. Call flush() at this interval. Default is 5. Help. Print a command summary. Call sync() at this interval. Default is 30. Always use sync() instead of flush. Flush buffers when the specified percent of the buffer cache is dirty. The maximum number of dirty blocks to write out per wake cycle. The number of clean buffers to try to obtain each time the free buffers are refilled. Flush buffers if dirty blocks exceed blocks when trying to refill the buffers. Percent of buffer cache to scan when looking for free clusters. Time for a data buffer to age before being flushed. Time for a nondata buffer to age before being flushed. The time constant to use for load average. How low the load average can be before trimming back the number of buffers. uptime Print the current time, amount of time logged in, number of users currently logged in (which may include the same user multiple times), and system load averages. This output is also produced by the first line of the w command. useradd [options] [user] System administration command. Create new user accounts or update default account information. Unless invoked with the -D option, user must be given. useradd will create new entries in system files. Home directories and initial files may also be created as needed. Comment field. Home directory. The default is to use user as the directory name under the home directory specified with the -D option. Account expiration date. date is in the format MM/DD/YYYY. Two-digit year fields are also accepted. ID number. If a different default group has not been specified using the -D option, the default group is 1. Supplementary groups given by name or number in a comma-separated list with no whitespace. Copy default files to user's home directory. Meaningful only when used with the -m option. 
Default files are copied from /etc/skel/ unless an alternate dir is specified. Make user's home directory if it does not exist. The default is not to make the home directory. Override. Accept a nonunique uid with the -u option. (Probably a bad idea.) Login shell. Numerical user ID. The value must be unique unless the -o option is used. The default value is the smallest ID value greater than 99 and greater than every other uid. Set or display defaults. If options are specified, set them. If no options are specified, display current defaults. The options are: Home directory prefix to be used in creating home directories. If the -d option is not used when creating an account, the user name will be appended to dir. Expire date. Requires the use of shadow passwords. Number of days after a password expires to disable an account. Requires the use of shadow passwords. Initial group name or ID number. Default login shell. userdel [option] user System administration command. Delete all entries for user in system account files. Remove the home directory of user and any files contained in it. usermod [options] user System administration command. Modify user account information. Home directory. Account expiration date. date is in the format MM/DD/YYYY. Two-digit year fields are also accepted. Initial group name or ID number. Supplementary groups given by name or number in a comma-separated list with no whitespace. user will be removed from any groups to which they currently belong that are not included in groups. Login name. This cannot be changed while the user is logged in. Override. Accept a nonunique uid with the -u option. users [file] Print a space-separated list of each login session on the host. Note that this may include the same user multiple times. Consult file or, by default, /etc/utmp. usleep [microseconds] usleep [options] Sleep some number of microseconds (default is 1). Print help information and then exit. Print usage message and then exit.
uudecode [-o outfile] [file] Read a uuencoded file and re-create the original file with the permissions and name set in the file (see uuencode). The -o option specifies an alternate output file. uuencode [-m] [file] name Encode file (or standard input) so that it can be safely sent by mail; when decoded, the file will be named name. With the -m option, base64 encoding is used. vacation [options] [user] Automatically return a mail message to the sender announcing that you are on vacation. Use vacation with no options to initialize the vacation mechanism. The process performs several steps. Creates a .forward file in your home directory. The .forward file contains: \user, "|/usr/bin/vacation user" user is your login name. The action of this file is to actually deliver the mail to user (i.e., you) and to run the incoming mail through vacation. Creates the .vacation.pag and .vacation.dir files. These files keep track of who has sent you messages, so that they receive only one "I'm on vacation" message from you per week. Starts an editor to edit the contents of .vacation.msg. The contents of this file are mailed back to whomever sends you mail. Within its body, $subject is replaced with the contents of the incoming message's Subject line. Remove or rename the .forward file to disable vacation processing. The -a and -r options are used within a .forward file; see the example. Mail addressed to alias is actually mail for the user and should produce an automatic reply. Reinitialize the .vacation.pag and .vacation.dir files. Use this right before leaving for your next vacation. By default, no more than one message per week is sent to any sender. This option changes that interval. interval is a number with a trailing s, m, h, d, or w indicating seconds, minutes, hours, days, or weeks, respectively. If interval is infinite, only one reply is sent to each sender. Send no more than one reply every three weeks to any given sender: $ cd $ vacation -I $ cat .forward \jp, "|/usr/bin/vacation -r3w jp" $ cat .vacation.msg From: jp@wizard-corp.com (J.
Programmer, via the vacation program) Subject: I'm out of the office ... Hi. I'm off on a well-deserved vacation after finishing up whizprog 1.0. I will read and reply to your mail regarding "$SUBJECT" when I return. Have a nice day. vi [options] [files] A screen-oriented text editor based on ex. For more information on vi, see Chapter 11, "The vi Editor". vidmode [option] image [mode [offset]] System administration command. Sets. Prompt Extended VGA Normal VGA Same as entering 0 at the prompt Same as entering 1 at the prompt Same as entering 2 at the prompt Same as entering 3 at the prompt Same as entering n at the prompt w [options] [user] Print summaries of system usage, currently logged-in users, and what they are doing. w is essentially a combination of uptime, who, and ps -a. Display output for one user by specifying user. Toggle printing the from (remote hostname) field. Suppress headings and uptime information. Use the short format. Ignore the username while figuring out the current process and CPU times. List of users currently logged on. wall [file] System administration command. Write to all users. wall reads a message from the standard input until an end-of-file. It then sends this message to all users currently logged in, preceded by "Broadcast Message from..." If file is specified, read input from that, rather than from standard input. wc [options] [files] Print character, word, and line counts for each file. Print a total line for multiple files. If no files are given, read standard input. See other examples under ls and sort. Print character count only. Print line count only. Print word count only. 
Count the number of users logged in: who | wc -l Count the words in three essay files: wc -w essay.[123] Count lines in the file named by variable $file (don't display filename): wc -l < $file whatis keywords Search the short manual page descriptions in the whatis database for each keyword and print a one-line description to standard output for each match. Like apropos, except that it only searches for complete words. Equivalent to man .). Search only for binaries. Terminate the last directory list and signal the start of filenames; required when any of the -B, -M, or -S options are used. Search only for manual sections. Search only for sources.. Find all files in /usr/bin that are not documented in /usr/man/man1 but that have source in /usr/src: % cd /usr/bin % whereis -u -M /usr/man/man1 -S /usr/src -f * which [options] [--] [command] [...] List the full pathnames of the files that would be executed if the named commands had been run. which searches the user's $PATH environment variable. The C shell and tcsh have a built-in which command that has no options. To use the options, specify the full pathname (e.g., /usr/bin/which). Print all matches, not just the first. Read aliases from standard input and write matches to standard output. Useful for using an alias for which. Ignore --read-alias if present. Useful for finding normal binaries while using --read-alias in an alias for which. Skip directories that start with a dot. tty. $ which cc ls /usr/bin/cc ls: aliased to ls -sFC who [options] [file] who am i Show who is logged in to the system. With no options, list the names of users currently logged in, their terminal, the time they have been logged in, and the name of the host from which they have logged on. An optional system file (default is /etc/utmp) can be supplied to give additional information. Print the username of the invoking user. Include idle times. An idle time of . 
indicates activity within the last minute; one of old indicates no activity in more than a day. Attempt to include canonical hostnames via DNS. Same as who am i. "Quick." Display only the usernames and total number of users. Include user's message status: mesg y (write messages allowed) mesg n (write messages refused) Cannot find terminal device Print headings.. whoami Print current user ID. Equivalent to id -un. write user [tty] Initiate or respond to an interactive conversation with user. A write session is terminated with EOF. If the user is logged in to more than one terminal, specify a tty number. See also talk; use mesg to keep other users from writing to your terminal. xargs [options] [command]. Expect filenames to be terminated by NULL instead of whitespace. Do not treat quotes or backslashes specially. Set EOF to _ or, if specified, to string. Print a summary of the options to xargs and then exit. Edit all occurrences of {}, or string, to the names read in on standard input. Unquoted blanks are not considered argument terminators. Implies -x and -l 1. Allow no more than 1, or lines, nonblank input lines on the command line. Implies -x. Allow no more than args arguments on the command line. May be overridden by -s. Prompt for confirmation before running each command line. Implies -t. Allow no more than max processes to run at once. The default is 1. A maximum of 0 allows as many as possible to run at once. Do not run command if standard input contains only blanks. Allow no more than max characters per command line. Verbose mode. Print command line on standard error before executing. If the maximum size (as specified by -s) is exceeded, exit. Print the version number of xargs and then exit. grep for pattern in all files on the system: find / -print | xargs grep pattern > out & Run diff on file pairs (e.g., f1.a and f1.b, f2.a and f2.b ...): echo $* | xargs -n2 diff The previous line would be invoked as a shell script, specifying filenames as arguments. 
Display file, one word per line (same as deroff -w): cat file | xargs -n1 Move files in olddir to newdir, showing each command: ls olddir | xargs -i -t mv olddir/{} newdir/{} yacc [options] file Given a file containing context-free grammar, convert file into tables for subsequent parsing and send output to y.tab.c. This command name stands for yet another compiler-compiler. See also flex, bison, and lex & yacc by John Levine, Tony Mason, and Doug Brown. Prepend prefix, instead of y, to the output file. Generate y.tab.h, producing #define statements that relate yacc's token codes to the token names declared by the user. Exclude #line constructs from code produced in y.tab.c. (Use after debugging is complete.) Generate y.output, a file containing diagnostics and notes about the parsing tables. yes [strings] yes [option] Print the command-line arguments, separated by spaces and followed by a newline, until killed. If no arguments are given, print y followed by a newline until killed. Useful in scripts and in the background; its output can be piped to a program that issues prompts.. May be used to change the binding.. ypcat [options] mname NFS/NIS command. Print values in an NIS database specified by mname, which may be either a map name or a map nickname. Specify domain other than default domain. Display keys for maps in which values are null or key is not part of value. Do not translate mname to map name. Display map nickname table listing the nicknames (mnames) known and map name associated with each nickname. Do not require an mname argument. ypchfn [option] [user] NFS/NIS command. Change your information stored in /etc/passwd and displayed when you are fingered; distribute the change over NIS. Without options, ypchfn enters interactive mode and prompts for changes. To make a field blank, enter the keyword none. The superuser can change the information for any user. See also yppasswd and ypchsh. Behave like ypchfn (default). Behave like ypchsh. 
Behave like yppasswd. ypchsh [option] [user] NFS/NIS command. Change your login shell and distribute this information over NIS. Warn if shell does not exist in /etc/shells. The superuser can change the shell for any user. See also yppasswd and ypchfn. Behave like ypchfn. Behave like ypchsh (default). ypinit [options] NFS/NIS command. Build and install an NIS database on an NIS server. ypinit can be used to set up a master or a slave server or slave copier. Only a privileged user can run ypinit. Set up a slave copier database. master_name should be the hostname of an NIS server, either the master server for all the maps or a server on which the database is up-to-date and stable. Indicates that the local host is to be the NIS server. Set up a slave server database. master_name should be the hostname of an NIS server, either the master server for all the maps or a server on which the database is up-to-date and stable. ypmatch [options] key...mname NFS/NIS command. Print value of one or more keys from an NIS map specified by mname. mname may be either a map name or a map nickname. Before printing value of a key, print key itself, followed by a colon (:). Do not translate nickname to map name. Display map nickname table listing the nicknames (mnames) known, and map name associated with each nickname. Do not require an mname argument. yppasswd [option] [name] NFS/NIS command. Change login password in Network Information Service. Create or change your password, and distribute the new password over NIS. The superuser can change the password for any user. See also ypchfn and ypchsh. Behave like yppasswd (default). rpc.yppasswdd [option] NFS/NIS command. Server for modifying the NIS password file. yppasswdd handles password change requests from yppasswd. It changes a password entry only if the password represented by yppasswd matches the encrypted password of that entry and if the user ID and group ID match those in the server's /etc/passwd file. 
Then it updates /etc/passwd and the password maps on the local server. Support shadow password functions. yppoll [options] mapname NFS/NIS command. Determine version of NIS map at NIS server. yppoll asks a ypserv process for the order number and the hostname of the master NIS server for the named map. Ask the ypserv process at host about the map parameters. If host is not specified, the hostname of the NIS server for the local host (the one returned by ypwhich) is used. Use domain instead of the default domain. yppush [options] mapnames NFS/NIS command. Force propagation of changed NIS map. yppush copies a new version of an NIS map, mapname, from the master NIS server to the slave NIS servers. It first constructs a list of NIS server hosts by reading the NIS map ypservers with the -d option's domain argument. Keys within this map are the ASCII names of the machines on which the NIS servers run. A "transfer map" request is sent to the NIS server at each host, along with the information needed by the transfer agent to call back the yppush. When the attempt has been completed and the transfer agent has sent yppush a status message, the results may be printed to stdout. Normally invoked by /var/yp/Makefile. Specify a domain. Verbose -- print message when each server is called and for each response.. NIS service should go to the DNS for more host information. Indicates ypserv should not respond to outside requests. Location of NIS databases. Makefile that is responsible for creating NIS databases. ypset [options] server NFS/NIS command. Point ypbind at a particular server. ypset tells ypbind to get NIS services for the specified domain from the ypserv process running on server. server indicates the NIS server to bind to and can be specified as a name or an IP address. Set ypbind's binding on host, instead of locally. host can be specified as a name or an IP address. ypwhich [options] [host] NFS/NIS command. Return hostname of NIS server or map master. 
Without arguments, ypwhich cites the NIS server for the local machine. If host is specified, that machine is queried to find out which NIS master it is using. Find master NIS server for a map. No host can be specified with -m. map may be a map name or a nickname for a map. Inhibit nickname translation. Display map nickname table. Do not allow any other options. ypxfr [options] mapname NFS/NIS command. Transfer an NIS map from the server to the local host by making use of normal NIS services. ypxfr creates a temporary map in the directory /etc/yp/domain (where domain is the default domain for the local host), fills it by enumerating the map's entries, and fetches the map parameters and loads them. If run interactively, ypxfr writes its output to the terminal. However, if it is invoked without a controlling terminal, and if the log file /usr/admin/nislog exists, it appends all its output to that file. Preserve the resolver flag in the map during the transfer. This option is for use only by ypserv. When ypserv invokes ypxfr, it specifies that ypxfr should call back a
#include <ggi/internal/triple-int.h> int sign_3(unsigned x[3]); int bits_3(unsigned x[3]); int eq0_3(unsigned x[3]); int gt0_3(unsigned x[3]); int ge0_3(unsigned x[3]); int lt0_3(unsigned x[3]); int le0_3(unsigned x[3]); bits_3 counts the number of significant bits of x. I.e. leading zeros in a positive value and leading ones in a negative value are not counted. eq0_3, gt0_3, ge0_3, lt0_3 and le0_3 tests the relation between x and zero. eq0_3 tests if x is equal to zero, gt0_3 if x is greater than zero, ge0_3 if x is greater than or equal to zero, lt0_3 if x is less than zero and last but not least le0_3 tests if x is less than or equal to zero. bits_3 returns 0 for x equal to 0 or -1, 1 for x equal to 1 and -2, 2 for x equal to 2, 3, -3 and -4 etc. eq0_3, gt0_3, ge0_3, lt0_3 and le0_3 all returns non-zero if the relation is true, and zero otherwise. unsigned x[3]; assign_int_3(x, 5); ASSERT(sign_3(x) == 1); ASSERT(bits_3(x) == 3); ASSERT(!eq0_3(x)); ASSERT(gt0_3(x)); ASSERT(ge0_3(x)); ASSERT(!lt0_3(x)); ASSERT(!le0_3(x));
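The counting rule described for bits_3 can be mirrored with ordinary Python integers, which behave like arbitrary-precision two's-complement values. This is only an illustrative sketch of the documented semantics, not part of libggi:

```python
def bits(x: int) -> int:
    """Count the significant bits of x: leading zeros of a
    non-negative value and leading ones of a negative value are
    not counted, matching the bits_3 description above."""
    if x >= 0:
        return x.bit_length()
    # For a negative value, ~x turns the leading ones into leading
    # zeros, so its bit length is the number of significant bits.
    return (~x).bit_length()

# The return values quoted in the man page:
assert bits(0) == 0 and bits(-1) == 0
assert bits(1) == 1 and bits(-2) == 1
assert bits(2) == 2 and bits(3) == 2 and bits(-3) == 2 and bits(-4) == 2
print(bits(5))  # 3, matching ASSERT(bits_3(x) == 3) in the example
```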
Qt and OpenCV I'm attempting to use OpenCV 248 within my Qt Creator application and am hitting a linking snag. In the .pro file I have the following defined: @#--------------- specifying opencv libs ----------------- CV_LIB_PATH = ../../Libs/x64/opencv/build/x64/vc12/staticlib/ CV_VERSION = 248 CV_LIBS += opencv_core opencv_features2d opencv_highgui opencv_imgproc \ #-------------------------------------------------------- CV_DEBUG = debug { CV_DEBUG += d } cvpaths = cvlibs = for(libName, CV_LIBS){ cvpaths += -L$${CV_LIB_PATH}$${libName}$${CV_VERSION}$${CV_DEBUG} cvlibs += -l$${libName}$${CV_VERSION}$${CV_DEBUG} } INCLUDEPATH += ../../Includes ../../Includes/opencv/includes LIBS += $${cvpaths} $${cvlibs}@ and qmake seems to find everything just fine, no errors. Then in an unassuming object file: @#include "autoslicer.h" #include "opencv2/core/core.hpp" Autoslicer::Autoslicer() { cv::Mat2d a; } @ It finds the include just fine, so I know I've got my paths right. But when I go to build, I get a bunch of unresolved symbols for instantiating that Mat2d. What did I miss? Thanks! Hi, It looks like you have too many \ in your .pro file, which might be interfering. The first thing to do is remove them to be sure that your .pro file is clean. Good eye! Now I'm getting complaints about it not being able to open the file... so it would seem my paths WERE wrong. Thanks! So this is strange. Qt can't seem to locate these libraries. I even placed them in the same directory as my .pro and changed the CV_LIB_PATH variable to "".
Then I ran qmake with -d (verbose) and checked that the following line was correct: @LIBS := -Lopencv_core248d -Lopencv_features2d248d -Lopencv_highgui248d -Lopencv_imgproc248d -lopencv_core248d -lopencv_features2d248d -lopencv_highgui248d -lopencv_imgproc248d@ These .lib files sit in the same directory as the .pro, which baffles me as to why I still get @:-1: error: LNK1104: cannot open file 'opencv_core248d.lib'@ Don't you have all the opencv lib files in only one folder? @LIBS += -L$$CV_LIB_PATH@ should be the only path needed. Yeah, I figured out that I was confused about the meaning of the linker options -L and -l: -L points to a directory, and then you can use -l for the individual libs, something I didn't totally get. I ended up just expressing the libs without options and using their direct paths. So you got it working?
On 2/20/2012 12:00 PM, Left Right wrote: > I was actually under impression that Adobe will be committing code to > the compiler and / or framework developed here - that was at least my > understanding of the last whitepaper posted by Adobe, or am I > misunderstanding it? Maybe the word "support" isn't the best choice > though. I am under the same impression. Writing code, contributing code, and supporting code are radically different things. > In my experience, the mx_internal namespace was used often for the > purposes such as to cover the weak points in design. I think that the > problem of not documenting things could've been solved by @private Things marked as @private in ASDocs still show up in code hinting if appropriate. Stuff in mx_internal did not. Is there metadata to prevent stuff from showing up in code hinting? -- Jeffry Houser Technical Entrepreneur 203-379-0773 -- UI Flex Components: Tested! Supported! Ready! -- -- Part of the DotComIt Brain Trust
Introduction Python asyncio is a library for efficient single-threaded concurrent applications. Ever since I started to use it, it has been a mysterious black box to me, unlike Python multiprocessing and threading. Although I could still use asyncio for some simple high-level concurrent applications by taking advantage of open source asyncio libraries such as asyncssh and httpx, I had no idea how those asyncio libraries were implemented from scratch. To understand the asyncio mechanism, it is necessary to look at its low-level implementation details. The event loop is the core of Python asyncio: every coroutine, Future, or Task is scheduled as a callback and executed by an event loop. In this blog post, I would like to take a superficial look at the low-level implementation of the Python event loop. Event Loop Although asyncio makes heavy use of coroutines, Futures, and Tasks, it is not necessary to use them in order to run tasks on an event loop; the event loop ultimately runs scheduled callbacks. To see this, let's check the implementation of loop.run_forever from Python 3.8. def run_forever(self): """Run until stop() is called.""" self._check_closed() self._check_running() self._set_coroutine_origin_tracking(self._debug) self._thread_id = threading.get_ident() old_agen_hooks = sys.get_asyncgen_hooks() sys.set_asyncgen_hooks(firstiter=self._asyncgen_firstiter_hook, finalizer=self._asyncgen_finalizer_hook) try: events._set_running_loop(self) while True: self._run_once() if self._stopping: break finally: self._stopping = False self._thread_id = None events._set_running_loop(None) self._set_coroutine_origin_tracking(False) sys.set_asyncgen_hooks(*old_agen_hooks) Without going into the details, our gut feeling tells us that the key function call in run_forever is self._run_once(). The self._run_once function is described as follows. def _run_once(self): """Run one full iteration of the event loop.
This calls all currently ready callbacks, polls for I/O, schedules the resulting callbacks, and finally schedules 'call_later' callbacks. """ This information is somewhat reflected in the loop.run_forever documentation. An event loop must, of course, loop: it runs iteration by iteration, otherwise its name would not be event loop. But what exactly is an iteration of the event loop? Before checking the actual implementation, I imagined that an iteration of the event loop was a fixed, finite time frame in which callbacks could be executed. We could use a for/while loop inside the iteration and determine the end of the iteration by measuring the UNIX time at the end of each for/while pass. But this raises a problem: if there is a callback that takes a very long time to run and keeps blocking the thread, the fixed length of the time frame cannot be guaranteed. It turns out that the design of an actual event loop iteration in Python is somewhat similar but more delicate. All the callbacks scheduled for the current event loop iteration are placed in self._ready. Looking at the implementation superficially, we have a (heap/priority) queue of scheduled callbacks, some of which might have been delayed or canceled. Although loop.run_forever runs forever, it does have a timeout for each event loop iteration. The "call later" callbacks that are scheduled to run after the current UNIX time are not ready yet, so they will not be put into self._ready. heapq.heapify(new_scheduled) self._scheduled = new_scheduled self._timer_cancelled_count = 0 else: # Remove delayed calls that were cancelled from head of queue. while self._scheduled and self._scheduled[0]._cancelled: self._timer_cancelled_count -= 1 handle = heapq.heappop(self._scheduled) handle._scheduled = False timeout = None if self._ready or self._stopping: timeout = 0 elif self._scheduled: # Compute the desired timeout.
when = self._scheduled[0]._when timeout = min(max(0, when - self.time()), MAXIMUM_SELECT_TIMEOUT) event_list = self._selector.select(timeout) self._process_events(event_list) # Handle 'later' callbacks that are ready. end_time = self.time() + self._clock_resolution while self._scheduled: handle = self._scheduled[0] if handle._when >= end_time: break handle = heapq.heappop(self._scheduled) handle._scheduled = False self._ready.append(handle) Only the callbacks in self._ready are executed, in order. # This is the only place where callbacks are actually *called*. # All other places just add them to ready. # Note: We run all currently scheduled callbacks, but not any # callbacks scheduled by callbacks run this time around -- # they will be run the next time (after another I/O poll). # Use an idiom that is thread-safe without using locks. ntodo = len(self._ready) for i in range(ntodo): handle = self._ready.popleft() if handle._cancelled: continue if self._debug: try: self._current_handle = handle t0 = self.time() handle._run() dt = self.time() - t0 if dt >= self.slow_callback_duration: logger.warning('Executing %s took %.3f seconds', _format_handle(handle), dt) finally: self._current_handle = None else: handle._run() handle = None # Needed to break cycles when an exception occurs. This means that the number of callbacks executed in an event loop iteration is determined dynamically. An iteration has neither a fixed time frame nor a fixed number of callbacks to run; everything is dynamically scheduled, and thus very flexible. Notice that self._run_once is only called from the loop.run_forever method, nowhere else. Let's further check the more commonly used method loop.run_until_complete, which is called by asyncio.run under the hood. def run_until_complete(self, future): """Run until the Future is done. If the argument is a coroutine, it is wrapped in a Task.
WARNING: It would be disastrous to call run_until_complete() with the same coroutine twice -- it would wrap it in two different Tasks and that can't be good. Return the Future's result, or raise its exception. """ self._check_closed() self._check_running() new_task = not futures.isfuture(future) future = tasks.ensure_future(future, loop=self) if new_task: # An exception is raised if the future didn't complete, so there # is no need to log the "destroy pending task" message future._log_destroy_pending = False future.add_done_callback(_run_until_complete_cb) try: self.run_forever() except: if new_task and future.done() and not future.cancelled(): # The coroutine raised a BaseException. Consume the exception # to not log a warning, the caller doesn't have access to the # local task. future.exception() raise finally: future.remove_done_callback(_run_until_complete_cb) if not future.done(): raise RuntimeError('Event loop stopped before Future completed.') return future.result() The most prominent function call is, surprisingly, self.run_forever() again. But where is the Future scheduled as a callback in the event loop? tasks.ensure_future, which takes both a Future and a loop as inputs, schedules the callbacks: inside tasks.ensure_future, loop.create_task(coro_or_future) is called to set up the callback schedules in the event loop. Also note that an additional callback, _run_until_complete_cb, is added to the event loop so that self.run_forever() will not actually run forever. loop.create_task is a public interface, and its documentation can be found on the Python website. def create_task(self, coro, *, name=None): """Schedule a coroutine object. Return a task object.
""" self._check_closed() if self._task_factory is None: task = tasks.Task(coro, loop=self, name=name) if task._source_traceback: del task._source_traceback[-1] else: task = self._task_factory(self, coro) tasks._set_task_name(task, name) return task Conclusions Although we did not go through all the code about the event loop, we have become more knowledgeable about how a Python event loop executes call backs.
Hi good afternoon write a java program that Implement an array ADT with following operations: - a. Insert b. Delete c. Number of elements d. Display all elements e. Is Empty
http://roseindia.net/tutorialhelp/comment/4489
CC-MAIN-2016-18
refinedweb
1,902
75.91
Creating Spring Boot Application for WebLogic and Tomcat

We look at some of the most popular frameworks for spinning up applications and servers. Read on to get started!

In this post, we are going to create our first Spring Boot application. Since Tomcat comes embedded with Spring Boot applications, it is easy to deploy them to Tomcat containers; today, however, we will also deploy our first Spring Boot application to Oracle's WebLogic server. We will use Spring Boot 1.x here; I will create another post on how to deploy Spring Boot 2.x in WebLogic 12.1.2.1.

If you are new to WebLogic server, please download the latest version of WebLogic server first. I'm not going to discuss the installation process of WebLogic; please follow Oracle's documentation on how to install and configure WebLogic on your local system.

Let's start deploying our first Spring Boot application in WebLogic. But before that, let's create a sample application in Spring Boot. There are various ways of creating applications for Spring Boot; for this project, I will walk you through creating an application for Spring Boot, step by step. Please use the following details while creating your own project.

Download this project and import it into your IDE of choice; I'll be using the IntelliJ IDE for this tutorial. After importing this project into your IDE, you will see two classes under your root package:

- BootUserManagementApi.java (the main Spring Boot application)
- ServletInitializer.java

To deploy your app in WebLogic, you need to edit your ServletInitializer.java.
Update this class as follows:

public class ServletInitializer extends SpringBootServletInitializer
        implements WebApplicationInitializer {

    @Override
    protected SpringApplicationBuilder configure(SpringApplicationBuilder application) {
        return application.sources(BootUserMgmtApplication.class);
    }
}

To deploy your application in WebLogic, you need to implement WebApplicationInitializer in your ServletInitializer; this is the extra effort you need to perform. Remember, we only need to add this interface if you are going to deploy to WebLogic server. For Tomcat, it's not necessary.

Next, you need to add a weblogic.xml file to your project to tell the WebLogic server to load it while deploying your app. Add this file to the src/main/web-app/WEB-INF folder (you need to create this folder). The key point of the weblogic.xml is that we just need to include "org.slf4j.*" in it; the rest can be excluded.

Now, create one package under your main package, name it "controller", and add one class for the controller. I'm going to name this class HomeController, which is going to be a RestController for this project. Finally, your project structure should include the controller package and an application.properties file.

That's it. You can now run your application. Open the application's URL in a browser and you should see:

Hello World

Remember one more thing: if you are going to deploy your app in WebLogic, then you need to exclude Tomcat from your Gradle or Maven build; my build.gradle script does exactly that. If you want to deploy your app in Tomcat instead, use a build.gradle that does not exclude it.

That's all you need to do. If you have any questions or issues, please feel free to comment; I will help to fix your issues. The complete project is available on GitHub.

Published at DZone with permission of Anish Panthi.
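For reference, a weblogic.xml that matches the note above about preferring "org.slf4j.*" typically looks like the following. This is a sketch based on WebLogic's standard weblogic-web-app descriptor schema, not the author's exact file:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<wls:weblogic-web-app
        xmlns:wls="http://xmlns.oracle.com/weblogic/weblogic-web-app"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <wls:container-descriptor>
        <wls:prefer-application-packages>
            <!-- Let the application's own SLF4J win over WebLogic's copy -->
            <wls:package-name>org.slf4j.*</wls:package-name>
        </wls:prefer-application-packages>
    </wls:container-descriptor>
</wls:weblogic-web-app>
```

The prefer-application-packages element tells WebLogic's classloader to resolve the listed packages from the WAR instead of from the server's own libraries, which is exactly the "include org.slf4j.*, exclude the rest" behavior described above.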
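Likewise, a Spring Boot 1.x WAR build for the WebLogic case typically excludes the embedded Tomcat starter along these lines. This is a hedged sketch, not the author's exact script; the plugin id and dependency coordinates are assumptions:

```groovy
apply plugin: 'java'
apply plugin: 'war'
apply plugin: 'org.springframework.boot'

dependencies {
    compile('org.springframework.boot:spring-boot-starter-web') {
        // WebLogic provides the servlet container, so drop embedded Tomcat
        exclude module: 'spring-boot-starter-tomcat'
    }
    providedCompile 'javax.servlet:javax.servlet-api:3.1.0'
}
```

For the Tomcat deployment, the usual alternative is to keep the starter but mark it as provided, e.g. providedRuntime 'org.springframework.boot:spring-boot-starter-tomcat', so it is available for local runs without being bundled into the deployed WAR.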
https://dzone.com/articles/creating-spring-boot-application-for-weblogic-and
Common JavaScript idioms, an abridged whirlwind tour
Innovative Solutions and Technologies Center (ISTC)
Yerevan, Armenia
By Edgar Aroutiounian, Summer 2017

Modern Web Development uses the latest versions and features of JavaScript, which is officially known as EcmaScript.

As of June 2017, most browsers support all of EcmaScript except for the ES6 module system, i.e.:

import React, { Component } from 'react';

Technically that isn't even legal ES6, because the ES6 module loader specification does not allow the creation of 'naked' imports, that is, imports that don't specify a specific path or URI. However, in practice this doesn't really matter, because we use babel and webpack to compile our JavaScript code into code that can run on today's browsers.

Note: this lecture is going to move VERY fast and YOU MUST READ the links I post to MDN; please also look at a more comprehensive introduction to JavaScript (one that uses nodejs).

ES6 introduced the concept of classes, but these 'classes' are really just syntactic sugar on top of plain JavaScript functions.

class Person {
  constructor(age, name) {
    this.age = age;
    this.name = name;
  }

  speak() {
    console.log('My name is', this.name);
  }
}

const friend = new Person(27, 'Ruzanna');

This is the same as doing:

function Person(age, name) {
  this.age = age;
  this.name = name;
}

Person.prototype.speak = function() {
  console.log('My name is', this.name);
};

const acquaint = new Person(23, 'Tigran');

The benefit of using the class approach is that 1) calling without 'new' is a TypeError exception, and 2) the code looks more familiar to programmers coming from other languages.

JavaScript is a prototype based language; that means that every Object has a prototype. When we define classes, all the 'methods' defined in the class are functions that are created on the prototype, and the properties of the prototype (the things we access with the . operator) are available for any Object that is on that prototype chain.
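The prototype sharing described above can be verified directly. This is a small runnable sketch of my own (not from the slides):

```javascript
class Person {
  constructor(name) {
    this.name = name;
  }

  speak() {
    console.log('My name is', this.name);
  }
}

const a = new Person('Ruzanna');
const b = new Person('Tigran');

// The method lives on the prototype, not on the instances:
console.log(a.hasOwnProperty('speak'));                        // false
console.log(Object.getPrototypeOf(a).hasOwnProperty('speak')); // true

// ...and both instances share the very same function object:
console.log(a.speak === b.speak);                              // true
```

Both instances look up speak through the prototype chain, which is why only one copy of the function ever exists.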
Having a function defined on the prototype is better for memory usage than defining it on each object, because then we only make one copy of that function rather than one per instance of the object. This is a common pattern:

class F {
  constructor() {
    this.speak = () => console.log('Hello world');
  }

  alternative_speak() {
    console.log('Hello world');
  }
}

// Both a1 and a2 have methods .speak and .alternative_speak
const a1 = new F;
const a2 = new F;

In this example the interpreter created only one alternative_speak function (it is on the prototype of F), but it is forced to create two speak functions, because we have created the function (a fat arrow function) as a property on each new instance of F.

JavaScript classes do not autobind their context (the `this` object). Practically speaking, that means you will often see React code that binds functions:

class F {
  constructor() {
    this.handler = this.handler.bind(this);
  }

  handler(e) {
    console.log(e.target.value);
  }
}

In class we showed an example of the issues of not having the right context; the same issue comes up in React.

The static keyword also exists in JavaScript; it creates a property on the class object itself, not on the prototype. One library that uses this is react-navigation:

class HomeScreen extends Component {
  static navigationOptions = ({navigation}) => ({
    title: 'Home Screen',
  });

  // the render function
}

The navigationOptions property is on the HomeScreen object, not on the prototype of HomeScreen.

In the previous example, the line

static navigationOptions = ({navigation}) => ({

might have looked strange, specifically the '({navigation})' part. It is called object destructuring; it's a way to pull out values from Objects by key name directly.
Here are some examples:

const foreman = {
  name: 'Gor',
  age: 28,
  location: 'Yerevan',
  profession() {
    console.log(this.age, this.name);
  }
};

// We only pulled out name and age as variable names based on keys
const { name, age } = foreman;
console.log(name, age);

// not_found ends up undefined: there is no such key on foreman
const { not_found } = foreman;

The previous example,

static navigationOptions = ({navigation}) => ({

is actually an example of object destructuring in function parameters. We do this because oftentimes we pass Objects to functions, so we might as well be able to pick out the fields right from the beginning.

const person = {name: 'Lilit', profession: 'programmer'};

const say_profession = ({profession}) => {
  console.log('I am a ', profession);
};

say_profession(person);

Notice that we didn't have to give a name to all the fields in the object; we just pick the key names that we care about.

As of June 2017, class properties are at stage 2 in TC39. That means that they aren't an official part of the EcmaScript specification but most likely will be.
We can still use them with the help of tools like babel, which is used under the hood of create-react-app.

class F {
  state = { items: [] };

  open_dropdown = () => {
    console.log('Some logic here');
  };
}

And that is really the same as:

class F {
  constructor() {
    this.open_dropdown = () => {
      console.log('Some logic here');
    };
    this.state = { items: [] };
  }
}

Another feature you'll often see is something called Object spread. This is also not official EcmaScript yet; babel will compile it into Object.assign function calls.

const professional = {name: 'Artur', langs: ['C#', 'JavaScript', 'Armenian']};
const with_more = {...professional, background: ['WebDevelopment']};

We are making a new object called with_more that is a copy of professional, but with the extra key background.

EcmaScript 2015 (ES6) finally provided the JavaScript language with a module system.

// Assume this file is named funcs.js
export const f = () => console.log('Hello');

This says that this module will export something called f, which we can then import and use:

// Assume this file is called main.js
import { f } from './funcs';

f();

Notice that we did something that looks like object destructuring; it's almost that, but it's not. Also notice that there was no need to add the extension '.js'. ES6 modules are effectively singletons: importing a module multiple times in different parts of your application does not make new 'instances' of the module.
Sometimes, though, you only want to export one value from your module; in that case we use 'export default'.

// assume this is called header.js
// Notice that it was not necessary to give the class a name
export default class extends Component {
  render() {
    return <h2>Hello World</h2>;
  }
}

And we use it like so:

import call_it_anything_you_want from './header';

// Can also rename it
import * as Whatever from './header';

// renaming also works with named exports; this is usual in react-router
import { BrowserRouter as Router } from 'react-router-dom';

JavaScript coding focuses on asynchronous work: events. That means that we need a way to say what to do in the future, and the JavaScript language provides us with something called Promises. A Promise is a way to defer work to the future.

const promise_example = (success=true) => {
  return new Promise((accept, reject) => {
    const func = () => success
      ? accept('You waited 3 seconds, here is the data')
      // Always use an Error object; it preserves the stack
      : reject(new Error('failure'));
    setTimeout(func, 3 * 1000);
  });
};

promise_example().then(msg => console.log('Given', msg));

// Be sure to handle the errors with the .catch method
promise_example(false)
  .then(msg => console.log('Given', msg))
  .catch(error_handle => console.log(error_handle.message));
Any function wrapped with async returns a Promise const load_data = async path => { // get a request object, fetch by default does a HTTP GET request const req = await fetch(path); // Get the HTTP body as JSON, no need to use JSON.parse const results = await req.json(); return results; } fetch is a function provided by the DOM API, (also implemented in React-Native), that gives us the ability to download new data. fetch returns a promise so we can use .then on the result, or we can use async/await which will 'unwrap' the promise for us async, await make asynchronous code LOOK as if it was synchronous, it turns the .catch error from a Promise into an exception const promise_example = (success=true) => { return new Promise((accept, reject) => { const func = () => success ? accept('You waited 3 seconds, here is the data') : reject(new Error('failure')); setTimeout(func, 3 * 1000); }) } (async () => { try { await promise_example(false); } catch (e) { console.log(e.message); } })() Notice the sneaky way to do a top level async/await call since async/await can only be used in functions
https://yerevancoder.com/frontend-bootcamp-english/lecture-1/index.html
On Fri, 2007-08-10 at 23:09 +0800, David Woodhouse wrote:
> On Sun, 2006-09-24 at 04:00 +0000, Linux Kernel Mailing List wrote:
> > --- a/include/scsi/scsi.h
> > +++ b/include/scsi/scsi.h
> > @@ -429,4 +429,10 @@ #define SCSI_IOCTL_GET_BUS_NUMBER 0x5386
> >  /* Used to obtain the PCI location of a device */
> >  #define SCSI_IOCTL_GET_PCI 0x5387
> >
> > +/* Pull a u32 out of a SCSI message (using BE SCSI conventions) */
> > +static inline u32 scsi_to_u32(u8 *ptr)
> > +{
> > +	return (ptr[0]<<24) + (ptr[1]<<16) + (ptr[2]<<8) + ptr[3];
> > +}
> > +
> >  #endif /* _SCSI_SCSI_H */
>
> Please explain why it's necessary to export this to userspace.

Er it's not ... but then it's not necessary to export this entire file, either.

> The files in /usr/include/scsi are actually shipped by glibc, and most
> distributions use glibc's version instead of the one from the kernel --
> so this additional userspace interface is automatically incompatible
> with most people's installations.
>
> It would perhaps make sense to stop glibc providing these files, and let
> distributions use the version from the kernel -- but that's a separate
> issue. And still doesn't seem to justify the addition of the above
> function.

From the SCSI point of view, the function definitely belongs in that file because it's an accessor to facilitate the processing of commands and their replies, which is what that file contains ... in fact it contains a lot of the internal mechanics of the SCSI layer that the user shouldn't necessarily be seeing.

James

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in the body of a message to majordomo@vger.kernel.org
http://lkml.org/lkml/2007/8/10/227
I'm trying to do problem #22 in Python and my answer is off. I've gone through the list and tried certain names and they've been calculated correctly (I think). Can anyone help?

def letter_sum(word):
    total = 0
    for letter in word:
        total += ord(letter) - ord('A') + 1
    return total

def namescore(name, count):
    name_sum = letter_sum(name)
    return count * name_sum

def main():
    names = []
    f = open('p022_names.txt', 'r')
    string = f.read()
    f.close()
    total = 0
    names = sorted(string.replace('"', '').split(','))
    for i in range(len(names)):
        total += namescore(names[i], i)
    print total

Your indexing is off by one: 'COLIN' is at index 937 with 0-based indexing, but the assignment uses 1-based indexing. You can fix the issue by changing your code a bit:

total += namescore(names[i], i + 1)
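A more idiomatic way (in Python 3) to get 1-based positions is enumerate with its start parameter. This is a self-contained sketch of mine using a hard-coded name list instead of p022_names.txt:

```python
def letter_sum(word):
    # A = 1, B = 2, ..., Z = 26
    return sum(ord(c) - ord('A') + 1 for c in word)

def total_score(names):
    # enumerate(..., start=1) yields the 1-based position directly
    return sum(i * letter_sum(name)
               for i, name in enumerate(sorted(names), start=1))

print(letter_sum('COLIN'))          # 53, as in the classic worked example
print(total_score(['BOB', 'ANN']))  # ANN -> 29 * 1, BOB -> 19 * 2, total 67
```

Using enumerate removes the off-by-one entirely, since the loop never deals with raw list indices.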
https://codedump.io/share/W4FkzscZgmDO/1/project-euler-22---why-is-my-total-off-by-324536
Creating an HTML5 phone, tablet & PC game using the Universal Apps project for Windows Stores

First thing to check: your HTML5 game has to run fine inside IE11, as it's the engine being used inside Universal HTML5 Apps on Windows Phone 8.1/Windows Store. You don't have IE11 installed yet? Test it via the various resources we've made available (Browser Stack and free VMs to download).

Step 1: create a new Universal App

Launch Visual Studio 2013 and create a new project. Choose "JavaScript" -> "Store Apps" -> "Universal Apps" -> "Blank App (Universal Apps)" and name it "UniversalHTML5Platformer". You'll obtain this tree inside the "Solution Explorer":

As I was explaining at the beginning of the article, we have 3 types of projects: the 2 specific ones for WP8.1 / Windows 8.1 and the magic one containing the code/html/css shared as-is between both platforms. By default, you only have a single "default.js" file shared. In this first tutorial, we're going to fall into the optimal case: everything is going to be shared between both platforms: same HTML pages, same CSS, same JavaScript code.

Step 2: clean the projects & copy the code into the Shared one

Remove the "default.html" file in both ".Windows" & ".WindowsPhone" projects and remove the "js" folder from the ".Shared" project. Download and unzip the source code of the game: PlatformerTouch.zip. It's an exact copy of the web version except I've renamed "index.html" to "default.html". Copy/paste everything into the ".Shared" project:

And you know what? You're already done! Press F5 and it will launch the Windows Store version:

You can move the player with the left/right arrow keys and jump by pressing W. If you've got a touch device, control the player with your left thumb and jump by tapping on the right of the screen.

Right-click the ".WindowsPhone" project and select "Set as StartUp Project".
Select the "Emulator 8.1 WVGA 4 inch 512MB" for instance and press F5:

You can move the player only using touch this time, of course. But have you realized? The exact same source code is being used for the Web, Windows Apps 8.1 & Windows Phone Apps 8.1 versions!

Step 3: polishing before submitting to the stores

Technically, we have done most of the job. But currently, we don't have any cool splash screens, for instance. To definitely finish the universal game, you need to design the visual assets for each platform and set them in both "package.appxmanifest" files by double-clicking on them. In our case, let's review another setting: let's force the orientation to landscape only, as our game offers a better experience in this mode. Double-click the manifest file of the ".Windows" project and force the only supported rotation:

Do the very same operation on the ".WindowsPhone" project. You can set the proper splash screen assets and various tile sizes in the "Visual Assets" tab.

Live demo in a 3 min video! Still not convinced this is that simple? Let me demonstrate it through a 3 min video going from scratch, downloading of the initial solution included. Or: how to build a game for both Windows Stores in 180 s by simply copy/pasting the web version.

Step 1: create the new project, clean it and copy/paste the code

Launch Visual Studio 2013 and create a new project. Choose "JavaScript" -> "Store Apps" -> "Universal Apps" -> "Blank App (Universal Apps)" and name it "UniversalWebGLGame". Unzip the WebGLStoreGame.zip and copy the following files & folder into the ".Shared" project: "css, Espilit, html, js & default.html". Copy "cpp & images" into the ".Windows" one. "cpp" contains the Xbox C++ wrapper, but we don't need support for the Xbox controller on Windows Phone; it will then live in the ".Windows" specific project only. Remove both "default.html" and "css/default.css" files from the ".Windows" & ".WindowsPhone" projects.
Open the "package.appxmanifest" file by double-clicking on it and set various "Visual Assets" properties: set the "Background color" property to "#2A3548" and fix "Square 150x150 Logo", "Wide 310x150 Logo", "Square 30x30 Logo" and "Splash Screen" using the assets from the "images" folder copy/pasted from the downloaded solution. Press "F5" and the Windows 8.1 version will now run just like the original downloaded solution. That's simply because the files we've put in the ".Shared" project come from the original Windows 8.1 project.

Step 2: make it work on Windows Phone by referencing WinJS 2.1

To make the same project work fine on Windows Phone 8.1, we need to reference the WinJS 2.1 build for Windows Phone in the shared "default.html" file. For that, add these lines after the references to WinJS 2.0:

<link href="//Microsoft.Phone.WinJS.2.1/css/ui-themed.css" rel="stylesheet" />
<script src="//Microsoft.Phone.WinJS.2.1/js/base.js"></script>
<script src="//Microsoft.Phone.WinJS.2.1/js/ui.js"></script>

Note the special "//" path. In conclusion, we're referencing both WinJS versions:

<!-- WinJS references -->
<link href="//Microsoft.WinJS.2.0/css/ui-dark.css" rel="stylesheet" />
<script src="//Microsoft.WinJS.2.0/js/base.js"></script>
<script src="//Microsoft.WinJS.2.0/js/ui.js"></script>
<link href="//Microsoft.Phone.WinJS.2.1/css/ui-themed.css" rel="stylesheet" />
<script src="//Microsoft.Phone.WinJS.2.1/js/base.js"></script>
<script src="//Microsoft.Phone.WinJS.2.1/js/ui.js"></script>

Thanks to the magic of the web, the resolution will be done dynamically at runtime. If this code is executed on Windows 8.1, the WinJS 2.0 scripts & CSS files will be loaded. If executed on Windows Phone 8.1, WinJS 2.1 will be loaded instead. Using this approach, you will have a unique HTML file for both platforms. However, you will have some "file not found" errors logged in the console during debugging, as WinJS 2.1 won't be found on Windows 8.1 and vice versa.
This won't block the execution pipeline, but don't be surprised to see such errors raised. For instance, while running the Windows 8.1 version in debug mode, the WinJS 2.1 framework for Windows Phone is not found:

You can now set the Windows Phone project as the startup project and press F5. Our Babylon.js game will be loaded as expected!

Step 3: differentiate each version

At this stage, we haven't considered the Windows Phone experience a lot, as we've started by copy/pasting the Windows version. Still, by referencing the CSS theme of Windows Phone 8.1, you'll notice a first difference during the loading phase. On Windows 8.1, here is the loading screen we've built using WinJS:

On Windows Phone 8.1, due to a different style sheet being loaded, we get this instead:

The progress bar matches the UX defined on both platforms. On Windows 8.1, in the downloaded solution, we've integrated the charms via the settings button to let the user switch between various input methods. We've used the settings flyout for that. This paradigm doesn't exist on Windows Phone. As a consequence, some of our code that lives in "default.js" and "main.js" doesn't make sense on Windows Phone. It still works, as JavaScript is a dynamic language; you'll just see some undefined parts while debugging it on Windows Phone. This is not a very clean way to proceed, but it works!

In conclusion, you need to choose your strategy to share your code between the platforms. We've got 3 options:

1 – Separate the various files as much as possible into the specific projects and really put only the shared logic into the ".Shared" project. A great example is the default "Hub/Pivot App (Universal Apps)" new project template. But it really highlights the HTML/CSS part of the problem.

2 – Start by building and/or reusing the Windows 8.1 project in the ".Shared" project and override some of the files in the ".WindowsPhone" version.
For instance, if I create a "/js/main.js" JavaScript file inside my ".WindowsPhone" project, it will be called instead of the current "/js/main.js" living in the ".Shared" project. It's a bit crappy, but again, it works.

3 – Try to use an #IFDEF-like approach for the JavaScript code.

I wanted to explore the third option. Unfortunately, there is no #IFDEF option in JavaScript, which sounds logical, as you're not supposed to target a specific platform when writing plain JavaScript; it's therefore not part of the language's syntax. To have a similar approach, I'm going to use the dynamic nature of JS. The namespaces are not strictly the same between both platforms. For instance, on Windows Phone, the namespace "Windows.Phone" is defined, and logically it is not on Windows. This sounds like a possible workaround for my IFDEF, then.

Let's imagine, for instance, that on Windows Phone only, you'd like to expose double virtual touch joysticks to control the camera instead of the default camera loaded. Change the loading part of the startGame() function to this code:

BABYLON.SceneLoader.Load("Espilit/", "Espilit.babylon", engine, function (newScene) {
    scene = newScene;
    // Wait for textures and shaders to be ready
    newScene.executeWhenReady(function () {
        if (Windows.Phone) {
            var VJC = new BABYLON.VirtualJoysticksCamera("VJC",
                newScene.activeCamera.position, newScene);
            VJC.rotation = newScene.activeCamera.rotation;
            VJC.applyGravity = newScene.activeCamera.applyGravity;
            VJC.checkCollisions = newScene.activeCamera.checkCollisions;
            VJC.ellipsoid = newScene.activeCamera.ellipsoid;
            newScene.activeCamera = VJC;
        }

        // Attach camera to canvas inputs
        newScene.activeCamera.attachControl(canvas);
        WinJS.Utilities.addClass(workInProgressElement, "hidden");

        // Once the scene is loaded, just register a render loop to render it
        engine.runRenderLoop(function () {
            newScene.render();
        });
    });
}, function (progress) {
    // To do: give progress feedback to user
});

Follow the author @davrous
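The namespace-sniffing trick from option 3 can be factored into a small guard. This sketch is my own (not from the article); it runs in any JavaScript host and, outside WinRT where no global Windows object exists, simply takes the default path:

```javascript
// Feature-detect the WinRT Windows Phone namespace instead of an #IFDEF.
var isWindowsPhone =
    typeof Windows !== 'undefined' &&
    typeof Windows.Phone !== 'undefined';

console.log(isWindowsPhone ? 'phone-specific path' : 'default path');
```

Checking with typeof first avoids a ReferenceError when the Windows global is absent, which is exactly what makes this safe to ship in a shared file.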
https://docs.microsoft.com/en-us/archive/blogs/davrous/creating-a-html5-phone-tablet-pc-game-using-the-universal-apps-project-for-windows-stores
Choose the ASP.NET MVC Web Application template to create a new web application using this template. The first project is a web project where you'll implement your application. The second is a testing project that you can use to write unit tests against.

First, create some ASPX pages in the Views folder. Note that VS has already created some files. When a request comes in, the URL is sent to IIS and then to the ASP.NET runtime, where it initiates a controller class based on the URL, using the URL routes; the controller class then loads the data from the model, and this data is finally rendered by the view.

Open the Global.asax.cs file and examine the code of its Global class. The RegisterRoutes() method contains the URL mapping routes. Initially we have only the default rule set:

routes.MapRoute(
    "Default",                                              // Route name
    "{controller}/{action}/{id}",                           // URL with parameters
    new { controller = "Home", action = "Index", id = "" }  // Parameter defaults
);

The MapRoute() method, which handles URL routing and mapping, takes three arguments:

- Name of the route (string)
- URL format (string)
- Default settings (object type)

In our case, we named the first route "Default", set the URL format to "{controller}/{action}/{id}", and specified the default controller, action and id values.

MVC URLs do not point to physical files, so in the code-behind of our Default.aspx page we have a simple redirect:

public void Page_Load(object sender, System.EventArgs e)
{
    Response.Redirect("~/Home");
}

So the runtime will first map the URL against the pattern controller/action/id. /Home therefore corresponds to a controller named Home, and because we have not specified any action or ID, it takes the default values we specified in the RegisterRoutes() method in Global.asax.cs. So the default action was Index and the default parameter was an empty string.
The runtime then initializes the HomeController class and fires the default Index action.

Now let's take a 5-Tier solution and change the GUI layer to make it follow the MVC design using the ASP.NET MVC framework. Open the 5-Tier solution and delete the ASP.NET web project from it. The solution will then only contain the 5Tier.BL, 5Tier.DAL and 5Tier.Common projects. Right-click the solution in VS, select Add New Project, and then select ASP.NET MVC Web Application from the dialog box. Name this new web project Test.MVC. This web project will be the new MVC-based UI tier of our OMS application.

The Customer.cs and CustomerCollection.cs class files in the business tier (5Tier.Business class library) will be the Model in our MVC application. To show a list of customers, the CustomerCollection class simply calls the FindCustomer() method in CustomerDAL.cs. So we can use an n-tier architecture in an MVC application; this shows that MVC and n-tier are not mutually exclusive options while considering the application architecture of your web application. Both actually complement each other.
It is a very good choice for creating a unit-testable and search-engine-friendly web application, and it makes our web UI much cleaner by having a clear separation between the UI and code logic.

Further resources on this subject:

- Working with Master Pages in ASP.NET MVC 2 [article]
- ASP.NET MVC 2: Validating MVC [article]
- ER Diagrams, Domain Model, and N-Layer Architecture with ASP.NET 3.5 (part 1)
- ER Diagrams, Domain Model, and N-Layer Architecture with ASP.NET 3.5 (part 2)
In this chapter, we'll develop a small, but complete, Haskell library. Our library will manipulate and serialize data in a popular form known as JSON.

The JSON (JavaScript Object Notation) language is a small, simple representation for storing and transmitting structured data, for example over a network connection. It is most commonly used to transfer data from a web service to a browser-based JavaScript application. The JSON format is described at www.json.org, and in greater detail by RFC 4627.

JSON supports four basic types of value: strings, numbers, booleans, and a special value named null.

"a string"
12345
true
null

The language provides two compound types: an array is an ordered sequence of values, and an object is an unordered collection of name/value pairs. The names in an object are always strings; the values in an object or array can be of any type.

[-3.14, true, null, "a string"]
{"numbers": [1,2,3,4,5], "useful": false}

To work with JSON data in Haskell, we use an algebraic data type to represent the range of possible JSON types.

-- file: ch05/SimpleJSON.hs
data JValue = JString String
            | JNumber Double
            | JBool Bool
            | JNull
            | JObject [(String, JValue)]
            | JArray [JValue]
              deriving (Eq, Ord, Show)

For each JSON type, we supply a distinct value constructor. Some of these constructors have parameters: if we want to construct a JSON string, we must provide a String value as an argument to the JString constructor.

To start experimenting with this code, save the file SimpleJSON.hs in your editor, switch to a ghci window, and load the file into ghci.

ghci> :load SimpleJSON
[1 of 1] Compiling SimpleJSON      ( SimpleJSON.hs, interpreted )
Ok, modules loaded: SimpleJSON.
ghci> JString "foo"
JString "foo"
ghci> JNumber 2.7
JNumber 2.7
ghci> :type JBool True
JBool True :: JValue

We can see how to use a constructor to take a normal Haskell value and turn it into a JValue. To do the reverse, we use pattern matching.
Here's a function that we can add to SimpleJSON.hs that will extract a string from a JSON value for us. If the JSON value actually contains a string, our function will wrap the string with the Just constructor. Otherwise, it will return Nothing.

-- file: ch05/SimpleJSON.hs
getString :: JValue -> Maybe String
getString (JString s) = Just s
getString _           = Nothing

When we save the modified source file, we can reload it in ghci and try the new definition. (The :reload command remembers the last source file we loaded, so we do not need to name it explicitly.)

ghci> :reload
Ok, modules loaded: SimpleJSON.
ghci> getString (JString "hello")
Just "hello"
ghci> getString (JNumber 3)
Nothing

A few more accessor functions, and we've got a small body of code to work with.

-- file: ch05/SimpleJSON.hs
getInt (JNumber n) = Just (truncate n)
getInt _           = Nothing

getDouble (JNumber n) = Just n
getDouble _           = Nothing

getBool (JBool b) = Just b
getBool _         = Nothing

getObject (JObject o) = Just o
getObject _           = Nothing

getArray (JArray a) = Just a
getArray _          = Nothing

isNull v            = v == JNull

The truncate function turns a floating point or rational number into an integer by dropping the digits after the decimal point.

ghci> truncate 5.8
5
ghci> :module +Data.Ratio
ghci> truncate (22 % 7)
3

A Haskell source file contains a definition of a single module. A module lets us determine which names inside the module are accessible from other modules. A source file begins with a module declaration. This must precede all other definitions in the source file.

-- file: ch05/SimpleJSON.hs
module SimpleJSON
    (
      JValue(..)
    , getString
    , getInt
    , getDouble
    , getBool
    , getObject
    , getArray
    , isNull
    ) where

The word module is reserved. It is followed by the name of the module, which must begin with a capital letter. A source file must have the same base name (the component before the suffix) as the name of the module it contains. This is why our file SimpleJSON.hs contains a module named SimpleJSON.

Following the module name is a list of exports, enclosed in parentheses. The where keyword indicates that the body of the module follows. The list of exports indicates which names in this module are visible to other modules. This lets us keep private code hidden from the outside world.
The special notation (..) that follows the name JValue indicates that we are exporting both the type and all of its constructors.

It might seem strange that we can export a type's name (i.e. its type constructor), but not its value constructors. The ability to do this is important: it lets us hide the details of a type from its users, making the type abstract. If we cannot see a type's value constructors, we cannot pattern match against a value of that type, nor can we construct a new value of that type. Later in this chapter, we'll discuss some situations in which we might want to make a type abstract.

If we omit the exports (and the parentheses that enclose them) from a module declaration, every name in the module will be exported.

-- file: ch05/Exporting.hs
module ExportEverything where

To export no names at all (which is rarely useful), we write an empty export list using a pair of parentheses.

-- file: ch05/Exporting.hs
module ExportNothing () where

In addition to the ghci interpreter, the GHC distribution includes a compiler, ghc, that generates native code. If you are already familiar with a command line compiler such as gcc or cl (the C++ compiler component of Microsoft's Visual Studio), you'll immediately be at home with ghc.

To compile a source file, we first open a terminal or command prompt window, then invoke ghc with the name of the source file to compile.

ghc -c SimpleJSON.hs

The -c option tells ghc to only generate object code. If we were to omit the -c option, the compiler would attempt to generate a complete executable. That would fail, because we haven't written a main function, which GHC calls to start the execution of a standalone program.

After ghc completes, if we list the contents of the directory, it should contain two new files: SimpleJSON.hi and SimpleJSON.o. The former is an interface file, in which ghc stores information about the names exported from our module in machine-readable form.
The latter is an object file, which contains the generated machine code.

Now that we've successfully compiled our minimal library, we'll write a tiny program to exercise it. Create the following file in your text editor, and save it as Main.hs.

-- file: ch05/Main.hs
module Main () where

import SimpleJSON

main = print (JObject [("foo", JNumber 1), ("bar", JBool False)])

Notice the import directive that follows the module declaration. This indicates that we want to take all of the names that are exported from the SimpleJSON module, and make them available in our module. Any import directives must appear in a group at the beginning of a module. They must appear after the module declaration, but before all other code. We cannot, for example, scatter them throughout a source file.

Our choice of naming for the source file and function is deliberate. To create an executable, ghc expects a module named Main that contains a function named main. The main function is the one that will be called when we run the program once we've built it.

ghc -o simple Main.hs SimpleJSON.o

This time around, we're omitting the -c option when we invoke ghc, so it will attempt to generate an executable. The process of generating an executable is called linking. As our command line suggests, ghc is perfectly able to both compile source files and link an executable in a single invocation.

We pass ghc a new option, -o, which takes one argument: this is the name of the executable that ghc should create. Here, we've decided to name the program simple. On Windows, the program will have the suffix .exe, but on Unix variants there will not be a suffix.

Finally, we supply the name of our new source file, Main.hs, and the object file we already compiled, SimpleJSON.o. We must explicitly list every one of our files that contains code that should end up in the executable.
If we forget a source or object file, ghc will complain about undefined symbols, which indicates that some of the definitions that it needs are not provided in the files we have supplied.

When compiling, we can pass ghc any mixture of source and object files. If ghc notices that it has already compiled a source file into an object file, it will only recompile the source file if we've modified it. Once ghc has finished compiling and linking our simple program, we can run it from the command line.

Now that we have a Haskell representation for JSON's types, we'd like to be able to take Haskell values and render them as JSON data. There are a few ways we could go about this. Perhaps the most direct would be to write a rendering function that prints a value in JSON form. Once we're done, we'll explore some more interesting approaches.

-- file: ch05/PutJSON.hs
module PutJSON where

import Data.List (intercalate)
import SimpleJSON

renderJValue :: JValue -> String

renderJValue (JString s)   = show s
renderJValue (JNumber n)   = show n
renderJValue (JBool True)  = "true"
renderJValue (JBool False) = "false"
renderJValue JNull         = "null"

renderJValue (JObject o) = "{" ++ pairs o ++ "}"
  where pairs [] = ""
        pairs ps = intercalate ", " (map renderPair ps)
        renderPair (k,v) = show k ++ ": " ++ renderJValue v

renderJValue (JArray a) = "[" ++ values a ++ "]"
  where values [] = ""
        values vs = intercalate ", " (map renderJValue vs)

Good Haskell style involves separating pure code from code that performs I/O. Our renderJValue function has no interaction with the outside world, but we still need to be able to print a JValue.

-- file: ch05/PutJSON.hs
putJValue :: JValue -> IO ()
putJValue v = putStrLn (renderJValue v)

Printing a JSON value is now easy. Why should we separate the rendering code from the code that actually prints a value? This gives us flexibility.
For instance, if we wanted to compress the data before writing it out, and we intermixed rendering with printing, it would be much more difficult to adapt our code to that change in circumstances.

This idea of separating pure from impure code is powerful, and pervasive in Haskell code. Several Haskell compression libraries exist, all of which have simple interfaces: a compression function accepts an uncompressed string and returns a compressed string. We can use function composition to render JSON data to a string, then compress to another string, postponing any decision on how to actually display or transmit the data.

A Haskell compiler's ability to infer types is powerful and valuable. Early on, you'll probably be faced by a strong temptation to take advantage of type inference by omitting as many type declarations as possible: let's simply make the compiler figure the whole lot out!

Skimping on explicit type information has a downside, one that disproportionately affects new Haskell programmers. As a new Haskell programmer, we're extremely likely to write code that will fail to compile due to straightforward type errors. When we omit explicit type information, we force the compiler to figure out our intentions. It will infer types that are logical and consistent, but perhaps not at all what we meant. If we and the compiler unknowingly disagree about what is going on, it will naturally take us longer to find the source of our problem.

Suppose, for instance, that we write a function that we believe returns a String, but we don't write a type signature for it.

-- file: ch05/Trouble.hs
import Data.Char (toUpper)

upcaseFirst (c:cs) = toUpper c -- forgot ":cs" here

Here, we want to upper-case the first character of a word, but we've forgotten to append the rest of the word onto the result. We think our function's type is String -> String, but the compiler will correctly infer its type as String -> Char. Let's say we then try to use this function somewhere else.
-- file: ch05/Trouble.hs
camelCase :: String -> String
camelCase xs = concat (map upcaseFirst (words xs))

When we try to compile this code or load it into ghci, we won't necessarily get an obvious error message.

ghci> :load Trouble
[1 of 1] Compiling Main             ( Trouble.hs, interpreted )

Trouble.hs:9:27:
    Couldn't match expected type `[Char]' against inferred type `Char'
      Expected type: [Char] -> [Char]
      Inferred type: [Char] -> Char
    In the first argument of `map', namely `upcaseFirst'
    In the first argument of `concat', namely
        `(map upcaseFirst (words xs))'
Failed, modules loaded: none.

Notice that the error is reported where we use the upcaseFirst function. If we're erroneously convinced that our definition and type for upcaseFirst are correct, we may end up staring at the wrong piece of code for quite a while, until enlightenment strikes.

Every time we write a type signature, we remove a degree of freedom from the type inference engine. This reduces the likelihood of divergence between our understanding of our code and the compiler's. Type declarations also act as shorthand for ourselves as readers of our own code, making it easier for us to develop a sense of what must be going on.

This is not to say that we need to pepper every tiny fragment of code with a type declaration. It is, however, usually good form to add a signature to every top-level definition in our code. It's best to start out fairly aggressive with explicit type signatures, and slowly ease back as your mental model of how type checking works becomes more accurate.

Our JSON rendering code is narrowly tailored to the exact needs of our data types and the JSON formatting conventions. The output it produces can be unfriendly to human eyes. We will now look at rendering as a more generic task: how can we build a library that is useful for rendering data in a variety of situations? We would like to produce output that is suitable either for human consumption (e.g. for debugging) or for machine processing.
Libraries that perform this job are referred to as pretty printers. There already exist several Haskell pretty printing libraries. We are creating one of our own not to replace them, but for the many useful insights we will gain into both library design and functional programming techniques. We will call our generic pretty printing module Prettify, so our code will go into a source file named Prettify.hs.

To make sure that Prettify meets practical needs, we write a new JSON renderer that uses the Prettify API. After we're done, we'll go back and fill in the details of the Prettify module.

Instead of rendering straight to a string, our Prettify module will use an abstract type that we'll call Doc. By basing our generic rendering library on an abstract type, we can choose an implementation that is flexible and efficient. If we decide to change the underlying code, our users will not be able to tell.

We will name our new JSON rendering module PrettyJSON.hs, and retain the name renderJValue for the rendering function. Rendering one of the basic JSON values is straightforward.

-- file: ch05/PrettyJSON.hs
renderJValue :: JValue -> Doc
renderJValue (JBool True)  = text "true"
renderJValue (JBool False) = text "false"
renderJValue JNull         = text "null"
renderJValue (JNumber num) = double num
renderJValue (JString str) = string str

The text, double, and string functions will be provided by our Prettify module.

Early on, as we come to grips with Haskell development, we have so many new, unfamiliar concepts to keep track of at one time that it can be a challenge to write code that compiles at all. As we write our first substantial body of code, it's a huge help to pause every few minutes and try to compile what we've produced so far. Because Haskell is so strongly typed, if our code compiles cleanly, we're assuring ourselves that we're not wandering too far off into the programming weeds.
One useful technique for quickly developing the skeleton of a program is to write placeholder, or stub versions of types and functions. For instance, we mentioned above that our string, text and double functions would be provided by our Prettify module. If we don't provide definitions for those functions or the Doc type, our attempts to "compile early, compile often" with our JSON renderer will fail, as the compiler won't know anything about those functions. To avoid this problem, we write stub code that doesn't do anything.

-- file: ch05/PrettyStub.hs
import SimpleJSON

data Doc = ToBeDefined
           deriving (Show)

string :: String -> Doc
string str = undefined

text :: String -> Doc
text str = undefined

double :: Double -> Doc
double num = undefined

The special value undefined has the type a, so it always typechecks, no matter where we use it. If we attempt to evaluate it, it will cause our program to crash.

ghci> :type undefined
undefined :: a
ghci> undefined
*** Exception: Prelude.undefined
ghci> :type double
double :: Double -> Doc
ghci> double 3.14
*** Exception: Prelude.undefined

Even though we can't yet run our stubbed code, the compiler's type checker will ensure that our program is sensibly typed.

When we must pretty print a string value, JSON has moderately involved escaping rules that we must follow. At the highest level, a string is just a series of characters wrapped in quotes.

-- file: ch05/PrettyJSON.hs
string :: String -> Doc
string = enclose '"' '"' . hcat . map oneChar

The enclose function simply wraps a Doc value with an opening and closing character.

-- file: ch05/PrettyJSON.hs
enclose :: Char -> Char -> Doc -> Doc
enclose left right x = char left <> x <> char right

We provide a (<>) function in our pretty printing library. It appends two Doc values, so it's the Doc equivalent of (++).
-- file: ch05/PrettyStub.hs
(<>) :: Doc -> Doc -> Doc
a <> b = undefined

char :: Char -> Doc
char c = undefined

Our pretty printing library also provides hcat, which concatenates multiple Doc values into one: it's the analogue of concat for lists.

-- file: ch05/PrettyStub.hs
hcat :: [Doc] -> Doc
hcat xs = undefined

Our string function applies the oneChar function to every character in a string, concatenates the lot, and encloses the result in quotes. The oneChar function escapes or renders an individual character.

-- file: ch05/PrettyJSON.hs
oneChar :: Char -> Doc
oneChar c = case lookup c simpleEscapes of
              Just r -> text r
              Nothing | mustEscape c -> hexEscape c
                      | otherwise    -> char c
    where mustEscape c = c < ' ' || c == '\x7f' || c > '\xff'

simpleEscapes :: [(Char, String)]
simpleEscapes = zipWith ch "\b\n\f\r\t\\\"/" "bnfrt\\\"/"
    where ch a b = (a, ['\\',b])

The simpleEscapes value is a list of pairs. We call a list of pairs an association list, or alist for short. Each element of our alist associates a character with its escaped representation.

ghci> take 4 simpleEscapes
[('\b',"\\b"),('\n',"\\n"),('\f',"\\f"),('\r',"\\r")]

Our case expression attempts to see if our character has a match in this alist. If we find the match, we emit it, otherwise we might need to escape the character in a more complicated way. If so, we perform this escaping. Only if neither kind of escaping is required do we emit the plain character. To be conservative, the only unescaped characters we emit are printable ASCII characters.

The more complicated escaping involves turning a character into the string "\u" followed by a four-character sequence of hexadecimal digits representing the numeric value of the Unicode character.
-- file: ch05/PrettyJSON.hs
smallHex :: Int -> Doc
smallHex x  = text "\\u"
           <> text (replicate (4 - length h) '0')
           <> text h
    where h = showHex x ""

The showHex function comes from the Numeric library (you will need to import this at the beginning of Prettify.hs), and returns a hexadecimal representation of a number.

ghci> showHex 114111 ""
"1bdbf"

The replicate function is provided by the Prelude, and builds a fixed-length repeating list of its argument.

ghci> replicate 5 "foo"
["foo","foo","foo","foo","foo"]

There's a wrinkle: the four-digit encoding that smallHex provides can only represent Unicode characters up to 0xffff. Valid Unicode characters can range up to 0x10ffff. To properly represent a character above 0xffff in a JSON string, we follow some complicated rules to split it into two. This gives us an opportunity to perform some bit-level manipulation of Haskell numbers.

-- file: ch05/PrettyJSON.hs
astral :: Int -> Doc
astral n = smallHex (a + 0xd800) <> smallHex (b + 0xdc00)
    where a = (n `shiftR` 10) .&. 0x3ff
          b = n .&. 0x3ff

The shiftR function comes from the Data.Bits module, and shifts a number to the right. The (.&.) function, also from Data.Bits, performs a bit-level and of two values.

ghci> 0x10000 `shiftR` 4 :: Int
4096
ghci> 7 .&. 2 :: Int
2

Now that we've written smallHex and astral, we can provide a definition for hexEscape.

-- file: ch05/PrettyJSON.hs
hexEscape :: Char -> Doc
hexEscape c | d < 0x10000 = smallHex d
            | otherwise   = astral (d - 0x10000)
  where d = ord c

Compared to strings, pretty printing arrays and objects is a snap. We already know that the two are visually similar: each starts with an opening character, followed by a series of values separated with commas, followed by a closing character. Let's write a function that captures the common structure of arrays and objects.

-- file: ch05/PrettyJSON.hs
series :: Char -> Char -> (a -> Doc) -> [a] -> Doc
series open close item = enclose open close
                       . fsep . punctuate (char ',') . map item

We'll start by interpreting this function's type. It takes an opening and closing character, then a function that knows how to pretty print a value of some unknown type a, followed by a list of values of type a, and it returns a value of type Doc.

Notice that although our type signature mentions four parameters, we have only listed three in the definition of the function. We are simply following the same rule that lets us simplify a definition like myLength xs = length xs to myLength = length.

We have already written enclose, which wraps a Doc value in opening and closing characters. The fsep function will live in our Prettify module. It combines a list of Doc values into one, possibly wrapping lines if the output will not fit on a single line.

-- file: ch05/PrettyStub.hs
fsep :: [Doc] -> Doc
fsep xs = undefined

By now, you should be able to define your own stubs in Prettify.hs, by following the examples we have supplied. We will not explicitly define any more stubs.

The punctuate function will also live in our Prettify module, and we can define it in terms of functions for which we've already written stubs.

-- file: ch05/Prettify.hs
punctuate :: Doc -> [Doc] -> [Doc]
punctuate p []     = []
punctuate p [d]    = [d]
punctuate p (d:ds) = (d <> p) : punctuate p ds

With this definition of series, pretty printing an array is entirely straightforward. We add this equation to the end of the block we've already written for our renderJValue function.

-- file: ch05/PrettyJSON.hs
renderJValue (JArray ary) = series '[' ']' renderJValue ary

To pretty print an object, we need to do only a little more work: for each element, we have both a name and a value to deal with.

-- file: ch05/PrettyJSON.hs
renderJValue (JObject obj) = series '{' '}' field obj
    where field (name,val) = string name
                          <> text ": "
                          <> renderJValue val

Now that we have written the bulk of our PrettyJSON.hs file, we must go back to the top and add a module declaration.
-- file: ch05/PrettyJSON.hs
module PrettyJSON
    (
      renderJValue
    ) where

import Numeric (showHex)
import Data.Char (ord)
import Data.Bits (shiftR, (.&.))

import SimpleJSON (JValue(..))
import Prettify (Doc, (<>), char, double, fsep, hcat, punctuate, text,
                 compact, pretty)

We export just one name from this module: renderJValue, our JSON rendering function. The other definitions in the module exist purely to support renderJValue, so there's no reason to make them visible to other modules.

Regarding imports, the Numeric and Data.Bits modules are distributed with GHC. We've already written the SimpleJSON module, and filled our Prettify module with skeletal definitions. Notice that there's no difference in the way we import standard modules from those we've written ourselves.

With each import directive, we explicitly list each of the names we want to bring into our module's namespace. This is not required: if we omit the list of names, all of the names exported from a module will be available to us. However, it's generally a good idea to write an explicit import list.

An explicit list makes it clear which names we're importing from where. This will make it easier for a reader to look up documentation if they encounter an unfamiliar function.

Occasionally, a library maintainer will remove or rename a function. If a function disappears from a third party module that we use, any resulting compilation error is likely to happen long after we've written the module. The explicit list of imported names can act as a reminder to ourselves of where we had been importing the missing name from, which will help us to pinpoint the problem more quickly.

It can also occur that someone will add a name to a module that is identical to a name already in our own code. If we don't use an explicit import list, we'll end up with the same name in our module twice. If we use that name, GHC will report an error due to the ambiguity.
An explicit list lets us avoid the possibility of accidentally importing an unexpected new name.

This idea of using explicit imports is a guideline that usually makes sense, not a hard-and-fast rule. Occasionally, we'll need so many names from a module that listing each one becomes messy. In other cases, a module might be so widely used that a moderately experienced Haskell programmer will probably know which names come from that module.

In our Prettify module, we represent our Doc type as an algebraic data type.

-- file: ch05/Prettify.hs
data Doc = Empty
         | Char Char
         | Text String
         | Line
         | Concat Doc Doc
         | Union Doc Doc
           deriving (Show,Eq)

Observe that the Doc type is actually a tree. The Concat and Union constructors create an internal node from two other Doc values, while the Empty and other simple constructors build leaves.

In the header of our module, we will export the name of the type, but not any of its constructors: this will prevent modules that use the Doc type from creating and pattern matching against Doc values. Instead, to create a Doc, a user of the Prettify module will call a function that we provide. Here are the simple construction functions. As we add real definitions, we must replace any stubbed versions already in the Prettify.hs source file.

-- file: ch05/Prettify.hs
empty :: Doc
empty = Empty

char :: Char -> Doc
char c = Char c

text :: String -> Doc
text "" = Empty
text s  = Text s

double :: Double -> Doc
double d = text (show d)

The Line constructor represents a line break. The line function creates hard line breaks, which always appear in the pretty printer's output. Sometimes we'll want a soft line break, which is only used if a line is too wide to fit in a window or page. We'll introduce a softline function shortly.

-- file: ch05/Prettify.hs
line :: Doc
line = Line

Almost as simple as the basic constructors is the (<>) function, which concatenates two Doc values.
-- file: ch05/Prettify.hs
(<>) :: Doc -> Doc -> Doc
Empty <> y = y
x <> Empty = x
x <> y = x `Concat` y

We pattern match against Empty so that concatenating a Doc value with Empty on the left or right will have no effect. This keeps us from bloating the tree with useless values.

ghci> text "foo" <> text "bar"
Concat (Text "foo") (Text "bar")
ghci> text "foo" <> empty
Text "foo"
ghci> empty <> text "bar"
Text "bar"

Our hcat and fsep functions concatenate a list of Doc values into one. In the section called "Exercises", we mentioned that we could define concatenation for lists using foldr.

-- file: ch05/Concat.hs
concat :: [[a]] -> [a]
concat = foldr (++) []

Since (<>) is analogous to (++), and empty to [], we can see how we might write hcat and fsep as folds, too.

-- file: ch05/Prettify.hs
hcat :: [Doc] -> Doc
hcat = fold (<>)

fold :: (Doc -> Doc -> Doc) -> [Doc] -> Doc
fold f = foldr f empty

The definition of fsep depends on several other functions.

-- file: ch05/Prettify.hs
fsep :: [Doc] -> Doc
fsep = fold (</>)

(</>) :: Doc -> Doc -> Doc
x </> y = x <> softline <> y

softline :: Doc
softline = group line

These take a little explaining. The softline function should insert a newline if the current line has become too wide, or a space otherwise. How can we do this if our Doc type doesn't contain any information about rendering? Our answer is that every time we encounter a soft newline, we maintain two alternative representations of the document, using the Union constructor.

-- file: ch05/Prettify.hs
group :: Doc -> Doc
group x = flatten x `Union` x

Our flatten function replaces a Line with a space, turning two lines into one longer line.
-- file: ch05/Prettify.hs
flatten :: Doc -> Doc
flatten (x `Concat` y) = flatten x `Concat` flatten y
flatten Line           = Char ' '
flatten (x `Union` _)  = flatten x
flatten other          = other

Notice that we always call flatten on the left element of a Union: the left of each Union is always the same width (in characters) as, or wider than, the right. We'll be making use of this property in our rendering functions below.

We frequently need to use a representation for a piece of data that contains as few characters as possible. For example, if we're sending JSON data over a network connection, there's no sense in laying it out nicely: the software on the far end won't care whether the data is pretty or not, and the added white space needed to make the layout look good would add a lot of overhead. For these cases, and because it's a simple piece of code to start with, we provide a bare-bones compact rendering function.

-- file: ch05/Prettify.hs
compact :: Doc -> String
compact x = transform [x]
    where transform [] = ""
          transform (d:ds) =
              case d of
                Empty        -> transform ds
                Char c       -> c : transform ds
                Text s       -> s ++ transform ds
                Line         -> '\n' : transform ds
                a `Concat` b -> transform (a:b:ds)
                _ `Union` b  -> transform (b:ds)

The compact function wraps its argument in a list, and applies the transform helper function to it. The transform function treats its argument as a stack of items to process, where the first element of the list is the top of the stack.

The transform function's (d:ds) pattern breaks the stack into its head, d, and the remainder, ds. In our case expression, the first several branches recurse on ds, consuming one item from the stack for each recursive application. The last two branches add items in front of ds: the Concat branch adds both elements to the stack, while the Union branch ignores its left element, on which we called flatten, and adds its right element to the stack.
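Before moving on, it may help to evaluate group and flatten by hand. This small sketch assumes the Doc constructors and the line, group, and flatten definitions given above in this chapter:

```haskell
-- A soft newline carries two layouts inside a Union: the flattened
-- alternative (a space) on the left, the original Line on the right.
softlineDemo :: Doc
softlineDemo = group line
-- group line == Char ' ' `Union` Line

flattenedDemo :: Doc
flattenedDemo = flatten (group line)
-- flatten always keeps the left branch, so this is Char ' '
```

This is exactly the property the renderers rely on: compact always takes the right branch of a Union (a real newline), while a width-aware renderer may instead choose the wider, flattened left branch.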
We have now fleshed out enough of our original skeletal definitions that we can try out our compact function in ghci.

ghci> let value = renderJValue (JObject [("f", JNumber 1), ("q", JBool True)])
ghci> :type value
value :: Doc
ghci> putStrLn (compact value)
{"f": 1.0,
"q": true
}

To better understand how the code works, let's look at a simpler example in more detail.

ghci> char 'f' <> text "oo"
Concat (Char 'f') (Text "oo")
ghci> compact (char 'f' <> text "oo")
"foo"

When we apply compact, it turns its argument into a list and applies transform.

The transform function receives a one-item list, which matches the (d:ds) pattern. Thus d is the value Concat (Char 'f') (Text "oo"), and ds is the empty list, []. Since d's constructor is Concat, the Concat pattern matches in the case expression. On the right hand side, we add Char 'f' and Text "oo" to the stack, and apply transform recursively.

The transform function receives a two-item list, again matching the (d:ds) pattern. The variable d is bound to Char 'f', and ds to [Text "oo"]. The case expression matches in the Char branch. On the right hand side, we use (:) to construct a list whose head is 'f', and whose body is the result of a recursive application of transform.

The recursive invocation receives a one-item list. The variable d is bound to Text "oo", and ds to []. The case expression matches in the Text branch. On the right hand side, we use (++) to concatenate "oo" with the result of a recursive application of transform. The result is "oo" ++ "".

The overall result is therefore 'f' : "oo" ++ "".

While our compact function is useful for machine-to-machine communication, its result is not always easy for a human to follow: there's very little information on each line. To generate more readable output, we'll write another function, pretty. Compared to compact, pretty takes one extra argument: the maximum width of a line, in columns. (We're assuming that our typeface is of fixed width.)
-- file: ch05/Prettify.hs
pretty :: Int -> Doc -> String

To be more precise, this Int parameter controls the behaviour of pretty when it encounters a softline. Only at a softline does pretty have the option of either continuing the current line or beginning a new line. Elsewhere, we must strictly follow the directives set out by the person using our pretty printing functions. Here's the core of our implementation -- Our best helper function takes two arguments: the number of columns emitted so far on the current line, and the list of remaining Doc values to process. In the simple cases, best updates the col variable in straightforward ways as it consumes the input. Even the Concat case is obvious: we push the two concatenated components onto our stack/list, and don't touch col. The interesting case involves the Union constructor. Recall that we applied flatten to the left element, and did nothing to the right. Also, remember that flatten replaces newlines with spaces. Therefore, our job is to see which (if either) of the two layouts, the flattened one or the original, will fit into our width restriction. To do this, we write a small helper that determines whether a single line of a rendered Doc value will fit into a given number of columns.

-- file: ch05/Prettify.hs
fits :: Int -> String -> Bool
w `fits` _ | w < 0 = False
w `fits` ""        = True
w `fits` ('\n':_)  = True
w `fits` (c:cs)    = (w - 1) `fits` cs

In order to understand how this code works, let's first consider a simple Doc value.

ghci> empty </> char 'a'
Concat (Union (Char ' ') Line) (Char 'a')

We'll apply pretty 2 on this value. When we first apply best, the value of col is zero. It matches the Concat case, pushes the values Union (Char ' ') Line and Char 'a' onto the stack, and applies itself recursively. In the recursive application, it matches on Union (Char ' ') Line. At this point, we're going to ignore Haskell's usual order of evaluation.
This keeps our explanation of what's going on simple, without changing the end result. We now have two subexpressions, best 0 [Char ' ', Char 'a'] and best 0 [Line, Char 'a']. The first evaluates to " a", and the second to "\na". We then substitute these into the outer expression to give nicest 0 " a" "\na". To figure out what the result of nicest is here, we do a little substitution. The values of width and col are 2 and 0, respectively, so least is 0, and width - least is 2. We quickly evaluate 2 `fits` " a" in ghci.

ghci> 2 `fits` " a"
True

Since this evaluates to True, the result of nicest here is " a". If we apply our pretty function to the same JSON data as earlier, we can see that it produces different output depending on the width that we give it.

ghci> putStrLn (pretty 10 value)
{"f": 1.0,
"q": true
}
ghci> putStrLn (pretty 20 value)
{"f": 1.0, "q": true
}
ghci> putStrLn (pretty 30 value)
{"f": 1.0, "q": true }

The Haskell community has built a standard set of tools, named Cabal, that help with building, installing, and distributing software. Cabal organises software as a package. A package contains one library, and possibly several executable programs. To do anything with a package, Cabal needs a description of it. This is contained in a text file whose name ends with the suffix .cabal. This file belongs in the top-level directory of your project. It has a simple format, which we'll describe below. A Cabal package must have a name. Usually, the name of the package matches the name of the .cabal file. We'll call our package mypretty, so our file is mypretty.cabal. Often, the directory that contains a .cabal file will have the same name as the package, e.g. mypretty. A package description begins with a series of global properties, which apply to every library and executable in the package.

Name:    mypretty
Version: 0.1

-- This is a comment. It stretches to the end of the line.

Package names must be unique.
If you create and install a package that has the same name as a package already present on your system, GHC will become very confused. The global properties include a substantial amount of information that is intended for human readers, not Cabal itself.

Synopsis:    My pretty printing library, with JSON support
Description:
    A simple pretty printing library that illustrates how to
    develop a Haskell library.
Author:      Real World Haskell
Maintainer:  nobody@realworldhaskell.org

As the Description field indicates, a field can span multiple lines, provided they're indented. Also included in the global properties is license information. Most Haskell packages are licensed under the BSD license, which Cabal calls BSD3[11]. (Obviously, you're free to choose whatever license you think is appropriate.) The optional License-File field lets us specify the name of a file that contains the exact text of our package's licensing terms. The features supported by successive versions of Cabal evolve over time, so it's wise to indicate what versions of Cabal we expect to be compatible with. The features we are describing are supported by versions 1.2 and higher of Cabal.

Cabal-Version: >= 1.2

To describe an individual library within a package, we write a library section. The use of indentation here is significant: the contents of a section must be indented.

library
  Exposed-Modules: Prettify
                   PrettyJSON
                   SimpleJSON
  Build-Depends:   base >= 2.0

The Exposed-Modules field contains a list of modules that should be available to users of this package. An optional field, Other-Modules, contains a list of internal modules. These are required for this library to function, but will not be visible to users. The Build-Depends field contains a comma-separated list of packages that our library requires to build. For each package, we can optionally specify the range of versions with which this library is known to work.
The base package contains many of the core Haskell modules, such as the Prelude, so it's effectively always required. GHC includes a simple package manager that tracks which packages are installed, and what the versions of those packages are. A command line tool named ghc-pkg lets us work with its package databases. We say databases because GHC distinguishes between system-wide packages, which are available to every user, and per-user packages, which are only visible to the current user. The per-user database lets us avoid the need for administrative privileges to install packages. The ghc-pkg command provides subcommands to address different tasks. Most of the time, we'll only need two of them. The ghc-pkg list command lets us see what packages are installed. When we want to uninstall a package, ghc-pkg unregister tells GHC that we won't be using a particular package any longer. (We will have to manually delete the installed files ourselves.) In addition to a .cabal file, a package must contain a setup file. This allows Cabal's build process to be heavily customised, if a package needs it. The simplest setup file looks like this.

-- file: ch05/Setup.hs
#!/usr/bin/env runhaskell
import Distribution.Simple
main = defaultMain

We save this file under the name Setup.hs. Once we have the .cabal and Setup.hs files written, we have three steps left. To instruct Cabal how to build and where to install a package, we run a simple command.

$ runghc Setup configure

This ensures that the packages we need are available, and stores settings to be used later by other Cabal commands. If we do not provide any arguments to configure, Cabal will install our package in the system-wide package database. To install it into our home directory and our personal package database, we must provide a little more information.

$ runghc Setup configure --prefix=$HOME --user

Following the configure step, we build the package.

$ runghc Setup build

If this succeeds, we can install the package.
We don't need to indicate where to install to: Cabal will use the settings we provided in the configure step. It will install to our own directory and update GHC's per-user package database.

$ runghc Setup install

GHC already bundles a pretty printing library, Text.PrettyPrint.HughesPJ. It provides the same basic API as our example, but a much richer and more useful set of pretty printing functions. We recommend using it, rather than writing your own. The design of the HughesPJ pretty printer was introduced by John Hughes in [Hughes95]. The library was subsequently improved by Simon Peyton Jones, hence the name. Hughes's paper is long, but well worth reading for his discussion of how to design a library in Haskell. In this chapter, our pretty printing library is based on a simpler system described by Philip Wadler in [Wadler98]. His library was extended by Daan Leijen; this version is available for download from Hackage as wl-pprint. If you use the cabal command line tool, you can download, build, and install it in one step with cabal install wl-pprint.
http://book.realworldhaskell.org/read/writing-a-library-working-with-json-data.html
How Bitmovin is Doing Multi-Stage Canary Deployments with Kubernetes in the Cloud and On-Prem
April 21 2017

RBAC Support in Kubernetes
April 06 2017

Configuring Private DNS Zones and Upstream Nameservers in Kubernetes
April 04 2017
Editor's note: this post is part of a series of in-depth articles on what's new in Kubernetes 1.6 Many...

apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  stubDomains: |
    {"acme.local": ["1.2.3.4"]}
  upstreamNameservers: |
    ["8.8.8.8", "8.8.4.4"]

apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  stubDomains: |
    {"consul.local": ["10.150.0.1"]}

apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  upstreamNameservers: |
    ["172.16.0.1"]

Get involved

Advanced Scheduling in Kubernetes
March 31 2017

Scalability updates in Kubernetes 1.6: 5,000 node and 150,000 pod clusters
March 30 2017

Five Days of Kubernetes 1.6
March 29 2017
With the help of our growing community of 1,110 plus contributors, we pushed around 5,000 commits to deliver Kubernetes 1.6, bringing focus on multi-user, multi-workloads at scale. While many improvements have been contributed, we selected few features to highlight in a series of in-depth posts listed below. Follow along and read what's new:

Connect
- Post questions (or answer questions) on Stack Overflow
- Join the community portal for advocates on K8sPort
- Get involved with the Kubernetes project on GitHub
- Connect with the community on Slack
- Download Kubernetes

Dynamic Provisioning and Storage Classes in Kubernetes
March 29 2017
https://kubernetes.io/blog/page8/
I imported a planet pbf into my local PostgreSQL database and need to do some private update (creation, modification and deletion) because of my business needs. However, I want to keep the local database in sync with osm official database periodically. It can be weekly, even monthly. Is there any best way to do that?

asked 04 Jul '18, 07:39 by Hanson; edited 04 Jul '18, 07:44

Please clarify how you want to proceed in these cases:

@Frederik Ramm ♦ Thank you for your reminder. It will include these cases:
https://help.openstreetmap.org/questions/64519/how-to-keep-a-local-modified-database-in-sync-with-osm-database
In the previous lesson, we learned the mathematical definition of a gradient. We saw that the gradient of a function was a combination of our partial derivatives with respect to each variable of that function. We saw the direction of gradient descent was simply to move in the negative direction of the gradient. For example, if the direction of ascent of a function is a move up and to the right, the descent is down and to the left. In this lesson we will apply gradient descent to our cost function to see how we can move towards a best fit regression line by changing variables of $m$ and $b$. Think about why gradient descent applies so well to a cost function. Initially, we said that the cost of our function, meaning the difference between what our regression line predicted and the dataset, changed as we altered the y-intercept or the slope of the function. Remember that mathematically, when we say cost function, we use the residual sum of squares where $$ RSS = \sum_{i=1}^n(actual - expected)^2 = \sum_{i=1}^n(y_i - \hat{y})^2 = \sum_{i=1}^n(y_i - (mx_i + b))^2$$ for all $x$ and $y$ values of our dataset. So in the graph directly below, $x_i$ and $y_i$ would be our points representing a movie's budget and revenue. Meanwhile, $mx_i + b $ is our predicted $y$ value for a given $x$ value, of a budget. And RSS takes the difference between $mx_i + b$, the $\hat{y}$ value our regression line predicts, and our actual $y$, represented by the length of the red lines. Then we square this difference, and sum up these squares for each piece of data in our dataset. That is the residual sum of squares. And when we just plotted how RSS changes as we change one variable of our regression line, $m$ or $b$, we note how this looks like a curve, and call it our cost curve.
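The RSS formula above translates directly into a few lines of Python; the sample slope, intercept, and data points below are illustrative, not taken from the lesson's movie dataset:

```python
def rss(m, b, xs, ys):
    """Residual sum of squares for the regression line y = m*x + b."""
    return sum((y - (m * x + b)) ** 2 for x, y in zip(xs, ys))

# A line passing exactly through every point has zero cost:
print(rss(2, 1, [0, 1, 2], [1, 3, 5]))   # 0
# Any other line has a positive cost:
print(rss(0, 0, [1], [2]))               # 4
```

Each term in the sum is the squared length of one of the red lines: the gap between the actual y value and the y value the line predicts.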
import plotly
from plotly.offline import init_notebook_mode, iplot
from graph import m_b_trace, trace_values, plot

init_notebook_mode(connected=True)

b_values = list(range(70, 150, 10))
rss = [10852, 9690, 9128, 9166, 9804, 11042, 12880, 15318]
cost_curve_trace = trace_values(b_values, rss, mode="lines", name='RSS with changes to y-intercept')
plot([cost_curve_trace])

In two dimensions, we decrease our RSS simply by moving forwards or backwards along the cost curve which is the equivalent of changing our variable, in this case y-intercept. So the cost curve above indicates that changing the regression line from having a y-intercept of 70 to 80 decreases our cost, the RSS. Allowing us to change both variables, $m$ and $b$ means calculating how RSS varies with both $m$ and $b$. Because the RSS is a function of how we change our values of $m$ and $b$, we can express this relationship mathematically by saying the cost function, $J$ is the following: $$J(m, b) = \sum_{i=1}^{n}(y_i - (mx_i + b))^2$$ In the function above, $J$ is a function of $m$ and $b$. $J$ just represents the residual sum of squares, which varies as the $m$ and $b$ variables of our regression line are changed. Just like our other multivariable functions we have seen thus far, we can display it in three dimensions, and it looks like the following. The three-dimensional graph above shows how the cost associated with our regression line changes as the slope and y-intercept values are changed. Let's explore using gradient descent to determine how to change our regression line when we can alter both $m$ and $b$ variables. When applied to a general multivariable function $f(x,y)$, gradient descent answered how much to move the $x$ variable and the $y$ variable to produce the greatest decrease in output. Now that we are applying gradient descent to our cost curve $J(m, b)$, the technique should answer how much to move the $m$ variable and the $b$ variable to produce the greatest decrease in cost, or RSS.
In other words, when altering our regression line, we want to know how much of this change should be derived from a move in the slope versus how much should be derived from a change in the y-intercept. As we know, the gradient of a function is simply the partial derivatives with respect to each of the variables, so: $$ \nabla J(m, b) = \frac{\delta J}{\delta m}, \frac{\delta J}{\delta b}$$ In calculating the partial derivatives of our function $J(m, b) = \sum_{i=1}^{n}(y_i - (mx_i + b))^2$, we won't change the result if we ignore the summation until the very end. We'll do that to make our calculations easier. Ok, so let's take our partial derivatives of the following: $$\frac{\delta J}{\delta m}J(m, b) = \frac{\delta J}{\delta m}(y - (mx + b))^2$$ $$\frac{\delta J}{\delta b}J(m, b) = \frac{\delta J}{\delta b}(y - (mx + b))^2$$ Let's start with taking the partial derivative with respect to $m$. $$\frac{\delta J}{\delta m}J(m, b) = \frac{\delta J}{\delta m}(y - (mx + b))^2$$ Now this is a tricky function to take the derivative of. So we can use functional composition followed by the chain rule to make it easier. Using functional composition, we can rewrite our function $J$ as two functions: $$g(m,b) = y - (mx + b)$$ $$J(g(m,b)) = (g(m,b))^2$$ Now using the chain rule to find the partial derivative with respect to a change in the slope, gives us: $$\frac{dJ}{dm}J(g) = \frac{dJ}{dg}J(g(m, b))*\frac{dg}{dm}g(m,b)$$ Our next step is to solve these derivatives individually: $$\frac{dJ}{dg}J(g(m, b)) = \frac{dJ}{dg}g(m,b)^2 = 2*g(m,b)$$ $$\frac{dg}{dm}g(m,b) = \frac{dg}{dm} (y - (mx +b)) = \frac{dg}{dm}y - \frac{dg}{dm}mx - \frac{dg}{dm}b = -x $$ Each of the terms are treated as constants, except for the middle term. 
Now plugging these back into our chain rule we have: $\frac{dJ}{dg}J(g(m,b))*\frac{dg}{dm}g(m,b) = (2*g(m,b))*-x = 2*(y - (mx + b))*-x $ So $$\frac{\delta J}{\delta m}J(m, b) = 2*(y - (mx + b))*-x = -2x(y - (mx + b )) $$ Ok, now let's calculate the partial derivative with respect to a change in the y-intercept. We express this mathematically with the following: $$\frac{\delta J}{\delta b}J(m, b) = \frac{dJ}{db}(y - (mx + b))^2$$ Then once again, we use functional composition followed by the chain rule. So we view our cost function as the same two functions $g(m,b)$ and $J(g(m,b))$. $$g(m,b) = y - (mx + b)$$ $$J(g(m,b)) = (g(m,b))^2$$ So applying the chain rule, to this same function composition, we get: $$\frac{dJ}{db}J(g) = \frac{dJ}{dg}J(g)*\frac{dg}{db}g(m,b)$$ Now, our next step is to calculate these partial derivatives individually. From our earlier calculation of the partial derivative, we know that $\frac{dJ}{dg}J(g(m,b)) = \frac{dJ}{dg}g(m,b)^2 = 2*g(m,b)$. The only thing left to calculate is $\frac{dg}{db}g(m,b)$. $\frac{dg}{db}g(m,b) = \frac{dg}{db}(y - (mx + b) ) = -1$ Now we plug our terms into our chain rule and get: $$ \frac{dJ}{dg}J(g)*\frac{dg}{db}g(m,b) = 2*g(m,b)*-1 = -2*(y - (mx + b)) $$ Ok, so now we have our two partial derivatives for $\nabla J(m, b)$: $$ \frac{dJ}{dm}J(m,b) = -2*x(y - (mx + b )) $$ $$ \frac{dJ}{db}J(m,b) = -2*(y - (mx + b)) $$ And as $mx + b$ is just our regression line, we can simplify these formulas to be: $$ \frac{dJ}{dm}J(m,b) = -2*x(y - \hat{y}) = -2x*\epsilon$$ $$ \frac{dJ}{db}J(m,b) = -2*(y - \hat{y}) = -2\epsilon$$ Remember, error = actual - expected, so we can replace $y - \hat{y}$ with $\epsilon$, our error. As we mentioned above, our last step is adding back the summations. Since $-2$ is a constant, we can keep this outside of the summation. Our value for $x$ changes depending upon what x value we are at, so it must be included inside the summation for the first equation.
Below, we have: $$ \frac{dJ}{dm}J(m,b) = -2*\sum_{i=1}^n x_i(y_i - \hat{y}_i) = -2*\sum_{i=1}^n x_i*\epsilon_i$$ $$ \frac{dJ}{db}J(m,b) = -2*\sum_{i=1}^n(y_i - \hat{y}_i) = -2*\sum_{i=1}^n \epsilon_i$$ So that is what we'll do to find the "best fit regression line." We'll start with an initial regression line with values of $m$ and $b$. Then we'll go through our dataset, and we will use the above formulas with each point to tell us how to update our regression line such that it continues to minimize our cost function. In the context of gradient descent, we use these partial derivatives to take a step size. Remember that our step should be in the opposite direction of our partial derivatives as we are descending towards the minimum. So to take a step towards gradient descent we use the general formula of: current_m = old_m $ - \frac{dJ}{dm}J(m,b)$ current_b = old_b $ - \frac{dJ}{db}J(m,b) $ or in the code that we just calculated: current_m = old_m $ - (-2*\sum_{i=1}^n x_i*\epsilon_i )$ current_b = old_b $ - ( -2*\sum_{i=1}^n \epsilon_i )$ In the next lesson, we'll work through translating this technique, with use of our $\nabla J(m, b)$, into code to descend along our cost curve and find the "best fit" regression line.
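As a preview of that translation, here is one way the two update formulas above could look in Python (the learning-rate value and the sample data are illustrative choices, not prescribed by the lesson):

```python
def gradient(m, b, xs, ys):
    """Partial derivatives of RSS with respect to m and b."""
    errors = [y - (m * x + b) for x, y in zip(xs, ys)]
    dm = -2 * sum(x * e for x, e in zip(xs, errors))
    db = -2 * sum(errors)
    return dm, db

def step(m, b, xs, ys, learning_rate=0.0001):
    """Move m and b in the direction opposite the gradient."""
    dm, db = gradient(m, b, xs, ys)
    return m - learning_rate * dm, b - learning_rate * db

# At a perfect fit the gradient vanishes, so a step changes nothing:
print(gradient(2, 1, [0, 1, 2], [1, 3, 5]))   # (0, 0)
```

Subtracting the partial derivatives (rather than adding them) is what makes this a descent: each step moves against the gradient, toward lower cost.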
https://learn.co/lessons/gradient-to-cost-function
CC-MAIN-2019-09
refinedweb
1,759
52.02
.” Not. Another: Problem thirteen from Project Euler is one of those problems that's so simple, I don't understand why it's in the double digits section. The problem reads: “Work out the first ten digits of the sum of the following one-hundred 50-digit numbers.” It then proceeds to list 100 long numbers. I'm not going to paste them here because they are in the code solutions below and I don't want to clog up the “tubez” with more redundant information than I'm about to. Enough of my jibber-jabber. Here is my Haskell solution first (trying to change things up here): module Main where main :: IO()main = do print . take 10 . show $ sum big_number where] followed by my Python solution: #!/usr/bin/python"""code solution for project euler's problem #13 in python."""from __future__ import print_function def print_10(number): print(str(number)[0:10]) if __name__ == "__main__":] print_10(sum(big_number)) and to continue adding in the spice, I have included a solution in Scala: import BigInt._ object problem_13 { def main (args : Array[String]){ val big_number = List(") map {BigInt(_)} val sums = big_number sum val su = sums toString val su10 = su take 10 println(su10) }} The solution, in all three languages, is pretty simple. The recipe essentially says, “Put all numbers into a list. Get the sum of that list, turn that number into a string, and get the first 10 characters of that string.” Times: Haskell (compiled) : real 0m0.004s Haskell (runghc) : real 0m0.314s Python : real 0m0.059s Scala (compiled) : real 0m0.757s For the most part it's pretty standard in these tests to see performance times such that Haskell (compiled) < Python < Haskell (runghc). Java and Perl usually fall somewhere between the Haskell (compiled) and Python, in that order. To see Scala be 2x slower than Haskell (runghc) was a shocker. The only thing that makes sense to me for the slowdown is having to use the BigInt library. 
That is probably the biggest thing I took away from these time tests - if I want to do REALLY large number crunching and performance DOES matter, JVM-based languages might not be the best option. A few thoughts on Scala: If I haven't stated it already in this blog, I should now give the disclaimer that I'm not a Java fan. I know it still has its loyal followers, but I'm not one of them. Moving on. This was my first time working with Scala, and I'd like to finally welcome Java to the 21st century. While doing some research on the Scala language itself I read that “the industry” was moving to replace Java with Scala. I welcome that change. Does that mean I “like” Scala? The honest answer is, to butcher the quote the appliances from the Flintstones, “Eh, it's a language.” Scala is definitely an improvement over Java – not really that hard to do in my opinion – but, the language still feels unpolished. One quick way to kill the interpreter in Scala is to type “Int” then hit the enter key. Instead of error-ing out, the interpreter does a great job of interpreting a crash test car hitting a cement wall (I had to restart the whole thing.) When I tried the same “technique” in the Python interpreter, I got as a response and for Haskell's interpreter I received “Not in scope: data constructor `Int'”. I also found Scala's function composition to be a little lacking when compared to Haskell. I wasn't able to cleanly change the BigInt data type to String, and then only print out ten characters without requiring three separate val's. Yes, I could have used one var instead, but that's beside the point. I will admit it could be my inexperience with the language showing, so if anyone knows a smoother way to do this in Scala please share it in the comments. All that being said, I do like the way Scala is trying to handle the reducing of Java's dot notation, and I think it's starting to make strides in the right direction in other areas. 
I'm open to working with Scala more, and look forward to seeing how it evolves over the next few years.
http://scrollingtext.org/node?page=2
Is there any way to kill a Thread in Python? Is it possible to terminate a running thread without setting/checking any flags/semaphores/etc.?

Answers

It is generally a bad pattern to kill a thread abruptly, in Python and in any language. A better approach is to have the thread check at regular intervals whether it is time for it to exit. For example:

import threading

class StoppableThread(threading.Thread):
    """Thread class with a stop() method. The thread itself has to
    check regularly for the stopped() condition."""

    def __init__(self):
        super(StoppableThread, self).__init__()
        # Named _stop_event because _stop shadows a Thread internal on Python 3.
        self._stop_event = threading.Event()

    def stop(self):
        self._stop_event.set()

    def stopped(self):
        return self._stop_event.is_set()
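To make the pattern concrete, here is a self-contained sketch of a worker built on that idea (the `Worker` subclass and its timings are illustrative, and the event attribute is named `_stop_event` to avoid clashing with `threading.Thread` internals on Python 3):

```python
import threading
import time

class StoppableThread(threading.Thread):
    """Thread with a stop() method; run() must poll stopped() regularly."""

    def __init__(self):
        super(StoppableThread, self).__init__()
        self._stop_event = threading.Event()

    def stop(self):
        self._stop_event.set()

    def stopped(self):
        return self._stop_event.is_set()

class Worker(StoppableThread):
    def __init__(self):
        super(Worker, self).__init__()
        self.iterations = 0

    def run(self):
        # Cooperative loop: exits promptly once stop() has been called.
        while not self.stopped():
            self.iterations += 1
            time.sleep(0.01)

worker = Worker()
worker.start()
time.sleep(0.05)
worker.stop()
worker.join()
print(worker.stopped())   # True
```

The key point is that termination is cooperative: stop() only sets the event, and the thread exits the next time its loop checks stopped(), so it can release locks and clean up on the way out.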
http://unixresources.net/faq/323972.shtml
Viktor Haag <address@hidden> writes: > I'm using Emacs 21.3.50.1 built on Mac OSX through the Fink > project with the tarball 'emacs-21.3.50-20040617.tar.gz'. > > I've seen new behaviour that I don't like; when I use 'find-file' > the completion now seems to be case-insensitive. I find this > annoying as it now sees ~/Library and ~/lib as similar requiring > more keystrokes when finding a file in the ~/lib subtree. > > There must be a way to retrieve the old case-sensitive behaviour > so that ~/Lib and ~/lib are matched as different names. > > Can some kind soul please let me know where the variables are > that control this behaviour? There isn't presently an obvious way because the builtin `read-file-name' was changed to bind `completion-ignore-case' to t before calling `completing-read' on DOS, NT, VMS and MACOSX systems. You could partially revert that change to Fread_file_name, eg: --- fileio.c 22 May 2004 23:17:17 +0100 1.505 +++ fileio.c 23 Jun 2004 17:19:07 +0100 @@ -6284,7 +6284,7 @@ } count = SPECPDL_INDEX (); -#if defined VMS || defined DOS_NT || defined MAC_OSX +#if defined VMS || defined DOS_NT specbind (intern ("completion-ignore-case"), Qt); #endif Maybe there should be a `read-file-name-completion-ignore-case' user variable ?
http://lists.gnu.org/archive/html/help-gnu-emacs/2004-06/msg00386.html
What's new in SharePoint 2013 search for developers Learn about the new features available for developers in Search in SharePoint 2013. Last modified: July 01, 2013 Applies to: SharePoint Foundation 2013 | SharePoint Server 2013 In this article Search client object model for access to Query object model functionality for online, on-premises, and mobile development SQL Syntax Support Removed Search REST service for remote execution of queries from client applications SharePoint Search Query web service is deprecated SharePoint Search Query object model enhancements Keyword query language enhancements Rich results framework for customizing search results UI Connector framework enhancements Additional resources search results. The Search CSOM includes a Microsoft .NET Framework managed client object model and JavaScript object model, and it is built on SharePoint 2013. First, client code accesses the SharePoint CSOM. Then, client code accesses the Search CSOM. To use the Search .NET Framework managed CSOM, you must get a ClientContext instance (located in the Microsoft.SharePoint.Client namespace in the Microsoft.SharePoint.Client.dll). Then, use the object model in the Microsoft.SharePoint.Client.Search.Query namespace in the Microsoft.Office.Server.Search.Client.dll. For more information about the SharePoint CSOM, see SharePoint 2010 Client Object Model. For more information about the ClientContext object, which is the entry point to the CSOM, see Client Context as Central Object. The Search CSOM returns the search results data from the server in JavaScript Object Notation (JSON). The JSON for the search results data contains a ResultTableCollection collection composed of ResultTable objects that represent different result sets. Custom search solutions in SharePoint Server 2013 do not support SQL syntax. Search in SharePoint 2013 supports FQL syntax and KQL syntax for custom search solutions. 
You cannot use SQL syntax in custom search solutions using any technologies, including the Query server object model, the client object model, and the Search REST service. Custom search solutions that use SQL syntax with the Query server object model and the Query web service that were created in earlier versions of SharePoint Server will not work when you upgrade them to SharePoint Server 2013. Queries submitted via these applications will return an error. For more information about using FQL syntax and KQL syntax, see Keyword Query Language (KQL) syntax reference and FAST Query Language (FQL) syntax reference. SharePoint Server 2013 includes a Representational State Transfer (REST) service that enables you to remotely execute queries against the SharePoint 2013 Search service from client applications by using any technology that supports REST web requests. The Search REST service exposes two endpoints, query and suggest, and will support both GET and POST operations. Results are returned in either XML or JSON format. The following is the access point for the service:. You can also specify the site in the URL, as follows:. The search service returns results from the entire site collection, so the same results are returned for both ways to access the service. You can also use the URL that references client.svc to access the service, as follows:. However, using _api is the preferred convention. Use the following access point to access the service metadata: For general information about the REST service in SharePoint 2013, see Use OData query operations in SharePoint REST requests. The Query web service (located in the path) is deprecated in SharePoint 2013. If you write new applications, avoid using this deprecated feature and instead use the new Query CSOM or Query REST service. If you modify existing applications, we strongly encourage you to remove any dependency on this feature. Query properties provide information about a search query. 
In SharePoint 2013 Search, a property bag was added to the query and result classes to enable user-defined query properties. You can access existing query properties via the property on one of the query classes, as follows:

KeywordQuery.EnableStemming

Or you can use the property bag, as follows:

KeywordQuery.Properties["EnableStemming"]

You can access user-defined properties only by using the property bag, as follows:

KeywordQuery.Properties["UserDefinedProperty"]

SharePoint 2013 Search includes query properties in the property bag, including new query properties such as:

- BypassResultTypes: Specifies whether the search result item type is returned for the query results. Specify true to return no result type; otherwise, false.
- EnableInterleaving: Specifies whether the result sets generated by executing query rule actions to add a result block are mixed with the result set for the original query. Specify true to mix the generated result set with the original result set; otherwise, false.
- EnableQueryRules: Specifies whether query rules are turned on for this query. Specify true to enable query rules for the query; otherwise, false.

You can specify any property in the property bag, including user-defined properties, as query rule conditions. You use query rules to customize the search experience for the kinds of queries that are important to your users. When a query meets conditions specified in a query rule, the rule specifies actions to improve the relevance of the associated search results.

SharePoint 2013 includes improvements to the Keyword query language, which are described in this section.

Improved NEAR operator

In SharePoint Server 2010, the NEAR operator implied a maximum token distance of 8 and preserved the ordering of the input tokens. In SharePoint 2013, the NEAR operator no longer preserves the ordering of tokens. In addition, the NEAR operator now receives an optional parameter that indicates maximum token distance. However, the default value is still 8.
If you must use the previous behavior, use ONEAR instead. The NEAR operator can be used in property restriction expressions, as shown in the following example:

"acquisition" NEAR "debt"

This query matches items where the tokens "acquisition" and "debt" appear within the same document, with a maximum token distance of 8 (which is the default value of n if no value is provided). The order of the tokens is not significant for the match. If you require a smaller token distance, you can specify it as follows:

"acquisition" NEAR(n=3) "debt"

This query matches items where the two tokens "acquisition" and "debt" appear within the same document, with a maximum token distance of 3. The order of the tokens is not significant for the match.

New ONEAR operator

The ONEAR operator provides ordered near functionality. It receives an optional parameter that indicates maximum token distance; the default value is 8. The ONEAR operator preserves the order of the input expressions. For unordered proximity, use NEAR. You can use the ONEAR operator in property restriction expressions, as shown in the following example:

"acquisition" ONEAR "debt"

This query matches items where the two tokens "acquisition" and "debt" appear within the same document, with a maximum token distance of 8 (which is the default value of n if no value is provided). The order of the tokens must match for an item to be returned. If you require a smaller token distance, you can specify it as follows:

"acquisition" ONEAR(n=3) "debt"

This query matches items where the two tokens "acquisition" and "debt" appear within the same document, with a maximum token distance of 3. The order of the tokens must match for an item to be returned.

New XRANK operator

In SharePoint Server 2010, the XRANK operator was available only with FAST Query Language (FQL). SharePoint 2013 introduces a new and powerful XRANK operator. The XRANK operator provides dynamic control of ranking.
This operator boosts the dynamic rank of items based on the occurrence of certain terms, without changing which items match the query.

SharePoint 2013 Search includes a new results framework that makes it easy to customize the appearance (look and feel) of the search results user interface (UI). Now, instead of writing a custom XSLT to change how search results are displayed, you can customize the appearance of important types of results by using display templates and result types.

Display templates

Display templates define the visual layout and behavior of a result type by using HTML, CSS, and JavaScript. You can customize the existing display templates, or create display templates by using an HTML editor and upload them to the display templates gallery.

Result types

Result types define how to display a set of search results based on a collection of the following:

- Rules: Determine when to apply a result type, based on the specified conditions. Rule conditions can be joined by using equality, comparison, and logical operators.
- Properties: Determine the list of managed properties for the result. You must add managed properties to the list before you map the managed property to a display template.
- Display templates: Define the visual layout of the result type.

Administrators can create and manage result types at the site level or service application level; no custom coding is required.

SharePoint 2013 Search enables you to retrieve claims information for content stored in custom external data sources that are crawled by using the connector framework. The connector framework also provides improved exception capturing and logging to help you troubleshoot errors encountered when crawling content sources using custom connectors that are built on top of the connector framework. For information about the connector framework, see Search connector framework in SharePoint 2013.
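As a concrete illustration of the Search REST service described earlier, a query URL can be assembled with any HTTP tooling. The sketch below is hedged: the host and site names are placeholders, and the endpoint path simply follows the query endpoint and the _api convention mentioned in the article.

```python
from urllib.parse import quote

# Build a GET URL for the Search REST service's "query" endpoint.
# The site URL here is a placeholder, not a real server.
def search_query_url(site_url, querytext):
    return "{0}/_api/search/query?querytext='{1}'".format(site_url, quote(querytext))

url = search_query_url("http://server/site", "sharepoint")
print(url)
```

Issuing a GET against such a URL (with appropriate authentication) returns the result tables in XML or JSON, as described above.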
Turbo C - Calculating Slope of a Line Given Two End Points

Here is the Turbo C program for Calculating Slope of a Line Given Two End Points. It uses the following formula, given the points (x1, y1) and (x2, y2):

    slope = (y2 - y1) / (x2 - x1)

Source Code

#include <stdio.h>
#include <math.h>

void main()
{
    float slope;
    float x1, y1, x2, y2;
    float dx, dy;

    printf("Program to find the slope of a line given two end points\n");
    printf("Enter X1: ");
    scanf("%f", &x1);
    printf("Enter Y1: ");
    scanf("%f", &y1);
    printf("Enter X2: ");
    scanf("%f", &x2);
    printf("Enter Y2: ");
    scanf("%f", &y2);

    dx = x2 - x1;
    dy = y2 - y1;
    slope = dy / dx;

    printf("Slope of the line with end points (%.4f, %.4f) and (%.4f, %.4f) = %.4f",
           x1, y1, x2, y2, slope);
}

Output

Program to find the slope of a line given two end points
Enter X1: 2.5
Enter Y1: 7.5
Enter X2: 12.5
Enter Y2: 18
Slope of the line with end points (2.5000, 7.5000) and (12.5000, 18.0000) = 1.0500
Press any key to continue . . .
Components and supplies
Necessary tools and machines
Apps and online services

About this project

You can view the basics of how my machine works, and some clips of it in action, in the video below. In this article I will explain in more detail the process through which I arrived at my current design. For the sake of clarity, some concepts found in the video will also be covered in this article.

The Idea

When I first had the idea to hack Pie Face, I had several design goals in mind:
- The outward appearance of the game should remain unchanged.
- The game shouldn't create a lot of additional noise (i.e. no loud whirring noises from motors or servos).
- The game should interface with a mobile phone wirelessly.

In short, I wanted to have complete control over the throwing mechanism of the game, without anyone else noticing. A daunting task to say the least.

Reverse Engineering

In order to understand how to hack Pie Face, I first needed to understand how the mechanics of the game worked. And that required some reverse engineering. This module contains all of the mechanics of Pie Face. As you can see, there's a lot going on under the hood of this game. The lever on the right in figures 1 and 2 is what the purple throwing arm attaches to, and the large gear in the center of figure 2 is what is rotated by the knob on the side of the game. If you look closely at figure 2, you can see tiny wedges scattered around the inner track of the gear. As you'll see in the following animation, these wedges are the key element that makes the game run.

This animation represents how the mechanics of Pie Face operate. The large gear is turned by the user; once the idler (shown in red) is forced upwards by one of the wedges, the throwing arm (shown in purple) is released. There are only 2 wedges in this animation for the sake of simplicity.
As you can see in figure 2, the actual gear has many more wedges scattered randomly around its inner track. This means that Pie Face isn't random at all! It's just designed to feel like it is. In reality, there is a defined pattern for when the arm will be triggered.

Implementation

I hacked the game by doing the following. I took apart the mechanics module, sanded down the triggering wedges, and drilled a hole in the idler. Then I attached a fishing line to the idler, routed the line through an eyebolt, then attached the other end to a servo. Now, the throwing arm isn't ever triggered unless the servo pulls down on the fishing line.

I won't go into detail on why I chose this method to hack the game over others, as that was covered in the video. But I will say that this method will not work with a micro-servo. The torque required to overcome the force of the idler is much higher than a micro-servo can handle. I used a servo with an output torque of 20 kg, and that has worked great so far.

On another note, I was initially concerned that the amount of noise the servo made would tip people off. And after initial testing, I discovered that you can hear the servo when it's turning inside the game, but the sound is so brief that it doesn't raise suspicion.

As far as electronics go, I used an HC-05 Bluetooth module to establish communication between an Arduino Nano and an app on my phone called Serial Bluetooth Terminal. Many different apps exist for interfacing with Bluetooth modules and an Arduino, but this was the one that worked best for me. Connecting to my Bluetooth module with this app gives me direct access to the Nano's serial port, which allows for extremely simple commands to be used to communicate with the Nano.

V0.5

During the early stages of development, I used an ESP8266 WiFi Arduino to establish communication between the phone and the Pie Face game.
This method worked fine, but I quickly abandoned this approach in favor of Bluetooth, which would allow me to connect to the game even if there wasn't a WiFi network available.

V1

My first iteration was really rough around the edges. All the connections were hot glued and taped, and the power supply was a AA battery bank I salvaged from an old RC car (figure 8.) In order to ensure the arm was triggered at the exact moment the user turns the knob (as it does in the original game), I hot glued a limit switch right beneath a protrusion that oscillates back and forth when the central gear is rotated (figures 9-10.) I then programmed the Arduino Nano to wait for both a trigger from my phone, and a trigger from the limit switch. Only when both of these conditions are met does it launch the arm.

I installed the eyebolt for the fishing line to run through right beneath the chin rest (figures 10-11.) Technically this modifies the outward appearance of the game, but it hides underneath the chin rest quite nicely, and no one would suspect anything if they saw the end of a bolt on the game anyway.

After I took the V1 out for a little test drive (the outdoor game of Pie Face in the video), I decided to rebuild my V1 and make several design improvements.

V2

The V2 was a huge improvement over the V1. I made two main modifications to my original design when I built the V2, namely:
- I mounted all of the electronics to a central belly pan which attached to the mechanics module (figure 13.) I did this to allow the two halves of the game to separate completely, which was not possible with the V1 because of the way the fishing line tied the two halves together. I mounted the belly pan by screwing it into standoffs I hot glued to the mechanics module.
- I replaced the clunky AA battery pack with a low profile 5V USB battery pack (figures 14-16.)
This was a much easier to use and cheaper solution to powering the game, as I didn't have to buy new batteries every time I wanted to use it.

Something that is really neat about the software I wrote for my hacked version of Pie Face is that it utilizes the limit switch to allow me to specify the number of turns I want the game to wait before activating the arm. It's basically a delayed activation feature. This allows me to send the character "5" to the Arduino Nano from my phone (this sets the delay count to 5 turns), then go to the camera app on my phone, and record my friends as they get hit!

All in all, the V2 has fewer potential points of failure, and it worked like a charm in field testing (the indoor game of Pie Face in the overview video).

Final Thoughts and Possible Improvements

If you decide you want to make your own hacked Pie Face game, make sure your power supply has a decent capacity. I mention this because, in order for the outward appearance of the game to remain unchanged, I didn't install any kind of power switch. This means the unit must be powered on when assembled, and the battery has to last from the time it's assembled to the time you start to play the game. The USB battery pack I used has a rating of 2600 mAh, and it can power the game for up to 6 hours when it isn't in use (i.e. it won't power the game for 6 hours while you are playing it, as the servo moving would drain the battery much more quickly).

All in all, I'm very pleased with the way this project turned out. It was a great engineering challenge, I had a blast doing it, my friends didn't suspect a thing, and I was able to fulfill all my original design criteria:
- The outward appearance of my hacked game remains virtually unchanged.
- It doesn't create a suspicious amount of noise.
- It interfaces with my Android phone wirelessly.

If I ever decide to make a V3, I will probably:
- 3D print the belly pan rather than using wood.
- Find an easier way to tension the fishing line.
In its current state, it is nearly impossible to fit my fingers in far enough to do so.
- Add a switch to the RX and TX lines connected to the Bluetooth module. The Arduino doesn't accept software changes when these serial communication lines are in use. In order to upload new software, I have to unplug the Bluetooth module, upload my code, then plug the module back in. A switch would make this process much easier.

Additional images of the V2 are shown above.

Code

Hacked Pie Face Software (Arduino)

#include <Wire.h>
#include <Servo.h>

// Code for debouncing of the limit switch was taken from the official Arduino website.

Servo arm;

int buttonState;            // the current reading from the input pin
int lastButtonState = LOW;  // the previous reading from the input pin

// the following variables are unsigned longs because the time, measured in
// milliseconds, will quickly become a bigger number than can be stored in an int.
unsigned long lastDebounceTime = 0;  // the last time the output pin was toggled
unsigned long debounceDelay = 20;    // the debounce time; increase if the output flickers

const int buttonPin = 4;  // indexing limit switch pin number

boolean isTriggered = false;
boolean throwPie = false;
char input = ' ';
int code = -1;
double totalCount = 0;
int throwIndex = -2;

// These two positions will vary depending on the tension of the fishing line
int throwPos = 155;
int standByPos = 93;

void setup()
{
  pinMode(buttonPin, INPUT_PULLUP);
  arm.attach(3);  // attaches the servo on pin 3 to the servo object
  Serial.begin(9600);
}

void loop()
{
  if (Serial.available() > 0)
  {
    input = Serial.read();
    Serial.println(input);
  }

  // a digit from the phone sets the delay count, in full turns of the knob;
  // input is reset so the target is set only once per command
  switch (input)
  {
    case '1': code = 1; throwIndex = totalCount + code; input = ' '; break;
    case '2': code = 2; throwIndex = totalCount + code; input = ' '; break;
    case '3': code = 3; throwIndex = totalCount + code; input = ' '; break;
    case '4': code = 4; throwIndex = totalCount + code; input = ' '; break;
    case '5': code = 5; throwIndex = totalCount + code; input = ' '; break;
    case '6': code = 6; throwIndex = totalCount + code; input = ' '; break;
    default: break;
  }

  // read the state of the switch into a local variable:
  int reading = !digitalRead(buttonPin);

  // standard debounce pattern from the Arduino example cited above:
  // only accept a reading once it has been stable for debounceDelay ms
  if (reading != lastButtonState)
  {
    lastDebounceTime = millis();
  }
  if ((millis() - lastDebounceTime) > debounceDelay)
  {
    if (reading != buttonState)
    {
      buttonState = reading;
      if (buttonState == HIGH)
      {
        totalCount += 0.5;  // the switch toggles twice per knob revolution
        Serial.print("\ntotalCount = ");
        Serial.println(totalCount);
        Serial.print("\nthrowIndex = ");
        Serial.println(throwIndex);
      }
    }
  }

  if (totalCount - throwIndex == -1)
  {
    isTriggered = true;
    throwIndex = -2;
  }

  if (isTriggered == true && buttonState == HIGH)
  {
    throwPie = true;
    isTriggered = false;
  }

  if (throwPie)
  {
    arm.write(throwPos);
    Serial.println("\nIki Iki Iki Ptwang!");
    delay(500);
    arm.write(standByPos);
    throwPie = false;
  }

  // save the reading. Next time through the loop, it'll be the lastButtonState:
  lastButtonState = reading;
}

Schematics

Author: HarrisonMcIntyre
Published on September 17, 2019
From: Lorien Dunn (l.dunn_at_[hidden])
Date: 2001-03-15 19:05:16

Hullo,

I've now converted all my pointers to boost::shared_ptr, and all seems to be well. However I'm getting python TypeErrors when I use code like this:

class MyPyClass(MyBasePyClass, MyBaseCPPClass):
    def __init__(self):
        MyBasePyClass.__init__(self)
        MyBaseCPPClass.__init__(self)

The problem occurs with MyBasePyClass; Python says "unbound method must be called with class instance 1st argument". This makes sense to me - I'm passing a meta-class instead of a normal class. However I don't know how to work around it.

Thanks for any help,
Lorien Dunn
I have been working in JavaScript a lot recently and have missed coding in python. I wanted to work on something that would have real world value, so I decided to see if I could load an ESRI REST endpoint in to a pandas DataFrame. This post will only scratch the surface of what you can do with data in python but should give you an idea of what is possible and allow you to imagine some really interesting possibilities. I have posted all of the code as a static IPython notebook and on Python Fiddle for easy copy+paste.

Getting Data

We are going to start by grabbing some open data from the City of Albuquerque - I will use the Crime Incidents REST service. The first step is to make the request and read it.

import urllib, urllib2
param = {'where':'1=1','outFields':'*','f':'json'}
url = '…/APD_Incidents/MapServer/0/query?' + urllib.urlencode(param)
rawreply = urllib2.urlopen(url).read()

If you print rawreply, you will see a long string. We know it is a string because if you print rawreply[0] you will see { as the result. To confirm it, you can type(rawreply) and get back str.

Working with the Data

Now we need to convert it to JSON so that we can do something with the data.

import simplejson
reply = simplejson.loads(rawreply)
print reply["features"][0]["attributes"]["date"]
print reply["features"][0]["attributes"]["CVINC_TYPE"]

The above code will return something like:

1405468800000
LARCENY ALL OTHER

How many features do we have? 24,440.

print len(reply["features"])
reply["features"][:3]

The above code will give you the count and show you the first three results. You can grab subsets of the data by slicing. If you want records 2 through 4, you can reply["features"][2:5]. Let's get to the good stuff.

Pandas DataFrame

JSON is great, but let's put the data in to a table (DataFrame). First, a disclaimer. ESRI REST services return a deeply nested JSON object.
Converting this to a DataFrame is more difficult than:

import pandas.io.json as j
jsondata = j.read_json(the URL)

But it is not much more difficult. We just need to recreate the JSON object, grabbing what we are interested in - the attributes of the features. We need a loop that will create an array of dictionaries. Each dictionary will be the attributes of a single feature.

count = 0
myformateddata=[]
while (count < len(reply["features"])):
    mydict={}
    for key, value in reply["features"][count]["attributes"].iteritems():
        mydict[key]= value
    myformateddata.append(mydict)
    count = count + 1

The code above initializes a counter, an array, and a dictionary. It then loops until there are no more features. The loop reads the attributes of the feature and creates a dictionary entry for each attribute and its value. Now we have an array of dictionaries. If you print len(myformateddata) you will get 24,440. That is the same number of records we had in our JSON object.

We have all our data and it is an array. We can now create a DataFrame from it.

from pandas import DataFrame as df
myFrame = df(data=myformateddata)
myFrame

The code above imports the tools we need, assigns the frame to a variable, and then prints it. You should see a table like the image below.

Data Manipulation

So what can we do with a DataFrame? First, let's save it. We can export the DataFrame to Excel (The City of Albuquerque provides open data in several formats, but other sources may only provide you with the REST API, so this would be a valuable method for converting the data).

from pandas import ExcelWriter
writer = ExcelWriter('CrimeIncidents.xlsx')
myFrame.to_excel(writer,'Sheet1',index=False)
writer.save()

Great! The data has been exported. Where did it go? The code below will return the directory with the file.

import os
os.getcwd()

You can open the Excel file and work with the data or send it to someone else. But let's continue with some basic manipulation in the DataFrame.
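As an aside, the attribute-gathering loop above collapses to a single list comprehension, since DataFrame accepts a list of dicts directly. The tiny reply dict here stands in for the parsed service response:

```python
import pandas as pd

# Stand-in for the parsed JSON reply from the service.
reply = {"features": [
    {"attributes": {"date": 1405468800000, "CVINC_TYPE": "LARCENY ALL OTHER"}},
    {"attributes": {"date": 1405468800000, "CVINC_TYPE": "BURGLARY"}},
]}

# One line replaces the counter/while/append dance.
myFrame = pd.DataFrame([f["attributes"] for f in reply["features"]])
print(len(myFrame))
```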
How many of each type of incident do we have in our data?

myFrame["CVINC_TYPE"].value_counts()

Now I would like a bar chart of the top 5 most frequent incident types.

import matplotlib.pyplot as plt
incidentType=myFrame["CVINC_TYPE"].value_counts()[:5]
incidentType.plot(kind='barh',rot=0)

Lastly, let's filter our results by date.

import datetime
import time
myDate=datetime.datetime(2015,1,10)
milli=time.mktime(myDate.timetuple()) * 1000 + myDate.microsecond / 1000

The above code creates a date of January 10, 2015. It then converts it to milliseconds - 1420873200000. If we pass the date to the DataFrame, we can grab all the incidents after January 10, 2015.

myFrame[(myFrame.date>milli)]

Now you know how to connect to an ESRI REST endpoint, grab the data, convert it to JSON, and put it in a DataFrame. Once you have it in the DataFrame, you can now display a table, export the data to Excel, plot a bar chart, and filter the data on any field.
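One caveat about the conversion above: time.mktime interprets the datetime in the machine's local time zone, so the millisecond value shifts with the zone (the 1420873200000 shown corresponds to midnight Mountain time). When reproducibility matters, calendar.timegm treats the tuple as UTC instead:

```python
import calendar
import datetime

myDate = datetime.datetime(2015, 1, 10)
# timegm treats the struct_time as UTC, so this value is the same everywhere.
milli_utc = calendar.timegm(myDate.timetuple()) * 1000
print(milli_utc)
```

The ESRI date fields are epoch milliseconds, so either value works as a filter threshold as long as you are consistent about the time zone.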
This class transparently opens gzipped or bzip2-ed files.

#include <l_stdio_wrap.h>

This is like a fancier version of File_Ptr_Read, handling for you the chance that you've zipped up the target file, even if you don't remember doing so. I.e., this will automatically try your filename with zip-like suffixes added on if the ordinary filename is not found (or otherwise not readable). You, the user, are not expected to append ".gz" or ".bz2" to the filename.

Rather, the ctor of this class takes your filename (say "apple.txt") and tries to open it. If it fails, it automatically looks for the presence of zipped versions (e.g., "apple.txt.gz" and "apple.txt.bz2"), and if they are found, this unzips them in a temporary file and gives you a pointer to the temporary file. This temporary file has a randomly generated filename; use method get_temp_filename() to access it. When this object is destroyed the temporary file is deleted.

In other words, if the filename given to the ctor is present, then this object acts just like a File_Ptr_Read. If not, but if filename plus a common zip suffix exists, then this object decompresses to a temporary file and opens that as a File_Ptr_Read. We do not specify the order that the compression suffixes are tested; if more than one are present, any one of them might be opened.

If the input filename is, for example, "apple.txt.bz2" and if that file is present, this file will simply open the binary compressed file for you, which is likely to be non-text. That maybe wasn't what you meant.
How to build GLM models with Dask

GLM stands for Generalized Linear Models. These models are mainly used to solve regression problems with continuous response values. The Dask-GLM project is nicely modularized: it allows different GLM families and regularizers, and it includes a relatively direct interface for implementing custom GLMs.

#! pip install dask_glm

from dask_glm.datasets import make_regression
import dask_glm.algorithms
import dask

We will create the regression data and pass it through persist, so that x and y are materialized as Dask arrays partitioned in chunks of 100 rows.

x, y = make_regression(n_samples=2000, n_features=100, n_informative=5, chunksize=100)
x, y = dask.persist(x, y)
print(x)
print(y)

algo = dask_glm.algorithms.admm(x, y, max_iter=5)
algo
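Whatever solver estimates the coefficients, they are used the same way in a plain linear model: predictions are the feature matrix times the coefficient vector. A minimal NumPy illustration, independent of dask_glm and with made-up numbers standing in for a fitted result:

```python
import numpy as np

X = np.array([[1.0, 2.0],
              [3.0, 4.0]])
beta = np.array([0.5, -1.0])  # pretend these came back from the solver

y_pred = X @ beta  # one prediction per row of X
print(y_pred)
```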
Hello world!

Monday, 9. June 2008, 18:26:28

using System;

class HelloWorld
{
    static void Main()
    {
        Console.WriteLine("Hello World!");
    }
}

Okay, perhaps I should not scare all that would dare (hirr) to read this blog away. I am a computer science student located in Oslo, with great interest for web programming and designing. I am running several other blogs, but this could probably be somewhat more "casual", and perhaps people would even read it. Perhaps.

I also have an interest in gaming, and then in particular MMO games such as World of Warcraft or Lord of the Rings Online, and I am playing more or less casually in both of those games. I am longing for the upcoming game from NCsoft, Aion, which, from what it looks, will be amazing. Recently their website was put up, and I am looking more and more forward to playing it.

More about me, I live in a small flat with my beautiful girlfriend (whom I might get to blog at my.opera.com too), but we're moving 1st and 2nd of July. That's going to be so great, as we are living on the top floor with only one great window in the living room. That's right, no windows in our bedroom, and thus it gets incredibly hot during the summertime. But from July -- NO MORE!

There are not very many more things to tell about me. I am soon turning 22, and life's good. If you follow this blog (at least sporadically), you will probably get to know me better. If I'm the Prime Evil or actually a nice guy, it's up to you to decide.
http://my.opera.com/Amnith/blog/2008/06/09/hello-world
crawl-002
refinedweb
351
73.68
Tigran Aivazian writes:> On Tue, 22 Feb 2000, Richard Gooch wrote:> > I don't have /proc/rtc on my machine, but I do have /dev/misc/rtc, so> > I don't know where /proc/rtc comes from.> > Sorry, my typo - I meant /proc/driver/rtc. The source is > drivers/char/rtc.c and it does OK, I've had a quick squiz at it.>.> > I think we should be strict with the new devfs namespace. If it's not> > actually part of the CPU, it doesn't belong in /dev/cpu. If we're not> > strict, we end up with the same ad-hockery as /proc.> > ok, then it could go to /dev/misc/rtc.txt?Sigh. It's a pity that the RTC dev driver implements a read() method.Otherwise I'd urge a scheme like MTRR: read() for humans and ioctl()for programmes.Hm. Is the current rtc_read() method actually used by applications?Otherwise, /dev/misc/rtc.txt is probably
https://lkml.org/lkml/2000/2/22/173
CC-MAIN-2014-10
refinedweb
162
85.69
WSActivate (C Function) Details - WSActivate() can be called only after one of the WSOpen functions such as WSOpenArgcArgv() or WSOpenString(). - Any call that reads data from or writes data to a link will activate the link automatically if WSActivate() has not been called. - WSActivate() returns 0 in the event of an error, and a nonzero value if the function succeeds. - WSActivate() on an already activated link will do nothing. - If the other side of the link has not yet been created, then the behavior depends upon whether the link was created using "-linkconnect". If the link was created, then WSActivate() will wait for the link to be both connected and activated. If the link is a connecting link, then WSActivate() will immediately return an error and close the link. - If WSActivate() blocks, it will call the yield function set by WSSetYieldFunction(). - When called on an inactive link, WSReady() can be used to determine whether the other side of the link has been created. - Use WSError() to retrieve the error code if WSActivate() fails. - WSActivate() is declared in the WSTP header file wstp.h. Examples Basic Examples (1) #include "wstp.h" /* create a link and establish the connection */ int main(int argc, char **argv) { WSENV env; WSLINK link; int error; env = WSInitialize((char *)0); if(env == (WSENV)0) { /* unable to initialize the WSTP environment */ } /* let WSOpenArgcArgv process the command line */ link = WSOpenArgcArgv(env, argc, argv, &error); if(link == (WSLINK)0 || error != WSEOK) { /* unable to create the link */ } /* WSActivate will establish the connection */ if(!WSActivate(link)) { /* unable to establish communication */ } /* ... */ WSClose(link); WSDeinitialize(env); return 0; }
https://reference.wolfram.com/language/ref/c/WSActivate.html
CC-MAIN-2020-05
refinedweb
264
55.54
Trying to use the following on a Sikuli script: from google.cloud import vision ran: sudo pip install google-cloud-vision when I use Sikuli IDE I get error: [error] ImportError ( No module named google ) How can I get this to work? Thx Question information - Language: - English Edit question - Status: - Answered - For: - Sikuli Edit question - Assignee: - No assignee Edit question - Last query: - 2019-10-10 - Last reply: - 2019-10-10 see: /sikulix- 2014.readthedoc s.io/en/ latest/ scenarios. html#access- python- packages- from-sikulix- scripts- run-by- sikulix- gui-or- commandline https:/ be aware: python modules used in the SikuliX environment must: - be written in plain python - not contain any C-based stuff nor dependencies to native libraries - conform to Python language level 2.7 means: can be used in a Jython environment.
https://answers.launchpad.net/sikuli/+question/684900
CC-MAIN-2020-10
refinedweb
134
59.94
Understanding Java RMI Internals Is RMIRegistry Absolutely Necessary? Can I Do Away With RMIRegistry and Bind/Rebind/Lookup Methods? That is how it works. Is RMIRegistry an absolute necessity? From the above discussion, it is evident that an RMIRegistry is not a must. RMIRegistry is used just for the bootstrapping purpose, nothing else. Once the client gets the stub object from the Registry, the Registry is not used anymore. The communication takes place directly between client and server by using sockets. So, if you can provide the bootstrapping in some other way, you can eliminate the use of Registry itself. In other words, if the client can get an object of the stub from the server somehow, that is just enough. That means if you have some facility to transport the stub object created in the server to the client machine, you don't have to use the RMIRegistry or bin/rebind/lookup methods. That sounds great. eh? Also, it should be noted that even if the client has CalcImpl_Stub.class on its machine, it cannot simply create an object of the stub because its constructor takes a RemoteRef reference as a parameter; you can get only from an object of the remote server, which is exactly what we are trying to access! To facilitate the transportation of the stub object from server to client, you can build a transport facility of your own that is based on sockets that serialize the Stub object and sends it to the clients. That will eliminate the need of the Registry at all! I will give you an example that does not use the RMIRegistry. But, in this example, the client and server run on the same machine. You can use socket programming as I stated earlier if the client and server are on different machines, but the program will make the basic concepts clear. 
Source code of CalcImpl.java:

    import java.rmi.server.UnicastRemoteObject;
    import java.rmi.server.RemoteRef;
    import java.rmi.RemoteException;

    public class CalcImpl extends UnicastRemoteObject implements Calc
    {
        public CalcImpl() throws RemoteException
        {
            super();
        }

        public int add(int i, int j) throws RemoteException
        {
            return i + j;
        }

        public static void main(String[] args) throws Exception
        {
            CalcImpl ci = new CalcImpl();
            RemoteRef ref = ci.getRef();
            CalcClient cc = new CalcClient(ref);
        }
    }

Source code of CalcClient.java:

    import java.rmi.server.*;
    import java.rmi.*;

    public class CalcClient
    {
        public CalcClient(RemoteRef ref) throws RemoteException
        {
            CalcImpl_Stub stub = new CalcImpl_Stub(ref);
            System.out.println(stub.add(3, 4));
        }
    }

On the console (the server's main constructs the client, so a single run suffices):

    C:\test2>java CalcImpl

You will get an output of 7 on your screen. That is it!!!
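The do-it-yourself transport described above boils down to ordinary Java serialization. Below is a hedged sketch, not the article's code: the class names are invented for illustration, and an in-memory buffer stands in for the socket streams that would be used between two machines.

```java
// Hedged sketch: "FakeStub" is an invented stand-in for a serializable
// RMI stub. It shows the round trip a home-made transport would perform;
// in practice the buffers below would be the OutputStream and
// InputStream of a java.net.Socket.
import java.io.*;

public class StubTransportSketch {
    static class FakeStub implements Serializable {
        private static final long serialVersionUID = 1L;
        int add(int i, int j) { return i + j; }
    }

    public static void main(String[] args) throws Exception {
        // "Server" side: serialize the stub into the transport stream.
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        ObjectOutputStream out = new ObjectOutputStream(buf);
        out.writeObject(new FakeStub());
        out.flush();

        // "Client" side: deserialize it and call through it.
        ObjectInputStream in =
            new ObjectInputStream(new ByteArrayInputStream(buf.toByteArray()));
        FakeStub stub = (FakeStub) in.readObject();
        System.out.println(stub.add(3, 4)); // prints 7
    }
}
```

A real RMI stub is Serializable too, so the same round trip works for it once its class file is available on the client side.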
https://www.developer.com/java/other/article.php/10936_3455311_3/Understanding-Java-RMI-Internals.htm
Scott Ellsworth wrote:
> "Non public classes cause two problems. Firstly depend cannot relate the
> class file to a source file."
>
> I had remembered this as only impacting static constants, not extra
> non-public classes defined in the same file.

In 1.4.1 <depend> could remove a non-public class which was out of date but
not remove the main class. This would not trigger <javac> to recompile the
class and you could have some problems. In 1.5 <depend> now warns you about
such classes being out of date and does not remove them. I think this is
preferable. Another option would be to trigger a complete build in this case.

> Any chance someone is working on the "In the future this may be
> addressed using the source file attribute in the classfile" enhancement
> suggested in the documentation?

There is some chance :-). Of course, such an approach will only work if the
class file is compiled with debugging info. Then again, that is usually
going to be the case when you want incremental builds.

Conor
http://mail-archives.apache.org/mod_mbox/ant-user/200205.mbox/%3C3CE6245D.8060904@cortexebusiness.com.au%3E
C Programming Pointers and Arrays

Arrays are closely related to pointers in C programming, but there is an important difference between them: a pointer variable can take different addresses as its value, whereas the address of an array is fixed. This can be demonstrated by an example:

    #include <stdio.h>

    int main()
    {
        char charArr[4];
        int i;
        for (i = 0; i < 4; ++i)
        {
            printf("Address of charArr[%d] = %p\n", i, (void *)&charArr[i]);
        }
        return 0;
    }

    #include <stdio.h>

    int main()
    {
        int i, classes[6], sum = 0;
        printf("Enter 6 numbers:\n");
        for (i = 0; i < 6; ++i)
        {
            // (classes + i) is equivalent to &classes[i]
            scanf("%d", (classes + i));

            // *(classes + i) is equivalent to classes[i]
            sum += *(classes + i);
        }
        printf("Sum = %d", sum);
        return 0;
    }

Output

    Enter 6 numbers:
    2
    3
    4
    5
    3
    4
    Sum = 21
https://www.programiz.com/c-programming/c-pointers-arrays
Hi, I am going through SCJP material and I have a doubt regarding a question I came across in that material.

Yes, the answer is B alright. Here is how it is: let us say A1 is the object that is referenced by a1, and A2 is the one referenced by a2. Similarly, B1 for b1 and B2 for b2. a1.b1 and a2.b1 are the same static field, so even when a1 = null, a2.b1 holds a reference to B1. And a2.b2 = b2 holds the reference to the object B2. So only A1 has no reference (A2 is referenced by a2). Thus only A1 is eligible for garbage collection.

Hi Pradeep, thanks for your reply. Here a2.b1 = b1 is not assigned in the code. But still, since a1.b1 is static and is made to point to b1, the object of B1, are you saying that b1 will stay alive until the class ends? I can understand the reason why the object of b2 will stay alive, since a2.b2 is made to point to it. Thanks

The code is hard to read. The following is broken down with comments that explain each reference.

    class Beta { }

    class Alpha {
        static Beta b1; // Reference X1
        Beta b2;        // Reference X2
    }

    public class Tester {
        public static void main(String[] args) {
            Beta b1 = new Beta();   // Reference Y1 - Instance 1
            Beta b2 = new Beta();   // Reference Y2 - Instance 2
            Alpha a1 = new Alpha(); // Reference Y3 - Instance 3
            Alpha a2 = new Alpha(); // Reference Y4 - Instance 4

            a1.b1 = b1; // Reference X1 set to Instance 1
            a1.b2 = b1; // Reference Y3.X2 set to Instance 1
            a2.b2 = b2; // Reference Y4.X2 set to Instance 2

            a1 = null;  // Reference Y3 clear
            b1 = null;  // Reference Y1 clear
            b2 = null;  // Reference Y2 clear

            // At this point there are 4 instances.
            // Result:
            // Y4 is still set,
            //   thus Instance 4 is still referenced,
            //   thus Y4.X2 is still set,
            //   thus Instance 2 is still referenced.
            // X1 is still set,
            //   thus Instance 1 is still referenced.
            // In the above, 3 instances are still referenced.
            // Thus there is one instance, Instance 3, which
            // is eligible.

            // do stuff
        }
    }

Yes, that's right... the class memory is shared by all of its objects, so a1.b1 is the same as a2.b1.
https://community.oracle.com/message/11089105
On Thursday, January 06, 2011, Jiri Slaby wrote:
> On 01/06/2011 04:57 PM, Rafael J. Wysocki wrote:
> > On Thursday, January 06, 2011, Jiri Slaby wrote:
> >> When ioremap fails (which might happen for some reason),
> >
> > If it happens, something is seriously wrong (see below).
>
> I agree that something is broken, however ioremap may fail for a dozen
> reasons. Ignoring the retval is a *bad* idea and it took me a while to
> sort out what is wrong. Especially if one has no console like throughout
> suspend. If it was handled properly, I would know immediately. (There
> should be a message printed out which I forgot to add.)

It wasn't handled, because it _never_ failed previously. The ACPI mapping
change apparently revealed a deeper problem.

I'm not saying the patch isn't useful, though, and I'm going to take it
for 2.6.38 (perhaps with minor modifications).

> > BTW, to keep things in context, please post fixes like this in the same
> > thread in which you reported the problem. At least please retain the CC
> > list from there.
>
> I actually did, there is:
> In-Reply-To: <201101060028.43342.rjw@sisk.pl>
> and it successfully threaded to the conversation for me in TB.

But you trimmed the CC line, didn't you? Which caused my filter to put the
patch into a different folder. :-)

> >> we nicely oops in suspend_nvs_save due to NULL dereference by memcpy in
> >> there. Fail gracefully instead.
> >>
> >> Signed-off-by: Jiri Slaby <jslaby@suse.cz>
> >> Cc: "Rafael J. Wysocki" <rjw@sisk.pl>
> >> ---
> >>  drivers/acpi/sleep.c    | 5 ++---
> >>  include/linux/suspend.h | 4 ++--
> >>  kernel/power/nvs.c      | 8 +++++++-
> >>  3 files changed, 11 insertions(+), 6 deletions(-)
> >>
> >> diff --git a/drivers/acpi/sleep.c b/drivers/acpi/sleep.c
> >> index c423231..f94c9a9 100644
> >> --- a/drivers/acpi/sleep.c
> >> +++ b/drivers/acpi/sleep.c
> >> @@ -124,8 +124,7 @@ static int acpi_pm_freeze(void)
> >>  static int acpi_pm_pre_suspend(void)
> >>  {
> >>  	acpi_pm_freeze();
> >> -	suspend_nvs_save();
> >> -	return 0;
> >> +	return suspend_nvs_save();
> >>  }
> >>
> >>  /**
> >> @@ -151,7 +150,7 @@ static int acpi_pm_prepare(void)
> >>  {
> >>  	int error = __acpi_pm_prepare();
> >>  	if (!error)
> >> -		acpi_pm_pre_suspend();
> >> +		error = acpi_pm_pre_suspend();
> >>
> >>  	return error;
> >>  }
> >> diff --git a/include/linux/suspend.h b/include/linux/suspend.h
> >> index c1f4998..3ac2551 100644
> >> --- a/include/linux/suspend.h
> >> +++ b/include/linux/suspend.h
> >> @@ -262,7 +262,7 @@ static inline bool system_entering_hibernation(void) { return false; }
> >>  extern int suspend_nvs_register(unsigned long start, unsigned long size);
> >>  extern int suspend_nvs_alloc(void);
> >>  extern void suspend_nvs_free(void);
> >> -extern void suspend_nvs_save(void);
> >> +extern int suspend_nvs_save(void);
> >>  extern void suspend_nvs_restore(void);
> >>  #else /* CONFIG_SUSPEND_NVS */
> >>  static inline int suspend_nvs_register(unsigned long a, unsigned long b)
> >> @@ -271,7 +271,7 @@ static inline int suspend_nvs_register(unsigned long a, unsigned long b)
> >>  }
> >>  static inline int suspend_nvs_alloc(void) { return 0; }
> >>  static inline void suspend_nvs_free(void) {}
> >> -static inline void suspend_nvs_save(void) {}
> >> +static inline int suspend_nvs_save(void) {}
> >>  static inline void suspend_nvs_restore(void) {}
> >>  #endif /* CONFIG_SUSPEND_NVS */
> >>
> >> diff --git a/kernel/power/nvs.c b/kernel/power/nvs.c
> >> index 1836db6..57c6fab 100644
> >> --- a/kernel/power/nvs.c
> >> +++ b/kernel/power/nvs.c
> >> @@ -105,7 +105,7 @@ int suspend_nvs_alloc(void)
> >>  /**
> >>   * suspend_nvs_save - save NVS memory regions
> >>   */
> >> -void suspend_nvs_save(void)
> >> +int suspend_nvs_save(void)
> >>  {
> >>  	struct nvs_page *entry;
> >>
> >> @@ -114,8 +114,14 @@ void suspend_nvs_save(void)
> >>  	list_for_each_entry(entry, &nvs_list, node)
> >>  		if (entry->data) {
> >>  			entry->kaddr = ioremap(entry->phys_start, entry->size);
> >
> > I wonder what happens if you simply change the ioremap() here to
> > ioremap_nocache() without any other modifications?
>
> ioremap *is* ioremap_nocache on x86. And that's the conflict it
> complains about I guess? Don't you mean ioremap_cache?

Yes, I meant ioremap_cache(), sorry. Using ioremap_cache() here fixes the
problem for Len (he's seeing the same issue on his test machine).

The question is why it helps, though. My theory is that we have mapped the
same area already using ioremap_cache() and now we're trying to map it again
using ioremap_nocache(), hence the conflict. I need to confirm this.

> > It _really_ shouldn't fail here, because the NVS pages are known to be
> > present.
>
> It fails because of conflicting maps as can be seen in the photo. At
> least I think so.

Yes, I think so too. Which is _suspicious_.

Thanks,
Rafael
https://lkml.org/lkml/2011/1/6/248
On Wed, Aug 04, 1999 at 11:53:50AM -0700, costin@dnt.ro wrote:
> Ok, but besides the fact that it will work in ANY language, on ANY
> platform and support ALL existing protocols, I haven't heard anything
> about the API itself.

We haven't gotten this far, but we really need to start, as James Todd
has said.

> Will the client pool the config information using a get method or will
> the config service push the data (or both)?

Most likely both, as more options at the outset aren't bad.

> Will it support notification or will it poll for changes (and how can you
> detect the individual changes if you use an XML file)?

I think we need to support both of these. It seems to me that it'd be nice
to have event notification, but it is also necessary to have polling
ability (with checking (ie. diffs) done on the client side).

> Will it support a federated namespace or will everything be aggregated
> into an XML file?

Good question. And I don't know the answer.

> If you want to use Corba/IDL, how will it be different from the existing
> services for management or naming (that solve the same problem)?
> How will it be different/better from existing Java APIs that deal with
> management (JMX) or with naming (JNDI)?

Again, good question, one I can't answer yet.

> Sure, but Tomcat (as a client) will need to use a clear API to access
> the service. That's missing. Even if it's a "generic" API, it still has
> to be defined, to have some methods that can be called.

As I said above, we need to start working on an API. Defining that API is
the goal of this discussion.

> Well, JNDI is the Java way to access directory services. There are C APIs
> for that, Perl APIs for that, etc. If the _API_ will be based on a
> namespace, it would be a bad idea to invent another way for Java.

Yep, but that has to be decided.

> Wait! Do you mean "take the data out of LDAP and create a XML file"???
> Then you lose all advantages of LDAP... You'll need to invent a new
> replication protocol (in case you have multiple servers), and add a
> security layer that will be hard to enforce. (In LDAP you can define
> users and permissions at the context level, while in a XML file you
> can't specify that a certain tag is readable only by a certain user.)
> You also lose the event notification in LDAP.

No, what I was thinking here is that the LDAP entry for a certain
configuration file contain the appropriate XML, not generate it.

- Troy
http://mail-archives.apache.org/mod_mbox/tomcat-dev/199908.mbox/%3C19990804150935.D13832@usite.net%3E
Devices

Develop an app that's ready to connect to a wide range of wired and wireless devices that allow users to enjoy the mobility and flexibility of a Windows 8.1 device when they are enjoying or creating content at home or at work.

New or updated in Windows 8.1
- Human Interface Device (HID) support
- Point of Service (PoS) device support
- USB device support
- Bluetooth device support
- 3D printer support
- Scanning support

Human Interface Device (HID) support

[Get the Custom HID Device Access and Sample motion-sensor, firmware and Windows Runtime app for HID samples now.]

The Windows.Devices.HumanInterfaceDevice API lets your Windows Store app access devices that support the Human Interface Device (HID) protocol. The new API is intended for two distinct audiences:

The hardware partner who has created a HID peripheral and now needs a Windows Store app that lets Windows 8.1 users access or control that device. (Hardware partners can declare one app as automatically acquired when a user connects their peripheral.)

The app developer who wants to create an app for one of these peripherals.

The hardware partner is already familiar with HID but needs to understand the requirements of Windows Store apps. The app developer will probably need to learn the protocol. If you are new to HID, see Introduction to HID Concepts in the HID driver documentation on MSDN. But before you use the new API, review the Limitations section to find out whether your device falls within the supported categories.

Note: The target device for the following examples is the SuperMUTT, a test device that you can order from JJG Technologies.

Connecting to a HID

This code example shows how a Windows Store app, built with XAML and C#, uses the HidDevice.GetDeviceSelector method to create a selector for a specific HID device. Then the app uses the HidDevice.FromIdAsync method to open a connection to that device.
    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;
    using System.Threading.Tasks;
    using Windows.Devices.Enumeration;
    using Windows.Devices.HumanInterfaceDevice;
    using Windows.Storage;
    using Windows.Storage.Streams;

    namespace HidSampleCS
    {
        class Enumeration
        {
            // Enumerate HID devices.
            private async void EnumerateHidDevices()
            {
                UInt32 vendorId = 0x045E;
                UInt32 productId = 0x078F;
                UInt32 usagePage = 0xFF00;
                UInt32 usageId = 0x0001;

                // Create a selector that gets a HID device using VID/PID and a
                // VendorDefined usage.
                string selector = HidDevice.GetDeviceSelector(usagePage, usageId, vendorId, productId);

                // Enumerate devices using the selector.
                var devices = await DeviceInformation.FindAllAsync(selector);

                if (devices.Count > 0)
                {
                    // Open the target HID device at index 0.
                    HidDevice device = await HidDevice.FromIdAsync(devices.ElementAt(0).Id, FileAccessMode.ReadWrite);

                    // At this point the device is available to communicate with,
                    // so we can send/receive HID reports from it or
                    // query it for control descriptions.
                }
                else
                {
                    // There were no HID devices that met the selector criteria.
                    this.NotifyUser("MUTT HID device not found");
                }
            }
        }
    }

Retrieving data from a HID

Apps retrieve data from a HID device by using input reports. This example demonstrates how an app uses the HidInputReport.GetNumericControl method to retrieve a numeric value from a SuperMUTT device. In the example, the connected HID device is represented by the DeviceList.Current.CurrentDevice object.
    private async Task GetNumericInputReportAsync()
    {
        var inputReport = await DeviceList.Current.CurrentDevice.GetInputReportAsync(SuperMutt.ReadWriteBuffer.ReportId);
        var inputReportControl = inputReport.GetNumericControl(SuperMutt.ReadWriteBuffer.NumericUsagePage, SuperMutt.ReadWriteBuffer.NumericUsageId);

        var data = inputReportControl.Value;

        rootPage.NotifyUser("Value read: " + data.ToString("X2", NumberFormatInfo.InvariantInfo), NotifyType.StatusMessage);
    }

Limitations of the HID API

Consider the following when considering implementation of this API set.

The Windows.Devices.HumanInterfaceDevice API supports most HID devices. However, it blocks the top-level app collection represented by these usage pages:

In-box device drivers only

In addition to blocking support for the previous list of usage pages, the new API also requires that your app runs by using the in-box device drivers that come with Windows 8.1. The API does not support vendor-supplied device drivers.

Peripheral device support only

The HID API is designed primarily for apps that access peripheral devices. The app-developer scenario described earlier applies only to such devices. Although the API can be used to access internal (non-peripheral) devices, access to these devices is limited to privileged apps that only the device manufacturer creates. App developers cannot access internal devices.

No support for Control Panel apps

Apps created with the HID API are per-user apps. This means that they can't save settings, which is typically a requirement for a Control Panel app.

Point of service (POS) device support

[Get the Barcode scanner and Magnetic stripe reader samples now.]

Windows 8.1 introduces a new Windows.Devices.PointOfService namespace for specialized point-of-service (POS) devices. This release supports barcode scanners and magnetic stripe readers.
Use the manufacturer-neutral POS API to write Windows Store apps that can access POS devices from various makers and enable a many-to-many mapping between POS apps and POS devices. This namespace is based on the industry-standard Unified Point of Service (UPOS) specification. For more info, see the UnifiedPOS website.

For the barcode scanner, device creation occurs by static activation using the GetDefaultAsync method, which gets the first available barcode scanner connected to the tablet (if there is more than one scanner). You can also create a specific device by using the FromIdAsync method, which gets a barcode scanner from a DeviceInformation ID. ClaimScannerAsync gets your app exclusive access to the device and prevents other apps from using it. And EnableAsync gets the device ready for a DataReceived event. The same pattern and API elements apply to the magnetic stripe reader.

These code examples show how to get a barcode scanner that's connected to a tablet, and how to enable it to receive data.

    // Creates the barcode scanner, claims it for exclusive use, and enables it to receive data.
    var _scanner = null;
    var _claimedScanner = null;

    function startReceivingData() {
        Windows.Devices.PointOfService.BarcodeScanner.getDefaultAsync().then(function (scanner) {
            if (scanner !== null) {
                _scanner = scanner;
                scanner.claimScannerAsync().done(function (claimedScanner) {
                    if (claimedScanner !== null) {
                        _claimedScanner = claimedScanner;
                        claimedScanner.isDecodeDataEnabled = true;
                        claimedScanner.addEventListener("datareceived", onDataReceived);
                        claimedScanner.enableAsync().done(function () {
                            document.getElementById("btnStartReading").disabled = true;
                            document.getElementById("btnEndReading").disabled = false;
                        }, function error(e) {
                            // Failed to enable scanner.
                        });
                    } else {
                        // Could not claim the scanner.
                    }
                }, function error(e) {
                    // Could not claim the scanner.
                });
            } else {
                // Barcode scanner not found. Connect a barcode scanner.
            }
        }, function error(e) {
            // Asynchronous method failed.
        });
    }

For more info about these methods, events, and properties, see the Windows.Devices.PointOfService reference.

This API provides an easy migration path for POS developers. You can turn your desktop apps using Microsoft POS for .NET into Windows Store apps using the Windows Runtime and running on tablets. The API model is similar to POS for .NET with some modifications.

USB device support

[Get the Custom USB device access sample now.]

A new Windows 8.1 namespace offers app support for USB devices: Windows.Devices.Usb. You can use it to write a Windows Store app that talks to a custom USB device. "Custom" in this context means a peripheral device for which Microsoft does not provide an in-box class driver.

The official USB specification is the industry standard for hardware manufacturers who make USB peripherals for PCs. Windows includes in-box drivers for most of those devices. For devices that do not have an in-box driver, users can install the generic in-box Winusb.sys driver provided by Microsoft. As they make new peripherals, manufacturers can provide their own custom driver or use Winusb.sys. If they choose Winusb.sys, you can easily write accompanying apps that let users interact with the device. In earlier versions of Windows, such apps were desktop apps, written by using WinUSB Functions. In Windows 8.1, Windows Store apps can be written by using the new Windows.Devices.Usb namespace.

When to use the new USB API

You can use the new API if the following are all true:

The device driver is the Microsoft-provided Winusb.sys driver. The namespace does not support manufacturer-supplied device drivers. When you plug in the device, Windows may or may not install Winusb.sys. Select WinUsb Device and click Next to install the driver.

You provide the info about your device as device capability declarations in the app manifest. This associates your app with the device.
For more info, see Updating the app manifest package for a USB device.

The device belongs to one of the device classes supported by the namespace. Note that a custom device can belong to a predefined USB device class or its functionality can be defined by the manufacturer.

When not to use the new USB API

You can't use the new API if either of the following are true:

You want your app to access internal devices. Windows.Devices.Usb is for accessing peripheral devices only. A Windows Store app can access internal USB devices only if it is a privileged app that is explicitly declared by the OEM for that system.

Your app is a Control Panel app. Apps that use the namespace must be per-user apps. That is, they can communicate with the device but cannot save settings data outside their scope, functionality that's required by many Control Panel apps.

Don't use the Windows.Devices.Usb namespace for these USB device classes:
- Audio class (0x01)
- HID class (0x03)
- Image class (0x06)
- Printer class (0x07)
- Mass storage class (0x08)
- Smart card class (0x0B)
- Audio/video class (0x10)
- Wireless controller (such as wireless USB host or hub) (0xE0)

The namespace blocks these USB device classes to prevent conflict with other APIs. For these classes, use other relevant APIs instead. For example, if your device conforms to HID protocol, use Windows.Devices.HumanInterfaceDevice.

Discovering and connecting to a USB device

Get started writing a USB-capable Windows Store app by studying the CustomUsbDeviceAccess sample. This sample shows how to communicate with a USB device by using the Windows.Devices.Usb namespace.

Bluetooth device support

[Get the Bluetooth Rfcomm Chat and Bluetooth Generic Attribute Profile samples now.]

For Windows 8.1, Windows Store apps can use the new RFCOMM and GATT (Generic Attribute Profile) Windows Runtime APIs to access Bluetooth devices.
These APIs provide access to the Bluetooth BR/EDR and Bluetooth LE transports. Bluetooth Classic and Bluetooth Smart devices must first be discovered and paired via the Windows 8.1 PC settings UI (PC & devices > Bluetooth) before being accessible via the Windows Runtime APIs for Bluetooth. You provide info about your device as device-capability declarations in the app manifest. This associates the app with the device.

Here are some key details about the new APIs:

Bluetooth RFCOMM—Windows.Devices.Bluetooth.Rfcomm

The API lets Windows Store app developers implement Bluetooth profiles built on the RFCOMM protocol—for example, Serial Port Profile (SPP). Client and server roles are provided. Remote Service Discovery Protocol (SDP) records can be accessed and local SDP records can be published. The Sockets.ControlChannelTrigger class is not available for RFCOMM-based sockets.

- The RFCOMM API prevents access to the following in-box and invalid services:

Bluetooth GATT—Windows.Devices.Bluetooth.Gatt

The API lets Windows Store app developers implement GATT client profiles for collecting data from low energy (LE) sensors. A Bluetooth 4.0 radio is required to use the GATT API.

- The GATT API prevents access to the following in-box and invalid services:
- The GATT API provides read-only access to the following in-box and invalid services:

Note: The Windows Runtime APIs for RFCOMM and GATT are not intended for use in Control Panel apps.

Two scenarios give you more info about how to use the Windows.Devices.Bluetooth.Rfcomm API:

Three scenarios give you more info about how to use the Windows.Devices.Bluetooth.Gatt API:
- Retrieve Bluetooth LE Data
- Control a Bluetooth LE Thermometer Device
- Control Presentation of Bluetooth LE Device Data

3D printer support

[Get the 3D Printing sample now.]

Printing 3D content with Windows 8.1 is similar to printing 2D content. In fact, we've simply extended the IXpsOMPackageWriter and IXpsDocumentPackageTarget interfaces to provide this feature.
To send 3D content to a printer from an app in Windows 8.1, your app must access Windows printing and provide formatted 3D content to print. 3D printing in Windows 8.1 involves creating 3D content and passing it through the pipeline of Windows spooler and driver filters to the 3D manufacturing device, such as a 3D printer.

Two interfaces—IXpsDocumentPackageTarget3D and IXpsOMPackageWriter3D—are included in the 3D printing API. IXpsDocumentPackageTarget3D represents a print queue and job details. IXpsOMPackageWriter3D provides methods for sending content into the Windows print pipeline. This interface passes 3D content as opaque streams through spooler and driver filters to the 3D manufacturing device.

The 3D printing interfaces have these characteristics:

They support submitting 3D content in Open Packaging Conventions format for printing. They support submitting XPS content for 2D printing, in addition to the 3D content. The 3D content is limited to one 3D model part linking zero or more texture parts and zero or one print ticket parts. The 3D model and texture data are considered an opaque stream by the API, and there is no validation or parsing of any kind.

For a feature overview, go to Supporting 3D printing, and see Quickstart: 3D printing to learn how to add 3D printing to your app.

Scanning support

[Get the Scan sample now.]

You can now scan content from your Windows Store app using a flatbed, feeder, or auto-configured scan source. The new Windows.Devices.Scanners namespace is built on top of the existing WIA APIs, and is integrated with the Device Access API.

Note: A Scan app is built into Windows 8.1.

For a feature overview, see Scanning (JavaScript and HTML) or Scanning (C#/C++/VB and XAML).
http://msdn.microsoft.com/en-us/library/windows/apps/bg182882.aspx
Just before the Chrissy break, I ran through (at the Victoria .NET DevSIG) the first incarnation of my .NET 3.0 "End to End" demo that I have been wanting to build for some time. While this ended up being a fair bit more than a simple Hello World example, it did turn out to be a pretty simple starting point for what will one day hopefully be my über demo for all things app-plat.

The Bits

I have posted the first (i.e. buggy, unstable, etc.) version of the code here. To get this to work, you will also need to:

    public class ProductsDB : IProductsDB
    {
        const string connectionString = "server=(local);database=AdventureWorks;Integrated Security=true";
        ...
    }

The Overview

This is my original concept "design". A brief explanation of the above: this manifests into the AdventureWorks Solution that has 5 projects.

Enjoy!!! Over the next few days I will pull out some of the more interesting integration bits / issues and blog about them.

Published Friday, January 05, 2007 11:00 AM by grahame

Jason Haley

Graham Elliott is posting a series of articles on an über demo of .netfx 3.0 featuring WPF, WF, WCF and
frankarr - an aussie microsoft blogger

One of the minor hurdles I came across when building out the first version of the über demo was how to
Graham Elliott

Getting a service unavailable on the link to the source code zip file. Been happening for some time.
Shawn Cicoria

Hi Shawn... Should all be good now. Email me if you still can't access it.
grahame

After receiving several emails asking me where to find the AdventureWorks sample database and images, I did manage to finally locate all the stuff, except every image file was named something_small and the Microsoft samples had everything named something_large. Just another pain in the... So I just deleted all images so that it would at least compile. Next I ran the SVCHOST, which dumped on me because of the certificate. Too much difficulty trying to just get the thing to work to be of much use as an example. Though I have to hand it to you, you said it is buggy and unstable. You weren't kidding. Nice try though. Thanks, Larry
Larry Aultman

I get the same problem Larry did - an error due to X.509 certificate problems. Is this related to the fact that I've never run any of the AdventureWorks samples on this machine? How do I fix this?
Mark Faulcon

Hi Elliott, a question: where can I download the images?
Cacho

I have had a few questions recently about whether I am planning to update the .NET 3.0 über demo to:
i would really like to run this example but get the X.509 issue. can you give more insight as to where i can get the cert or how to create it. Also, where can i just download the images.
Rob Park

Hi, I try to test your code but I haven't the photos; please could you tell me where they are?
Hermann Dausque

Hey Grahame, you forgot to include the images. Cheers, Ashish.
Ashish

Do you have any updates to the code? Where can I download the latest code?
Venkat

where can i download this sample
Hari
http://blogs.msdn.com/graham_elliott/archive/2007/01/05/net-3-0-end-to-end-example-wpf-wf-wcf-cardspace-jolly-good-fun.aspx
Get NIR Images

Gets the NIR image that matches a VIS image.

plantcv.get_nir(path, filename)

returns nir_path

- Parameters:
  - path - path to base image (VIS image) to match
  - filename - filename of base image (VIS image) to match

- Context:
  - This is a function that is likely only useful for those with multiple camera types. We use this function to find the matching NIR image to a VIS image that is found in the same directory, which contains multiple images (regex). It would need to be modified for a different file naming structure / image types / file structures.

- Example use:
  - Use in VIS/NIR Tutorial

    from plantcv import plantcv as pcv

    # Set global debug behavior to None (default), "print" (to file),
    # or "plot" (Jupyter Notebooks or X11)
    pcv.params.debug = "print"

    # Get NIR image
    nir_path = pcv.get_nir("/home/images/sorghum/snapshot1", "VIS_SV_90_z300_h1_g0_e85_v500_86939.png")
https://plantcv.readthedocs.io/en/latest/get_nir/
I am back to C++ after many, many years' absence. I'm trying to create a simple vector class (non-interesting stuff removed):

    template <class T>
    class Vector3
    {
    public:
        T length();

    public:
        T x, y, z;
    };

I want to specialize the length method so that I can select the correct version of sqrt to call based on the type of T. The best I can come up with is the following. "Best" is really a misnomer, since it doesn't compile. Can someone enlighten me what the best way to do this would be?

    template <>
    void Vector3<int>::length()
    {
        return (T) sqrtl((long)(this->x * this->x) + (long)(this->y * this->y) + (long)(this->z * this->z));
    }

    template <>
    void Vector3<long>::length()
    {
        return (T) sqrtl((this->x * this->x) + (this->y * this->y) + (this->z * this->z));
    }

    template <>
    void Vector3<float>::length()
    {
        return (T) sqrtf((this->x * this->x) + (this->y * this->y) + (this->z * this->z));
    }

    template <>
    void Vector3<double>::length()
    {
        return (T) sqrt((this->x * this->x) + (this->y * this->y) + (this->z * this->z));
    }

    template <class T>
    T Vector3<T>::length()
    {
        return (T) sqrt((double)(this->x * this->x) + (double)(this->y * this->y) + (double)(this->z * this->z));
    }

Thanks in advance!

Signed, Clueless about C++
http://www.gamedev.net/topic/644606-how-to-specialize-template-for-method/?setlanguage=1&langurlbits=topic/644606-how-to-specialize-template-for-method/&langid=1
Save UI data to pdf/hard copy

I have made a complex UI composed of many parts: tables, images, etc. One of them, containing the bulk of the information, is a scroll view, i.e. bigger than the screen. If I wanted a method to save the contents of the scroll view UI as it appears on screen and/or print it to, say, PDF, what would be the best way of approaching this? Just looking for some pointers.

Thanks
Rich

@rb not sure I correctly understood. If not, sorry and forget this

```python
import ui
from PIL import Image
import io

def ui2pil(ui_img):
    return Image.open(io.BytesIO(ui_img.to_png()))

sv = ui.ScrollView()
sv.frame = (0, 0, 400, 400)
sv.content_size = (1200, 1200)
iv = ui.ImageView()
iv.frame = (0, 0, 1200, 1200)
iv.image = ui.Image.named('test:Peppers')
sv.add_subview(iv)
sv.present('sheet')
with ui.ImageContext(sv.width, sv.height) as ctx:
    sv.draw_snapshot()
    ui_image = ctx.get_image()
pil_image = ui2pil(ui_image)
if pil_image.mode == "RGBA":
    pil_image = pil_image.convert("RGB")
pil_image.save('x.pdf', "PDF", resolution=100.0)
```

Thank you, I think you did! I'll give it a go :)
https://forum.omz-software.com/topic/6006/save-ui-data-to-pdf-hard-copy
11 January 2011 18:45 [Source: ICIS news]

TORONTO (ICIS)--US chemical railcar shipments rose by 9.6%, or 131,127 carloads, in 2010 from 2009, but have not yet regained their 2008 level, a rail industry trade group said on Tuesday.

With 1,497,095 carloads, chemical railcar shipments accounted for 10.1% of the 19 high-volume commodity categories tracked by the Association of American Railroads (AAR), the group said. However, despite the increase, chemical railcar shipments were still down from their 2008 level.

Likewise, overall carloads for the 19 commodities rose 7.3% to 14.8m carloads in 2010 - the largest year-over-year increase in the group's records. Nevertheless, 2010 marked the second lowest total in annual carloads on record, behind 2009, the group said.

"Like the economy in general, rail traffic in 2010 recovered some lost ground, but not nearly all of it," the group said. "That being said, monthly rail traffic increases were broad based, supporting the idea that economic recovery likewise is broad based."
http://www.icis.com/Articles/2011/01/11/9424989/us-chem-railcar-traffic-rises-9.6-in-2010-but-still-below-2008.html
Creating an app using React Native is easy. Creating one that looks attractive, polished, and professional? Not so much. Even if you have a designer on your team, translating designs to real apps that look good on all screen sizes and feel native on both Android and iOS is always an uphill task.

CodeCanyon has lots of premium React Native app templates and component libraries that can make your life easier, though. Antiqueruby React Native is, in my opinion, the most comprehensive of them all. Developed by Alian Software, this massive template offers hundreds of Material Design-compliant layouts you can use in your apps. Furthermore, it's very easy to integrate it with your WordPress blogs and WooCommerce sites. And if you're interested in monetization, it has support for AdMob ads built into it.

In this tutorial, I'll show you how to install Antiqueruby React Native and use some of its components and layouts.

Prerequisites

To be able to follow along, you'll need:

- an Envato account
- the latest versions of Node.js and the React Native CLI
- the latest version of Android Studio
- a device or emulator running a recent build of Android
- a basic understanding of the React Native framework

1. Setting Up the Template

Antiqueruby React Native is a bestseller on CodeCanyon. To get it, log in to your Envato account and purchase a license for it. Once you do so, you'll be able to download the template as a ZIP file named codecanyon-zBpcGaL5-antiqueruby-react-native.zip. Because its size is nearly 1.6 GB, the download might take a while to complete if you have a slow Internet connection.

When you extract the file, you'll see that it contains two more ZIP files: Documentation_V2.12.zip, which has all the documentation for the template, and Antiqueruby_Code_V2_12.zip, which has the actual code.
For this tutorial, you just need to extract the latter inside a new directory. To do so on Linux or macOS, you can run the following commands:

```
mkdir my_project && cd my_project
unzip ~/Downloads/Antiqueruby_Code_V2_12.zip
```

Next, you must use npm to install all the Node.js packages the template depends on.

```
npm install
```

Lastly, open the android/local.properties file using a text editor and update the value of the sdk.dir property so it points to the location where you have your Android SDK installed.

```
sdk.dir=/home/me/Android/Sdk
```

2. Running the Project

To see what the template looks like on your mobile device, first fire up the Metro Bundler by running the following command:

```
react-native start
```

This can take a minute or two because the template has thousands of files. On some computers, you might even encounter an ENOSPC error, saying that the system limit for file watchers is reached. To fix the error, you can try excluding a few intermediate files by adding the following code to the metro.config.js file:

```
const blacklist = require('metro-config/src/defaults/blacklist');

module.exports = {
  resolver: {
    blacklistRE: blacklist([/intermediates\/.*/])
  }
};
```

Alternatively, you can increase the maximum number of file watchers by updating the value of the fs.inotify.max_user_watches property in the /etc/sysctl.conf file.

Once the bundler has finished loading the dependency graph, you can go ahead and run the template.

```
react-native run-android
```

If your Android development environment is up to date and configured correctly, you should now be able to see this on your device:

3. Exploring Available Layouts

You can use the Antiqueruby React Native template to create many different types of apps. For each app category, it has several layouts available. You can take a look at all of them on your device right now. For instance, you can press the General Material UI button to take a look at all the generic layouts available.
By default, you'll be presented with options to view all the beautiful layouts available for sign-in pages. By clicking on the hamburger button, though, you can open a menu that lets you pick other types of layouts. For example, you can click on the Sign Up option to look at layouts for sign-up pages.

You don't have to limit yourself to just the generic Material Design layouts. The template also includes layouts that are ideal for specific types of apps. For instance, if you're trying to create an app for your WordPress blog, press the WordPress Blog button on the home screen.

This template also offers lots of domain-specific layouts. With them, you can effortlessly create dating apps, food delivery apps, cryptocurrency-related apps, social apps, and more. You'll be able to take a look at these layouts too from the home screen. For instance, you can press the Food Material UI button to look at layouts that are usually needed while building food delivery apps.

4. Understanding the Project Structure

Before you can use an Antiqueruby React Native layout or component in your own app, you need to understand the structure of the template. In addition to all the common files and directories React Native projects have, this template has a directory named App. This is where most of its reusable code resides.

```
~/my_project/App/
|-- Components
|-- Containers
|-- Themes
|-- Fonts
`-- Images
```

The Components directory inside it, as its name suggests, contains various Material Design components, such as alert messages, buttons, calendars, and charts. You can use these components to create your own custom layouts from scratch.

The Containers directory is where you can find the code for all the premium, hand-crafted layouts. For instance, the SignIn directory inside it has the code for all the sign-in layouts we looked at earlier. Similarly, the Blog directory contains the code for all the WordPress-related layouts.
The Themes directory contains JavaScript files that allow you to alter the overall look and feel of the layouts. Using it, you can change details such as fonts, colors, and margins. Lastly, the Fonts and Images directories contain assets that are used in the layouts.

5. Using Layouts

By default, the template loads the App component, which does nothing but showcase all the available layouts. It's what you saw when you ran the project in an earlier step. To create your own app with the template, you need to change this behavior. So open the index.js file and empty its contents. Feel free to create a backup of the file before you do so.

Then, as usual, import the React framework and React Native components by adding the following import statements to it:

```
import * as React from 'react';
import * as RN from 'react-native';
```

Next, let's say we want to display a sign-in layout in our app. Of all the 14 such layouts offered by the template, let's use the third one:

```
import Signin_03 from './App/Containers/SignIn/Signin_03';
```

Then create a new component by extending the React.Component class and overriding its render() method. Inside the method, all you need to do is return the Signin_03 component:

```
export default class MyApp extends React.Component {
  render() {
    return (
      <Signin_03/>
    );
  }
}
```

Lastly, don't forget to register your new component by calling the registerComponent() method.

```
RN.AppRegistry.registerComponent('Antiqueruby', () => MyApp);
```

At this point, if you run your app, you should be able to see the sign-in screen directly. Of course, to change the contents of the layout, you must make changes in the index.js file present in the App/Containers/SignIn/Signin_03 directory.

6. Using Components

Importing and using a component is just as easy as importing and using a layout.
For instance, if you want to use the Calendar component in your app, first import it as follows:

```
import Calendar from './App/Components/Calendar/CalendarStrip';
```

Then, inside the render() method, add the <Calendar> component to the component tree. Optionally, you can place it inside a <View> component and give it a few styles.

```
<RN.View>
  <Calendar
    style={{ height: 200 }}
    calendarHeaderStyle={{ color: "#555555" }}
    dateNumberStyle={{ color: "#333333" }}
    highlightDateNumberStyle={{ color: "#FF0000" }}
    highlightDateNameStyle={{ color: "#990000" }}
  />
</RN.View>
```

With the above code, you should see an interactive calendar strip that looks like this:

Conclusion

You now know how to use the Antiqueruby React Native template to quickly create React Native apps that look good and perform well on both Android and iOS. Using the basics you learned in this tutorial, you should be able to work with all the layouts and components available in the template. This template also comes with comprehensive documentation you can refer to. Furthermore, if you are having any trouble with it, you are free to contact the developer directly on CodeCanyon.

And if you're looking for more React Native templates, I suggest you refer to these articles:

- 9 Best React Native App Templates of 2019 (Nona Blackman)
- 5 React Native UI Kits, Themes, and App Templates (Kyle Sloka-Frey)
- Creating eCommerce Apps With the MStore Pro React Native Template (Ashraff Hathibelagal)
- 9 React Native App Templates for You to Study and Use (Eric Dye)
https://code.tutsplus.com/tutorials/beautiful-material-design-apps-with-the-antiqueruby-react-native-template--cms-33796
The QtSoapMessage class provides easy access to SOAP messages. More...

#include <qtsoap.h>

List of all member functions.

With this class, you can create and inspect any SOAP message. There are convenience functions available for generating the most common types of SOAP messages, and any other messages can be constructed manually using addBodyItem(). Use setMethod() and addMethodArgument() to construct a method request. The return value of a method response is available from returnValue(). Use setFaultCode(), setFaultString() and addFaultDetail() to construct a Fault message. To inspect a Fault message, use faultCode(), faultString() and faultDetail().

To add items to the body part of the SOAP message, use addBodyItem(). To add items to the header, use addHeaderItem(). toXmlString() returns a QString XML representation of the SOAP message. clear() resets all content in the message, creating an empty SOAP message.

```cpp
QtSoapMessage message;
message.setMethod("getTemperature", "");
message.addMethodArgument("city", "Oslo");

// Get the SOAP message as an XML string.
QString xml = message.toXmlString();
```

QtSoap provides a partial implementation of version 1.1 of the SOAP protocol as defined in the W3C SOAP 1.1 note.

See also QtSoapType, QtSoapQName, and QtSoapHttpTransport.

This enum describes all the supported SOAP Fault codes.

Warning: setMethod() must be called before calling this function.

Example: google.cpp.

Adds an argument called name with a uri of uri. The type of the argument is QtSoapType::String and its value is value.

Adds an argument called name with a uri of uri. The type of the argument is QtSoapType::Boolean and its value is value. The dummy argument is used to distinguish this function from the overload which takes an int.

Adds an argument called name with a uri of uri. The type of the argument is QtSoapType::Integer and its value is value.

Example: google.cpp.

If the import fails, this message becomes a Fault message.
Parses the XML document in buffer. Imports the document if it validates as a SOAP message. Any existing message content is replaced. If the import fails, this message becomes a Fault message.

This function must be called before calling addMethodArgument(). Example: google.cpp.

Sets the method name to name and uri to uri.

See also QDomNode::toString().

This file is part of the Qt Solutions.
http://doc.trolltech.com/solutions/qtsoap/qtsoapmessage.html
On the input I have a signed array of bytes barr, which I multiply by a float f: the bytes are decoded into an integer val with int.from_bytes, multiplied, cropped to the original bit width, and encoded back:

```python
def multiply(barr, f):
    val = int.from_bytes(barr, byteorder='little', signed=True)
    val *= f
    val = int(val)
    val = cropInt(val, bitLen=len(barr) * 8)
    barr = val.to_bytes(len(barr), byteorder='little', signed=True)
    return barr

def cropInt(integer, bitLen, signed=True):
    maxValue = (2 ** (bitLen - 1) - 1) if signed else (2 ** bitLen - 1)
    minValue = -maxValue - 1 if signed else 0
    if integer > maxValue:
        integer = maxValue
    if integer < minValue:
        integer = minValue
    return integer
```

Pure Python is rather ineffective for any numeric calculations, because with each number being treated as an object, each operation involves a lot of "under the hood" steps. On the other hand, Python can be very effective for numeric calculation if you use the appropriate set of third-party libraries. In your case, since performance matters, you can make use of NumPy, the de facto Python package for numeric processing. With it, the casting, multiplication and recasting will be done in native code in one pass each (and after knowing NumPy better than I do, probably with even fewer steps), and it should give you an improvement of 3-4 orders of magnitude in speed for this task:

```python
import numpy as np

def multiply(all_bytes, f, bitlen, signed=True):
    # Works for 8, 16, 32 and 64 bit integers:
    dtype = "%sint%d" % ("" if signed else "u", bitlen)
    max_value = 2 ** (bitlen - (1 if signed else 0)) - 1
    input_data = np.frombuffer(all_bytes, dtype=dtype)
    processed = np.clip(input_data * f, 0, max_value)
    return bytes(processed.astype(dtype))
```

Please note this example takes all your byte-data at once, not one value at a time as you pass to your original multiply function. Therefore, you also have to pass it the size in bits of your integers. The line that goes dtype = "%sint%d" % ("" if signed else "u", bitlen) creates the data-type name, as used by NumPy, from the number of bits passed in.
Since the name is just a string, it interpolates a string, adding a "u" prefix or not depending on whether the datatype is unsigned, and puts the number of bits at the end. NumPy's datatype names can be checked in the NumPy documentation.

Running with an array of 500000 8-bit signed integers I get these timings:

```
In [99]: %time y = numpy_multiply(data, 1.7, 8)
CPU times: user 3.01 ms, sys: 4.96 ms, total: 7.97 ms
Wall time: 7.38 ms

In [100]: %time x = original_multiply(data, 1.7, 8)
CPU times: user 11.3 s, sys: 1.86 ms, total: 11.3 s
Wall time: 11.3 s
```

(That is after modifying your function to operate on all bytes at a time as well) - a speedup of 1500 times, as I stated in the first draft.
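For anyone who wants to cross-check the clipping semantics without NumPy installed, the same per-element behaviour can be sketched with the standard library alone; the function name and the 8-bit example below are illustrative and not part of the original answer:

```python
def multiply_scalar(byte_buf, f, bitlen=8, signed=True):
    """Multiply each bitlen-bit integer in byte_buf by f, clipping to [0, max],
    mirroring the clip range used by the NumPy version above."""
    size = bitlen // 8
    max_value = 2 ** (bitlen - (1 if signed else 0)) - 1
    out = bytearray()
    for i in range(0, len(byte_buf), size):
        val = int.from_bytes(byte_buf[i:i + size], "little", signed=signed)
        val = int(val * f)
        val = min(max(val, 0), max_value)  # clip to [0, max_value]
        out += val.to_bytes(size, "little", signed=signed)
    return bytes(out)

# 8-bit signed: 10 * 1.7 = 17, while 100 * 1.7 = 170 overflows and is clipped to 127
print(list(multiply_scalar(bytes([10, 100]), 1.7)))  # [17, 127]
```

This is orders of magnitude slower than the NumPy route, but handy for verifying edge cases such as overflow clipping.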
https://codedump.io/share/4f1KwAXKrLMX/1/efficient-byte-by-float-multiplication
Pandas Tutorial: DataFrames in Python

October 21st, 2016 in Python

Next to Matplotlib and NumPy, Pandas is one of the most widely used Python libraries in data science. It is mainly used for data munging, and with good reason: it's very powerful and flexible, among many other things. It makes the least sexy part of the "sexiest job of the 21st Century" a bit more pleasant. Besides this, there's an even better thing about it! The Pandas library has the broader goal of becoming the most powerful and flexible open source data analysis and manipulation tool available in any language. That's all the more reason for you to get started on working with this library and its expressive data structures straight away!

One of these structures is the DataFrame. With this tutorial, DataCamp wants to address 11 of the most popular Pandas DataFrame questions so that you understand -and avoid- the doubts of the Pythonistas who have gone before you.

The Beginning: What Are Pandas Data Frames?

Before we start off, let's have a brief recap of what data frames are. Data frames in Python are very similar to those in other data analysis languages such as R: they come with the Pandas library, and they are defined as two-dimensional labeled data structures with columns of potentially different types.

In general, you could say that the Pandas data frame can be made from many kinds of input: NumPy ndarrays, lists, dictionaries or Series. Note that np.ndarray is the actual data type, while np.array() is a function to make arrays from other data structures.

Structured arrays allow users to manipulate the data by named fields: in the example below, a structured array of three tuples is created. The first element of each tuple will be called 'foo' and will be of type int, while the second element will be named 'bar' and will be a float.

Record arrays, on the other hand, expand the properties of structured arrays.
They allow users to access fields of structured arrays by attribute rather than by index. You see below that the 'foo' values are accessed in the r2 record array.

Besides the data that your DataFrame needs to contain, you can also specify the index and column names. The index, on the one hand, indicates the difference in rows, while the column names indicate the difference in columns. We will see later that these two components of the DataFrame are handy when you're manipulating your data.

If you're in doubt about Pandas DataFrames and how they differ from other data structures such as the NumPy array or a Series, you can watch the small presentation below:

Note that in this post, most of the time, the libraries that you need have already been loaded in. The Pandas library is imported as pd, while the NumPy library is loaded as np. Remember that when you code in your own environment, you shouldn't forget this import step! Do you still remember how to do it?

```python
import numpy as np
import pandas as pd
```

Awesome! Now that there is no doubt in your mind about what data frames are, what they can do and how they differ from other structures, it's time to plunge into your questions.

1. How To Create a Pandas DataFrame

Obviously, making your DataFrame is your first step in almost anything that you want to do when it comes to data munging in Python. Maybe you want to start from scratch to make a data frame, but you can also convert other data structures. Note that the data inputted to the data frame can vary! This section will only cover making a Pandas DataFrame from other data structures, such as NumPy arrays. To read more on making empty DataFrames that you can fill up with data later, go to question 7.

Among the many things that can serve as input to make a DataFrame, a NumPy ndarray is one of them. To make a data frame from a NumPy array, you can just pass it to the DataFrame() function in the data argument.
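As a concrete sketch of the ndarray route just described (values and labels invented for illustration):

```python
import numpy as np
import pandas as pd

# The 2D ndarray supplies the data; index and columns label rows and columns.
data = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
df = pd.DataFrame(data, index=[0, 1, 2], columns=['A', 'B', 'C'])

print(df)
print(df.shape)   # (3, 3)
```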
Do you still remember how subsetting works in 2D NumPy arrays? You first indicate the row that you want to look in for your data, then the column. Don't forget that the indices start at 0! For the data, you go and look in the rows at index 1 to the end, and you select all elements that come after index 1. You end up selecting 1, 2, 3 and 4.

Once you have made your data frame, you might want to know a little bit more about it. You can use the shape property, or the len() function in combination with the .index property:

Note how these two options give you slightly different information on your DataFrame: the shape property will give you the dimensions of your DataFrame, while len() combined with .index will only give you its height.

The rest of this tutorial covers the basic operations that you can do on your newly made DataFrame: adding, selecting, deleting, renaming... you name it! Later, you will need these operations to do more advanced wizardry with Pandas DataFrames.

2. How To Select an Index or Column From a Pandas DataFrame

Before you start with adding, deleting and renaming the components of your DataFrame, you first need to know how you can select these elements. So, how do you do this? Well, in essence, selecting an index, column or value from your DataFrame isn't that hard. It's really very similar to what you see in other languages that are used for data analysis (and which you might already know!). Let's take R for example. You use the [,] notation to access the data frame's values. In Pandas DataFrames, this is not too much different. Let's say you have a DataFrame like this one:

```
   A  B  C
0  1  2  3
1  4  5  6
2  7  8  9
```

And you want to access the value that is at index 0, in column 'A'. Well, there are various options to get your value 1 back: for example, df.loc[0]['A'], df.iloc[0][0] or df.at[0, 'A'] all suffice.

Adding Rows to a DataFrame

Before you can get to the solution, it's first a good idea to grasp the concept of loc and how it differs from other indexing attributes such as .iloc and .ix:

- loc works on the labels of your index.
This means that if you give in loc[2], you look for the values of your DataFrame that have an index labeled 2.

- iloc works on the positions in your index. This means that if you give in iloc[2], you look for the values of your DataFrame that are at position 2.
- ix is a more complex case: it usually tries to behave like loc, but falls back to behaving like iloc if the label is not present in the index.

Note that we here used an example of a DataFrame that is not solely integer-based, so as to make it easier for you to understand the differences. In this case, you clearly see that passing 2 to loc or iloc/ix does not give back the same result! We know that loc will go and look at the values that are at label 2. The result that you get back will be:

```
48    1
49    2
50    3
```

We also know that iloc will go and look at the positions in the index. When you pass 2, you will get back:

```
48    7
49    8
50    9
```

Since the index doesn't only contain integers, ix will behave like iloc here and work on positions!

As a consequence of what has just been explained, you understand that the general recommendation is that you use .loc to insert rows in your DataFrame. If you would use df.ix[], you might try to reference a numerically valued index with the index value and accidentally overwrite an existing row of your DataFrame. You better avoid this! Check out the difference once more in the DataFrame below:

You can see why all of this can be confusing, right?

Adding a Column to Your DataFrame

In some cases, you want to make your index part of your DataFrame. You can easily do this by taking a column from your DataFrame, or by referring to a column that you haven't made yet, and assigning it to the .index property, just like this:

In other words, you tell your DataFrame that it should take column A as its index. However, if you want to append columns to your DataFrame, you could also follow the same approach as adding an index to your DataFrame: you use loc or iloc. Note that the observation that was made earlier about loc still stays valid also for when you're adding columns to your DataFrame!
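The label-versus-position distinction is easy to check with a small frame whose integer labels are deliberately out of step with the positions (example invented for illustration):

```python
import pandas as pd

# Row labels 2, 0, 1 deliberately differ from the positions 0, 1, 2.
df = pd.DataFrame({'A': [10, 20, 30]}, index=[2, 0, 1])

print(df.loc[2]['A'])    # 10: label-based, finds the row labeled 2 (position 0)
print(df.iloc[2]['A'])   # 30: position-based, takes the third row (label 1)
```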
Resetting the Index of Your DataFrame

When your index doesn't look entirely the way you want it to, you can opt to reset it. This can easily be done with .reset_index(); you will see it again in one of the next sections. Now that you know how to remove an index from your DataFrame, you can go on to removing columns and rows!

Deleting a Column from Your DataFrame

To get rid of (a selection of) columns from your DataFrame, you can use the drop() method:

You might think now: well, this is not so straightforward; there are some extra arguments that are passed to the drop() method!

- The axis argument is either 0 when it indicates rows and 1 when it is used to drop columns.
- You can set inplace to True to delete the column without having to reassign the DataFrame.

Note that you can also delete duplicate values from a column with drop_duplicates().

Removing a Row from Your DataFrame

You can remove duplicate rows from your DataFrame by executing df.drop_duplicates(). You can also remove rows from your DataFrame, taking into account only the duplicate values that exist in one column. Lastly, you can use the drop() method, where you use the index property to specify the rows that should be removed.

Now that your first questions about Pandas DataFrames have been addressed, it's time to go beyond the basics and get to formatting the data in your DataFrame. Keep on reading to find out what the most common Pandas questions are when it comes to formatting your DataFrame's values!

Replacing All Occurrences of a String in a DataFrame

To replace certain strings in your DataFrame, you can easily use replace(): pass the values that you would like to change, followed by the values you want to replace them by. There is also a regex argument for replacing by regular expression. Removing unwanted parts of strings is cumbersome work; luckily, there is a solution in place!

Splitting Text Into Multiple Rows in a DataFrame

Splitting your text into multiple rows is quite complex. After splitting, you see that your Series is stacked:

```
0  0    23:44:55
1  0    66:77:88
2  0    43:68:05
   1    56:34:12
```

That is not ideal either.
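A quick sketch of the drop() and drop_duplicates() calls described above (frame contents invented for illustration):

```python
import pandas as pd

df = pd.DataFrame({'A': [1, 1, 4], 'B': [2, 2, 5], 'C': [3, 3, 6]})

# axis=1 drops a column; the original frame stays untouched unless inplace=True.
no_c = df.drop('C', axis=1)
print(list(no_c.columns))        # ['A', 'B']

# drop_duplicates removes the repeated first row.
deduped = df.drop_duplicates()
print(len(deduped))              # 2

# drop() with an index label removes a row.
no_row0 = df.drop(0, axis=0)
print(len(no_row0))              # 2
```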
That is why you drop the level to line up with the DataFrame:

```
0    23:44:55
1    66:77:88
2    43:68:05
2    56:34:12
dtype: object
```

Transform your Series to a DataFrame to make sure you can join it back to your initial DataFrame. However, to avoid having any duplicates in your DataFrame, you can delete the original Ticket column.

Applying A Function to Your Pandas DataFrame's Columns or Rows

You might want to adjust the data in your DataFrame by applying a function to it. Let's begin answering this question by making our own lambda function:

```python
doubler = lambda x: x*2
```

You can apply it along an axis of your DataFrame: either you target the index or the columns. Or, in other words, either a row or a column.

How To Create an Empty DataFrame

There are several ways in which you can use the DataFrame() function to make an empty data frame. Firstly, you can use numpy.nan to initialize your data frame with NaNs. Since numpy.nan has type float, the data frame will also contain values of type float. You can, however, also force the data frame to be of a certain type by adding the dtype attribute. (Honestly, who has never had this?)

Now onto the how of reshaping your DataFrame. There are three ways of reshaping that frequently raise questions with users: pivoting, stacking and unstacking, and melting. Keep on reading to find out more! For pivoting you can use the pivot_table method; good news, you already know why you would use this and what you need to do to do it.
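The doubler step can be sketched as follows (the frame is invented for illustration):

```python
import pandas as pd

df = pd.DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6]})
doubler = lambda x: x * 2

# apply() feeds each column (axis=0, the default) to the function;
# multiplying a Series by 2 doubles it element-wise.
doubled = df.apply(doubler)
print(doubled['A'].tolist())   # [2, 4, 6]
print(doubled['B'].tolist())   # [8, 10, 12]
```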
To repeat, the approach from "Splitting Text Into Multiple Columns" applies here as well.

Reshaping Your DataFrame With Melt()

Melting is considered to be very useful when you have data in which one or more columns are identifier variables, while all the other columns are considered measured variables.

Iterating Over the Rows of a DataFrame

You can iterate over the rows of your DataFrame with the help of a for loop in combination with an iterrows() call.

Writing a DataFrame to CSV

To output a Pandas DataFrame as a CSV file, you can use to_csv():

```python
import pandas as pd
df.to_csv('myDataFrame.csv')
```

To use a specific delimiter or character encoding, you can use the sep and encoding arguments:

```python
import pandas as pd
df.to_csv('myDataFrame.csv', sep='\t', encoding='utf-8')
```

Furthermore, you can specify how you want your NaN or missing values to be represented, whether or not you want to output the header, whether or not you want to write out the row names, whether you want compression, and so on. Read up on the options in the Pandas documentation.

Writing a DataFrame to Excel

Writing your DataFrame to an Excel file is very similar: use the to_excel() method.

That's it! You've successfully completed the Pandas DataFrame tutorial! You're on your way to becoming a master of Pandas DataFrames.
https://www.datacamp.com/community/tutorials/pandas-tutorial-dataframe-python?utm_campaign=Revue%20newsletter&utm_medium=Newsletter&utm_source=revue
- Custom controls, panel dynamic loading tree
- Opening a link in a same window.
- read simple xml into Sencha
- document.location Ext.Ajax.request
- Select a node in a tree view
- Where to find this docu-api component
- How use HTML in displayfield xtype
- Scrolling to new tree item
- EXT JS 4.1.1 javascript files
- Extjs MVC with .Net MVC
- Display HTML from database in panel or textarea or text or something!
- Problems using MultiSelect control
- Load the store of combobox while intializing the window
- Store contents of tab in a variable
- DirectStore with parameters..
- Ext.encode and UTF-8 characters
- Cross Domain Session Managment
- Datefield setValue\getValue functions
- how to use ptype instead of pluginId in Ext.AbstractComponnet.getPlugin() ?
- Bar Chart ExtJS 4
- How to get the name of every title from accordion
- How to use different Store for a single table.?
- Expand node does not fire callback
- Remove column headers in dynamic grid
- Partial tree reload/refresh inserts entire tree into the selected branch?
- How to center the editor in a cell
- Multiple instance of a controller/view
- How to get a chart in order to save it as Jpeg file?
- MVC Panel Click event
- Xtemplate
- Detect Tomcat shutdown from ExtJS4
- Convert Store with Nested Stores to Plain Old JavaScript Object (or JSON)
- Multiple Group Grids in Accordion Doesn't Work
- Keyboard Navigation - tab-bing into a Grid Panel?
- form.submit timing out, can it be submitted asynchronously?
- How to get value of a combobox in a proxy store?
- MessageBox.alert OK onclick?
- How to create a pie chart from 3 fields
- Proper fill for grid panel empty space beneath last row
- strange change of grid due to double click
- change displayMsg in pagingtoolbar ,dynamically
- create temporary store, use it to fill combobox
- grid and the CellEditing plugin
- Find the flex width
- Mask whilst App Loading (Architect)
- Destroy a panel not working
- Can not reuse view?
- Method resize in image field
- Select listener not fired on disabled dates from DatePricker
- Paging Toolbar
- How to adjust column height with respect to y axis?
- Scroll in border layout
- Set Checked Check Boxes.
- Ext.get() method basic
- combobox displayField duplicate Value Problem
- Change gridpanel style without affecting treepanel
- Uncaught TypeError: Cannot read property 'internalId' of undefined
- Vtype
- How to override / extend Node.Interface with new function?
- Unable to make toolbar transparent Toolbar in IE
- Extjs Project Examples
- Access custom component from view port and how to access stores in custom component
- Using Examples
- Control that used in Displaying repeated record
- Parsing Nested Json
- Showing an uploaded image....
- radio buttons in a message box
- [EXT 4] Ext.tree.Panel drag-drop scrolling not working
- replace content of tab panel
- IE nextSibling
- Ext.data.TreeStore expand one level
- Padding is not getting applied for radiogroup
- new Ext.Window on ExtJS 4.0 vs ExtJS 4.1
- build for production
- Help on Rendering Values
- Weird license term - need clarification
- Novice question
- Next Open Source release of ExtJS ?
- transfer extjs grid data from client side server side (asp.net + vb)
- Tree node not expandable after appending children to it.
- autosize chart
- Grid groupingsummary header localize
- waitMsg in form submit
- Proper way to create MVC application with different access levels
- JSON POST IN ISO-8859-1
- How to fire an click event for an xtemplate
- where to get source code of sencha index.html ()
- ExtJS Bar Chart x-axis to be distributed uniformly.
- Ext control back date issue..
- Reset/clear grid completely
- Reload a node in a dynamic treestore/treepanel
- Show View from top when click on the button on same View.
- Theming In Extjs 4
- Saving Component State in query string
- DIfferent models for TreeNodes (Nested data associations)
- Opening custom control as a model pop up
- [ TREE ] Big volume
- calling view page on window from controller after button click
- Maximum and Minimum value of Numeric Axis works weirdly for Stacked bar
- Ext Js combo items filtering out and decreasing in number
- EXTJS 4.0.2 Grid Filter Type Numeric Symbols not appearing
- combobox dynamic data
- How to Change depth inside pannel child elements
- Editable grid with combo box get key..... its urgent
- login screen - tactil usage
- How to create Extjs MVC with php Extensions.
- Unable to get value of the property 'heightModel': In IE 8 and under.
- Is cdn for ext js 4.1.3 avaialble ?
- Getting the browser to remember the username and password on a login form
- customize visual appearance of Extjs components
- Combobox Style
- XTemplate variables
- Extjs 4.1 deploy application by minfied JS files
- Sencha Cmd
- CardPanel Cannot read property 'dom' of null
- Base on panel labels how to create dynamic columns editable grid its urgent plzzzzz
- Kretik to make a complaint
- Border Layout - Does Center resize when you resize north/south?
- Server side crud with php?
- Posting 'true-false' on 'check' and 'uncheck' of checkbox
- Url is not loaded in Ext.Window
- URL is not loaded in Ext.Window
- Store global load event in MVC
- ux.multiselect does not resize correctly
- [ASK] Method to hiding a box
- Loding nested data from xml to model (or store)
- Disable automatic sort on sort event handler for a column
- ToolTip Rendering problem
- Tree node hyperlink
- TreeGrid height
- Some problems with hidden panel that doesn't show again on show() call
- extjs 4 grid header background color
- SyncRequire in IE8
- how to disable row selection of checkboxmodel of grid keeping previous selection
- Generated 'app.scss' File Is Misleading As To Where Var Should Be Placed
- Using the page-analyzer tool to optimize loading of my app
- Difference between getEl() & getCmp() ?
- Ext Js 4.1 How to get selected records text from combo box?
- Adding combobox in grid cell on cell click event
- How to fire a scroll event for a container
- How to change border color of panel excluding panel header
- Dynamically load items for radiogroup
- groupingsummary minmax-function
- Remote store filter with comparison parameter
- Adding DOM object to ExtJS compoment
- ExtJS 4.1 Ext.data.Record.create "no method substring"
- wich event i have to listen for all components rendered
- Ways to attach a container to cursor
- Extjs3 to Ext4 migration
- Audio playback
- Locking Grid column menu
- grouptabpanel
- Events in columns after DOM is created?
- Problem with
- Hiding of all hideable columns not allowed?
- Tooltip in fieldset title
- Is it expected than any Calendar code become supported, outside of ux namespace?
- Extjs 4.0.7: Sencha SDK minified app still loading individual JS files.
- Grid cell tooltips for Ext.grid.column.Column convenience subclasses?
- requiring a file with .
in the filename - Periodically loading a Store with JSON data from Ajax request - ie problem with Viewport - How to activate listeners in renderer function in a grid column - how to add listeners to check box? - How to export panels content to an image - IE Lockup issue IN EXTJS 4.1 - Renderer Action Column - How to add row number column in a grid panel - Ext.data.Operation - How to mask whole body except one panel ? - event slideout animation - Not Working on IE - Downloading image/text file using 'iframe' and 'Ext.core.DomHelper' - SelectionModel showing empty array in addition to selected items - Issue while loading a chart in starting application - Display text on mouse enter/over the display field of the form - Server-side updates - Extending Ext.data.Model - fieldlabel autosize in textfield - Center window inside panel - Scatter chart giving half circle when only one data exists - Sencha Command Not Generating All Images - HtmlEditor's word wrap when the word is longer than the line - Question about loading a view from modular Controller - Displaying only specific types of files in the filefield control - "Stop running script error" encountered when upgrading from ExtJS 4.1.2 to 4.1.3 - paging grid with static data - Drag horizontal scrollbar in gridpanel - Axis Range with only whole number ? how can i do ? help - How to add charts in a panel with tabular layout - How to indent the space before the child in a tree grid - Problem selecting all the checkbox inside combo box as default. - Cursor blinking on ExtJs window - Need to set MIn & Max value for xType - numberField dynamically - Animated CSS requires a xtype: label otherwise, not done correctly - Grey Boxes and Scrollbars in the tabs - Equivalent method for config "html" - JSON Nested : Current ExtJS data modeling not scalable? - Apply decodeURIComponent() for every AJAX response - Add a click listener to chart axis labels? 
- Ext equivalent to the jQuery method .live() - Control styles totally in external css file? - Paging Issue in Ext js. I am having trouble with paging. My start is not changing - height and width in percentage - Line chart : how to hide/display a line serie - [4.1.1a, TabPanel] Separate TabBar from Tabs - Reading json values after loading to a store - Ok, so I'm a n00b. MVC and PHP backend problems - How to constrain resizable items in container with hbox layout? - Panel click event - stop Datefield error message - Multi level grouping grid - Passing controller to view problems - Pushing data from java to extjs - Cookies !! - Sending parameter to js - LoadMask exceeds it's panel when partially hidden from scrolling - Moving the roweditor to another grid - Overriding Ext.panel.Panel DOM - Upload binary file using form.submit() - combo multiselect - urgent help - Hide expander - if no data - How Do I Add An Item To An Existing Toolbar? - What Is Your Development Process? - Application sending JSON on demand, not on creation (Ext-JS 4) - update pie fields after store load - Using jQuery in Ext.Window - SOLVED: Google geocode and doRequest with Ext.data.proxy.Ajax? - Text Field with Syntax Highlighting component? - requires not working - TypeError: d is undefined, on line record.set('rpmEmployeeID',responseObj.rpm_Employe - how to move floating objects? - TypeError: t is null in ext-all.js line 38 - numberField in Popup Window - Cursor is disappearing after the tool tip message - Change the default classes applied to the elements of datepicker..!! - Successful post, but infinite update loop after - Sencha Menu - Combo Box non-standard behavior for arrow down. - HtmlEditor insertAtCursor bug - Grid Pagination issue - Ext.tip.Tooltip: the size of side triangle. - Can someone please confirm tabIndex behavior with Ext.form.field.Base ? - How can I access a date typed field from an ExtJS JSonStore? 
- Content of the AJAX json post - extjs4 theming for roolbar does not work in IE - "Architecting Your App in Ext JS 4" tutorials missing images / duplicated articles - TimeField select event - Problem while rendering the combo box into the dynamic editable grid - How to use Themes - Standard html for empty TabPanel - Rendering delay
https://www.sencha.com/forum/archive/index.php/f-87-p-42.html?s=fb4aeba72d22b233e988a9439566ef69
How to shuffle selected lines (random sort) in Editorial?

I have, e.g., the following lines in the middle of a larger text:

Beijing
Berlin
London
Moscow
New York
Paris

How do I shuffle that list (and not the rest of the text)? Thanks in advance.

First, you need to know how to find where the list starts and ends. Methods:

- User selects the text she wants shuffled
- Text is between --- and --- (some character or group of characters that will be unique)
- Text will be on lines 5 thru 10, etc.

Second, you have a Python action that does something like:

    import random
    my_list = my_text.split('\n')
    random.shuffle(my_list)
    my_text = '\n'.join(my_list)

Here's a very simple workflow that shuffles the selected lines: The Python code is obviously very similar to @ccc's.

First of all, thank you. But, second, I get an error, which most certainly originates in the fact that I am an ignoramus in Python and do not know how to adapt your answer to my needs. Thank you again nevertheless.

Thank you very much for your answer, which works perfectly. And thank you too for Editorial, which is a truly beautiful app. (NB: on Mac OS X, my editor of choice is Vim, not for programming, but for writing.)

Side question (only answer if you have got time): how to sort that same list on the n-th column?

And of course, best wishes for 2017!
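The side question at the end of the thread — sorting the same list on its n-th column — can be handled along the same lines as the shuffle snippet above. A small sketch; the function name and the whitespace-separated-column assumption are mine, not from the thread:

```python
def sort_by_column(text, n, sep=None):
    # Sort the lines of `text` by their n-th column (0-based),
    # splitting each line on `sep` (whitespace by default).
    lines = text.split('\n')
    return '\n'.join(sorted(lines, key=lambda line: line.split(sep)[n]))

print(sort_by_column('Paris 2\nBerlin 1\nLondon 3', 1))
# Berlin 1
# Paris 2
# London 3
```

Note that this compares the column values as strings; wrap the key in `int()` if the column holds numbers, so that '10' does not sort before '2'.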
https://forum.omz-software.com/topic/3797/how-to-shuffle-selected-lines-random-sort-in-editorial
Web Farm with a Firewall

Topic Last Modified: 2005-05-24

The following figure illustrates a Web farm scenario.

Front-end and back-end topology in a Web farm

A corporation is deploying Outlook Web Access to 200,000 users. The goal is to have a single namespace (for example,) in which users can reach their mailboxes. Additionally, for performance reasons, the corporation wants to avoid having a bottleneck at the front-end server or a single point of failure, so they want to spread the load over multiple front-end servers by using Network Load Balancing (NLB). This scenario is referred to as a "Web Farm."

For detailed setup instructions, see How to Set Up a Front-End and Back-End Topology with a Web Farm Behind a Firewall. For information about how to set up Network Load Balancing, see the Windows online documentation. Configuring Exchange on the front-end servers does not require any special steps.
https://technet.microsoft.com/en-us/library/bb124160(d=printer,v=exchg.65).aspx
Search FAQ

CITATION BROWSING

If you know the citation (volume and page number), go to the Back issues page. This is currently available for the Pharmaceutical Journal.

ONE MINUTE GUIDE TO SEARCHING

Type one or more words into the search box to find words anywhere.
example: "nhs plan"

Capitalise the initial letter of word(s) to do an exact search.
example: "nhs Plan"
example: National Health Plan
or use CAPITALS
example: NATIONAL HEALTH PLAN

Do a Boolean search by separating individual words with AND OR NOT and grouping with ( ).
example: (nhs and Plan) not 2000
example: (NATIONAL AND HEALTH AND PLAN) NOT 2000

You do not need to click the Boolean check box. Just enter a Boolean search phrase and press return (or click the GO button). If the Boolean search box is checked and you want a normal search, just enter it. A Boolean search will only be done if the words are separated by AND OR NOT.

SEARCH FAQ

How do I search?
Enter a few words into a search box and press return (or click the GO button). Use lower-case letters.

I want to search for a phrase
Surround the phrase with double quotes, such as "nhs plan".

Can I search for a phrase plus other words?
You can, but it is not advisable: you can find dozens or even hundreds of matches. Instead do a Boolean search, for example nhs and plan and 2001.

Why are there so many results?
Each word you type is examined for variations. For example, tablet will also find tabulate and tabernacle. You can look for a specific word by capitalising the first letter: Tablet (this will also find tablets). You can use initial capital letters with one or more words, including those in phrases, for example, "nhs Plan".

How do I search for a page number?
To search for an article that begins with a known page number enter, eg, p576. There is no space between the p and the page number. Narrow the search by including words, phrases or volume number, such as p576 266.

There are still too many results!
Entering a single word will return numerous results. Try entering a phrase or additional words to narrow the search. Boolean searches are useful.

Can I search for future events?
There is a notice-board section on PJ Online (see the top of the home page) with branch meetings, future events and conferences.

Does the search include PDF files?
The search automatically includes every PDF file. For more on PDF files, see the PDF FILES section below.

I can't find an article
PJ Online contains the contents of The Pharmaceutical Journal from August 1999 onwards. It also contains the Hospital Pharmacist from January 2000, every issue of Primary Care Pharmacy, plus Medicines Management, Pharmacy Assistant and Tomorrow's Pharmacist. For articles before these dates the Society's library operates a photocopy service (telephone +44 (0)20 7572 2300). For more details on the Photocopy and Reprint services, please refer to the Site map for their links.

BOOLEAN SEARCH

What is a Boolean search?
Boolean searching is an alternative way of searching and provides a means of finding specific combinations of individual words. You can use the following operators between words in a Boolean query: AND, OR, NOT. NOT is used before each word you wish to specifically exclude. A NOT can only be used as part of a Boolean query; it cannot be used on its own. Round brackets can be inserted around parts of the query to control the order in which the operators are evaluated.

Here are some examples. Note the initial capital letters to ensure those words only are searched for.

(Homer or Marge) and Simpson
This will return documents containing Homer and Simpson as well as documents containing Marge and Simpson.

Homer or (Marge and Simpson)
will return documents containing just Homer as well as documents containing Marge and Simpson.

(Homer Simpson) or (Marge Simpson)
will return documents containing Homer and Simpson as well as documents containing Marge and Simpson.
(Homer or Marge) and Simpson not Bart
will return documents containing Homer and Simpson as well as documents containing Marge and Simpson, but it will exclude documents containing Bart.

Note: Boolean syntax does not support phrase searching.

PDF FILES

Whatever kind of search you do, Boolean or otherwise, the contents of PDF files will also be searched. On the search results page, PDF files are indicated with a logo. When you click on the PDF's link and open it with Acrobat Reader, the word(s) you searched for will not be highlighted. Instead you can use Acrobat Reader's own search facility to look for words or phrases in the PDF file.

Ways to search a PDF using Acrobat Reader include:
- clicking the binocular icon
- pressing CTRL-F (PC)
- pressing command-F (Apple Mac)

Regular users of PJ Online's search facilities may notice fewer results (matched files). Previously, individual PDF files would appear more than once if the words/phrases being looked for were on more than one page. This has been changed so that a PDF will be listed only once.
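To make the operator semantics above concrete, here is a toy Python sketch of a Boolean matcher — my own illustration, not the site's actual search engine — which ignores the capital-letter exact-match rule and, like the Boolean syntax itself, does not support phrases:

```python
import re

def boolean_match(query, text):
    # Reduce the document to a set of lowercase words.
    words = set(re.findall(r"\w+", text.lower()))

    def rewrite(token):
        lower = token.lower()
        if lower == "not":
            return "and not"        # NOT excludes the following term
        if lower in ("and", "or"):
            return lower
        if token in ("(", ")"):
            return token
        return str(lower in words)  # each search term becomes True/False

    tokens = re.findall(r"\(|\)|\w+", query)
    # After rewriting, the expression contains only booleans, round
    # brackets and boolean operators, so eval() is safe here.
    return eval(" ".join(rewrite(t) for t in tokens))

print(boolean_match("(Homer or Marge) and Simpson not Bart", "Marge Simpson"))
# True
```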
http://www.pharmj.com/popup/searchHelp.html
Nate Cavanaugh Nate Cavanaugh YUI, Liferay, and the future Nate Cavanaugh 2014-08-30T03:13:21Z 2014-08-29T21:34:29Z <p class="p1"> As many of you <a href=""><span class="s1">may have read</span></a>, Yahoo is immediately stopping development on the YUI library.</p> <p class="p1"> This decision, while it will have a significant impact, is <a href=""><span class="s1">not</span></a> <a href=""><span class="s1">news</span></a> <a href=""><span class="s1">to</span></a> <a href=""><span class="s1">us</span></a>. Given our close relationship with the YUI team, we knew this was coming, and have been discussing for a couple of months different plans of action.</p> <p class="p1"> In the spirit of transparency, I do want to say that we don't yet have an official direction decided, but we are looking at all of our options, as well as discussing with our partners, and other companies/organizations with a vested interest in keeping YUI alive.</p> <p class="p1"> I'd like to address two things in this blog post: what this means for existing EE customers, and what options are we looking towards.</p> <p class="p1"> </p> <p class="p2"> <b>What does this mean for existing EE customers?</b></p> <p class="p1"> Nothing will change for our existing customers. 
We have long had a fork of YUI that we apply our patches and changes to, and we will continue to deliver bug fixes and needed changes for as long as that version of EE is supported.</p> <p class="p2"> </p> <p class="p2"> <b>What options are we looking towards?</b></p> <p class="p1"> I'd like to list off a few options that we're discussing, partially to reinforce that we are seriously thinking about it, but also to solicit feedback and ideas regarding your needs or concerns.</p> <p class="p1"> Of course, the most obvious possibility is to take over stewardship of YUI, whether that is us on our own, or with any number of the other large companies that leverage YUI heavily.</p> <p class="p1"> Another possibility is to of course migrate off of YUI, and while still keeping AlloyUI as the wrapping library, but move to some other library internally, and still keep the functionality as close as possible.</p> <p class="p1"> One more idea we've discussed is to possibly take our fork of YUI, and branch off a next generation version, like a YUI4, that cleans out the legacy code, streamlining and simplifying the library.</p> <p class="p1"> Ultimately, we want to take the path that best serves our community and clients, and helps deliver amazing experiences as quickly and easily as possible.</p> <p class="p1"> As we get more info, and have more info to provide, we'll make sure to keep you in the loop on the path we're going to go with.</p> <p class="p1"> If you have ideas, thoughts, concerns or questions, don't hesitate to let us know.</p> Nate Cavanaugh 2014-08-29T21:34:29Z The Nitty-Gritty: Theme Improvements and Bootstrap in Liferay 6.2 Nate Cavanaugh 2014-01-14T21:14:48Z 2014-01-10T20:18:13Z <p>.</p> <p> <a href="">Jorge's</a> discussed a lot of the benefits, and the feedback we've gotten from the community has definitely been great.</p> <p> If I had to sum-up, here are the most common questions I've gotten from developers about Bootstrap in 6.2:</p> <ul> <li> Why did you 
choose version 2.3.2 instead of 3? <li> How do I use my Bootstrap theme? <li> Do you support Bootstrap's JavaScript plugins? <li> Why do all of the Bootstrap rules have .aui in front of them? </ul> <p> You may in fact have wondered those same things. Or maybe you didn't, but now that I've mentioned it, it's eating a hole in your brain. In order to alleviate your burning curiosity, I'll answer these questions first.</p> <h3> Why did you choose version 2.3.2 instead of 3?</h3> <p>:</p> <ol> <li> It was released on August 19th, 2013, roughly a month and a half before we were planning on releasing. Trying to cram it in at the last minute would have led to nothing but major bugs, weeping, gnashing of teeth, etc. <li> It completely dropped support for IE7 and below. While in Liferay 6.2 we provided limited support for IE7 and below, it's just not feasible yet for this version to completely drop support for everyone across the board. </ol> <p> Hopefully that makes sense, and technically, you could still use Bootstrap 3 in your own theme and portlets (I'll go more into how this may be possible below).</p> <h3> How do I use my Bootstrap theme?</h3> <p> A common case is that someone has taken a generated theme (from a site such as <a href="">Bootswatch</a>) and wants to use it inside of Liferay. If you're a theme developer, here's the easiest way you could accomplish that (I'm assuming you're using the plugins SDK and are familiar with placing your files in the _diffs/ directory):</p> <ol> <li> Inside of your theme's _diffs/css/ directory, create a file called aui.css. <li> Open the aui.css file and do a find/replace with the following values: <strong>find</strong>: <code>../img/</code> <strong>replace</strong>: <code>../images/aui/</code> (and of course, deploy your theme). 
</ol> <p>:</p> <pre> <code> "; } </code></pre> <h3> Do you support Bootstrap's JavaScript plugins?</h3> <p> <a href="">Modal</a>, <a href="">Tooltips</a>, <a href="">Pagination</a>, <a href="">Popovers</a>, <a href="">Tabs</a>, and <a href="">more</a>. If there are ones you would like, <a href="">please let us know</a>, and we'll definitely prioritize getting them in :)</p> <h3> Why do all of the Bootstrap rules have .aui in front of them?</h3> <p> This is one of those changes that doesn't seem like much, but is actually really powerful.</p> <p>.</p> <p> Bootstrap is a very opinionated framework, which is what many people love, and many people can be frustrated by.</p> <p> Previously, we always prefixed our CSS classes with <code>.aui-</code>, which is by far the safest. But this seemed wrong to do with Bootstrap's CSS classes. For one thing, it made it so that you couldn't easily just copy/paste the examples from the Bootstrap documentation and use it.</p> <p> What we decided to do instead was to place a selector of <code>.aui</code> before all of Bootstrap's rules, so that the rules look like <code>.aui .btn</code>, etc.</p> <p> What this does is allows you to not only easily remove Bootstrap from affecting the page, but even allows you to only apply Bootstrap selectively to different portions of the page (and this applies to Bootstrap's normalize.css rules as well).</p> <p> For instance, let's imagine you want to only apply Bootstrap to the portlet's, but not touch anything else on the page. 
You would simply remove the aui CSS class from the <code>$root_css_class</code> variable, and edit your portlet.vm file and add it there.</p> <p> You can take this CSS class and apply it anywhere (maybe you want everything only in one specific layout column, or only on one specific page, etc).</p> <p> This is actually really exciting for theme developers and system integrators, but also allows casual users the ability to use Bootstrap without having to do any crazy workarounds.</p> <h2> What else is new in 6.2?</h2> <h3> Bootstrap is completely controllable from the theme level</h3> <p>).</p> <p>).</p> <h3> An easier way to do media queries</h3> <p> We have also added a mixin called "respond-to" that allows you to easily target certain types of devices without having to remember the media queries. For example:</p> <pre> <code>/* These rules only apply on phone-sized devices */
@include respond-to(phone) {
	body {
		background: red;
	}
}
</code></pre> <p> What's cool about this is that you can use it at any level in your SCSS. For instance, you can either group a set of rules like so:</p> <pre> <code>@include respond-to(phone) {
	body {
		background: red;
	}

	input {
		width: 100%;
	}
}
</code></pre> <p> or, you can use it in the middle of an already deeply nested SCSS structure, such as:</p> <pre> <code>body {
	input {
		background: white;

		@include respond-to(phone) {
			background: blue;
		}
	}
}
</code></pre> <h3> A new Dockbar display mode</h3> <p> One of the things you may have noticed about the Classic theme in 6.2 is that the Dockbar doesn't appear to be a bar, exactly. If you're not sure what I mean, here is what it looks like in 6.2:</p> <p> <img alt="image" src=""></p> <p> It may not be obvious, but in this mode, the dockbar is actually split, as you can see here:</p> <p> <img alt="image" src=""></p> <p> One of the things we wanted to do was to find a way to make the dockbar appear a little less intrusive to the overall design. We also wanted this to be something that all themes could use if they wanted to.</p> <p> So what we did was add a new mode for the Dockbar. 
Basically, if you add a CSS class onto your body element called <code>dockbar-split</code>, it will trigger the new display mode.</p> <p> In order to add this CSS class, you don't even need to overwrite your portal_normal template. Simply create a file _diffs/templates/init_custom.vm and in there add this code:</p> <pre> <code>#set ($css_class = "${css_class} dockbar-split")</code></pre> <p> And here's what the Split Dockbar looks like with no customizations:</p> <p> <img alt="image" src=""></p> <p> As you can see, it's much less targeted to any one specific design.</p> <h3> We are bundling in FontAwesome by default</h3> <p> If you haven't heard of <a href="">FontAwesome</a>, you should definitely check it out. It offers a wide range of icons and a lot of flexibility.</p> <p> But the main benefits can be summed up as:</p> <ul> <li> Completely scalable icons, even for high resolution devices <li> Easily change the size and the color of the icons via CSS </ul> <h2> Final words</h2> <p> Overall, we've addressed a lot of the issues we've seen theme developers run into, as well as added features that solve many of their common business goals.</p> <p> As always, feel free to ask any questions you may have here, or in the forums :)</p> Nate Cavanaugh 2014-01-10T20:18:13Z Liferay.com, mobile sites and responsive layouts Nate Cavanaugh 2011-05-23T05:40:52Z 2011-05-23T05:06:14Z <p> <a href="">Bryan</a> recently <a href="">blogged about the new site design</a>.</p> <p style="text-align: center; "> <img alt="Liferay.com can resize from desktop to mobile dynamically" src="" title="Liferay.com can resize from desktop to mobile dynamically" /></p> <p> Because we like to keep our ear to the ground on all things front end, we've actually been following the discussion/debate about responsive web design since <a href="">Ethan Marcotte</a> published his <a href="">article</a>.</p> <p> As far as mobile strategies go, it's probably the easiest to implement, and there are a lot of quick wins with it. 
However, <a href="">Jon Neal</a> pointed out to me when we were first discussing this idea that there’s very little difference between using CSS media queries and just using Javascript to simulate the same thing.</p> <p>).</p> <p>.</p> <p> What do those numbers mean? 960px is based on the uber-popular <a href="">960 grids</a>.</p> <p>.</p> <p> If your target device is 600x800, you can still target that device with the css classes.</p> <p> With what Jon prototyped, I’ve created an Alloy module that codifies this so that it’s super easy to use (while at the same time, staying out of the way for users who don’t wish to use it).</p> <p> For a quick and simple demo, open up <a href="">the viewport demo</a> (if you view it in Chrome or Safari, Firefox or Opera, you’ll see some cool CSS3 generated content/animation, but the demo works in any browser).</p> <p> To get the module to run, all you need to do is call: <code>AUI().use('aui-viewport')</code> in a script tag anywhere on your page.</p> <p> Now you can target your site design to specific device sizes. 
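Under the hood, an approach like this boils down to comparing the window width against the breakpoints and toggling CSS classes. Here is a standalone sketch of that logic — my own illustration, not the actual aui-viewport source — using the aui-view-* class names and the 320/480/720/960 widths discussed in this post:

```javascript
// Given a window width, compute the breakpoint CSS classes.
function viewportClasses(width, breakpoints) {
  breakpoints = breakpoints || [320, 480, 720, 960];
  var classes = [];
  var matched = breakpoints[0]; // narrower than everything: clamp to smallest

  breakpoints.forEach(function (bp) {
    if (width > bp) {
      classes.push('aui-view-gt' + bp);
    } else if (width < bp) {
      classes.push('aui-view-lt' + bp);
    }
    if (width >= bp) {
      matched = bp; // remember the largest width the screen still covers
    }
  });

  classes.push('aui-view-' + matched);
  return classes;
}

console.log(viewportClasses(800).join(' '));
// aui-view-gt320 aui-view-gt480 aui-view-gt720 aui-view-lt960 aui-view-720
```

In a real page this would run on load and on resize, swapping the computed classes onto the body element.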
Let’s say we want to give our navigation items to sit side by side when we view on the ipad or larger, but that we want to have them stack on smart phones:</p> <pre> <code>#navigation li { display: inline; float: left; } .aui-view-lt720 #navigation li { display: block; float: none; } </code></pre> <p> or let’s say we don’t want to make sure our images don’t exceed a certain width on the iPad in portrait mode</p> <pre> <code>.aui-view-720 img { max-width: 300px } </code></pre> <p> Or perhaps we want to target just portrait mode of smartphones and tablets:</p> <pre> <code>.aui-view-720 body, .aui-view-320 body { background: #ccc; } </code></pre> <p> You can even combine this with our other browser selectors to target very specific browsers:</p> <pre> <code>.touch.aui-view-lt720 {} /* Touch based smartphones */ .webkit.aui-view-lt960 {} /* Webkit based tablets and smartphones */ .win.aui-view-720 {} /* Smaller browser views on just Windows */ </code></pre> <p> Now, how is the logic applied? If the screen is equal to or between any of our defined widths, it will get this CSS class: <code>aui-view-{WIDTH}</code>. And, if the screen is greater than any of our widths, it will also get: <code>aui-view-gt{WIDTH}</code>. 
Lastly, if the screen is less than any of our widths, it will also get: <code>aui-view-lt{WIDTH}</code>.</p> <p> So a window size of 1900x1200 would get:</p> <pre> <code>aui-view-gt320 aui-view-gt480 aui-view-gt720 aui-view-gt960 aui-view-960 </code></pre> <p> whereas a window size of 800x600 would get:</p> <pre> <code>aui-view-gt320 aui-view-gt480 aui-view-gt720 aui-view-720 aui-view-lt960 </code></pre> <p>.</p> <p> <strong>Caveats</strong></p> <p> Life is not all <a href="">cheese and bananas</a> with this system, as there are some caveats to be aware of:</p> <ul> <li> <p> You are delivering the same content to all devices.</p> <p>.</p> </li> <li> <p> Designs can require more planning for each of the design layouts</p> <p>.</p> </li> </ul> <p> <strong>How can you get this module?</strong></p> <p>.</p> <p> <strong>Conclusions</strong></p> <p> Overall, the response has been great, and the design/development process was surprisingly smooth. I would recommend it as a general principle for your design, as it helps our designs fit the fluid nature of the web.</p> <p> <strong>More information</strong></p> <p> <a href="">Responsive Web Design by Ethan Marcotte</a><br /> <a href="">MediaQueri.es</a><br /> <a href="">The practicalities of CSS Media Queries, lessons learned.</a></p> Nate Cavanaugh 2011-05-23T05:06:14Z Using jQuery (or any Javascript library) in Liferay 6.0 Nate Cavanaugh 2010-07-21T15:21:24Z 2010-07-20T16:03:16Z <p> One of the biggest feature requests from Liferay 5.2 was the ability to upgrade the included version of jQuery. Many users would like to use third-party plugins, and most of those require the latest jQuery library (1.4.x as of this writing).</p> <div>So for 6.0, we solved this a couple of different ways. First, we no longer include jQuery by default. 
We have rebuilt our UI to run off of AlloyUI which is built on top of YUI3.</div> <div>By moving off of jQuery, it's also allowed us to step out of the way of developers who wish to use any version of jQuery that they need without worrying about conflicts with the core portal javascript.</div> <div>The other way we solved this for the future was by creating our own namespace. Since we're still using a Javascript library (YUI3), we would still have the same risk of conflicts.</div> <div>So instead of calling YUI() in the portal, we created AUI(). By creating the "AUI" namespace, we are able to guarantee that our environment won't conflict with someone who wants to upgrade their version of YUI3 in the future.</div> <div> </div> <div>But even though we believe strongly in AlloyUI and YUI3, there are existing applications with codebases on jQuery and porting them over is not always possible.</div> <div>Or perhaps there is some other Javascript library (such as YUI2, Dojo, qooxdoo, ExtJS, etc) that you need to include for the same reason.</div> <div> </div> <div>So today, I want to show a couple of ways to include the third-party javascript library into Liferay that you want. I'll be using jQuery, and I'll be using the URL to their "production" version:</div> <div> </div> <div>There are a couple of ways you can include jQuery onto the page.</div> <div><a href="">Jonas</a> has covered a great way in his blog post on <a href="">building jQuery portlets in Liferay 6</a>.</div> <div> </div> <div>First, using the same basic principle, is including it in your portlet.</div> <div>In your liferay-portlet.xml add this entry:</div> <pre> <header-portlet-javascript></header-portlet-javascript><br /></pre> <div>That will add jQuery 1.4.2 onto the page wherever that portlet happens to be rendered.</div> <div> </div> <div>Second, the other way is to add it into your theme. 
Inside of your theme's templates/portal_normal.vm you would add this line in the head of your theme:</div> <pre> <script src=""></script><br /></pre> <div>This will make jQuery available everywhere in Liferay, including plugins that you deploy.</div> <div> </div> <div>Third, you can even use AlloyUI to load up other JS files. This is useful if you can't or don't want to edit either the liferay-portlet.xml or the HTML.</div> <div> </div> <div>In any Javascript that gets added to the page, you can do:</div> <pre>
AUI().use('get', function(A){
	A.Get.script('', {
		onSuccess: function(){
			// jQuery() can be used in here...
		}
	});
});
</pre> <div> </div> <div>.</div> <div> </div> <div>This allows people who don't want to upgrade the JS portion of their app to easily include the previous version.</div> <div>The way that would look different is that it would just point to the different path, like so:</div> <pre> <header-portlet-javascript>/html/js/jquery/jquery.js</header-portlet-javascript><br /></pre> <div>The path to the previous version is:</div> <div><em>/html/js/jquery/jquery.js</em> and the <em>/html/js/jquery/</em> directory contains all of the plugins from 5.2 that work with jQuery 1.2.6.</div> <div> </div> <div>I hope that is helpful, and much thanks to Jonas for his blog post about the sample jQuery plugin. 
And of course, please let me know if there are any questions :)</div> Nate Cavanaugh 2010-07-20T16:03:16Z AlloyUI - Working with Widgets, Part 1 Nate Cavanaugh 2010-04-14T15:33:35Z 2010-04-14T15:00:00Z <p><strong> What is an Alloy widget?</strong></p> <div>A widget is a reusable, self-contained piece of UI: it renders its own markup, keeps track of its own state, and responds to user interaction. Imagine something like a calendar that draws a grid of days, handles paging between months, and lets users pick dates.</div> <div>That would be a pretty complex widget.</div> <div>A simple widget would be the Tooltip widget, which just shows users some content when they hover over an item on the page.</div> <div> </div> <div>In Alloy, there's a large collection of widgets, as well as a robust framework for building your own.</div> <div> </div> <div>In this post, we're going to focus on using the widgets that ship with Alloy; next time, we'll look at building your own.</div> <div> </div> <div>So let's go ahead and get our sandbox ready:</div> <div> </div> <pre>
AUI().use('', function(A){
	// Our code will go here
});
</pre> <div>I'll reference the name of the widget above the code so you can see the module that we'll be using in order to run it.</div> <div> </div> <div>So let's go ahead and create a TreeView of a user directory that has drag and dropping of elements, and expandable folders.</div> <div> </div> <div>Here's the demo: <a href="">TreeWidget</a></div> <div>Here's the code you would have to write:</div> <div> </div> <div>You would use the <code>aui-tree-view</code> module</div> <div> </div> <pre>
var users = new A.TreeViewDD({
	children: [{
		label: 'Users',
		children: [{
			label: 'Nate Cavanaugh',
			children: [
				{label: 'Documents'},
				{label: 'Downloads'},
				{label: 'Movies'},
				{label: 'todo_list.txt'}
			]
		}]
	}]
}).render();
</pre> <div> </div> <div>Let's look at what's going on here. First we're doing a traditional object construction in JavaScript, which is creating a new object. You're saying give me a new TreeView, and let's call it "users".</div> <div> </div> <div>So what's with that <code>.render()</code> at the end?</div> <div> </div> <div>That <code>render()</code> does not have to be called until you're absolutely ready to display the widget. In fact, there are many times where you may wish or need to configure a widget, but only render it under certain circumstances, or after a certain period of time has passed.</div> <div> </div> <div>But if you don't care about waiting, you can just do it all inline (render will still return a reference to the widget).</div> <div> </div> <div>We've just created a tree widget, and have this users variable, so what? Why is this exciting if you're just *using* widgets?</div> <div> </div> <div>Because even if you're just using widgets, you can still do VERY interesting stuff with these widgets. Remember how we talked about <a href="">working with elements</a>?</div> <div> </div> <div>The widget API is VERY similar to the Node API, meaning that in the same way that you can do <code>get/set</code> for properties on a Node, you can also do that on a Widget. And in the same way that you can use <code>.on()</code> for nodes, you can also do the same thing for Widgets.</div> <div> </div> <div>But before we jump into that, let's look at another widget, something like Tooltip.</div> <div> </div> <div>So let's go ahead and create a Tooltip. 
We'll use the <code>aui-tooltip</code> module:</div> <pre>
var tooltip = new A.Tooltip({
	trigger: '.use-tooltip',
	bodyContent: 'Hello World!'
}).render();
</pre> <div> </div> <div>The <code>trigger</code> is a selector for the element(s) that should show the tooltip when hovered, and <code>bodyContent</code> is what gets displayed inside of it.</div> <div> </div> <div>So now we have our tooltip object, what can we do to it? Well, all of those things we passed into it, like <code>trigger</code>, and <code>bodyContent</code> can all be read or changed using <code>get/set</code>, and not only that, but you can listen in to when they're changed as well.</div> <div> </div> <div>This might take a second to realize just how cool this is, but take my word for it, it's insanely powerful.</div> <div> </div> <div>Let's take a look. We'll go ahead and change the message of the tooltip to instead say "Hey there Earth!".</div> <div> </div> <pre>
tooltip.set('bodyContent', 'Hey there Earth!');
</pre> <div> </div> <div>Now when you hover over it, the tooltip will say 'Hey there Earth!'.</div> <div> </div> <div>Moderately cool, but the real power comes from the change events that are used with this. Whenever you use get or set, the widget fires a custom event for that change.</div> <div>So in this case, the event that got fired was "<code>bodyContentChange</code>". 
Every attribute on the widget fires a "change" event when it gets changed, and you can listen to it in two phases.</div> <div> </div> <div>You can basically listen to the attribute get changed before the value is changed, and prevent it if you want, or after it gets changed.</div> <div> </div> <div>This took me a few minutes to sink in, but here are the benefits:</div> <div> </div> <div>Even as a person using the widgets, you have the ability to listen to interesting moments of what's going on and manipulate widgets on the page.</div> <div> </div> <div>Here's an example: </div> <div>All widgets have an attribute called "<code>visible</code>", and when this attribute is changed, the widget hides or shows. Usually you just call <code>widget.hide()</code> or <code>widget.show()</code>.</div> <div> </div> <div>But let's say on our tooltip, we want to listen before it shows, and if a certain element is missing on the page, we will prevent the tooltip from being shown.</div> <div> </div> <div>We would do:</div> <div> </div> <pre> tooltip.on('visibleChange', function(event){ <span class="Apple-tab-span" style="white-space:pre"> </span>if(!A.one('#myEl')){ <span class="Apple-tab-span" style="white-space:pre"> </span>event.preventDefault();<br /></pre><pre> <span class="Apple-tab-span" style="white-space:pre"> </span>} });<br /></pre> <div>So now, the tooltip will only show up if an element with the id of myEl is on the page.</div> <div> </div> <div>Or here are some more practical examples:</div> <div> </div> <div>Let's say you're creating a Dialog, and you toggle the visibility of the dialog based on if a separate ButtonItem widget is active?</div> <div> </div> <div>You would use the <code>aui-button-item</code> widget, and the <code>aui-dialog</code> widget, and do:</div> <div> </div> <pre> var dialog = new A.Dialog({title: 'Hello', bodyContent: 'Lorem Ipsum'}).render(); var buttonItem = new A.ButtonItem({active: true, label: 'Toggle Dialog'}); 
buttonItem.after('activeChange', function(event){
	dialog.set('visible', event.newVal);
});
</pre> <div>Or what about if you have a ColorPicker widget, and want to update the value of a text input when the hex code changes? You can even pass in the events when you create the widget.</div> <div> </div> <div>You would use the <code>aui-color-picker</code> module:</div> <div> </div> <pre>
new A.ColorPicker({
	after: {
		hexChange: function(event){
			A.one('#myInputNode').val(event.newVal);
		}
	}
}).render();
</pre> <div>Another thing I mentioned last time was that Widgets can be plugged just like Nodes. Well, remember our handy dandy <a href="">IO plugin</a>? 
We can plug any widget with it, and it will automatically know what elements it should grab internally.</div> <div> </div> <div>For instance, let's create another dialog, and let's plug it: </div> <pre>
var dialog = new A.Dialog({title: 'Hello World'}).plug(A.Plugin.IO, {uri: 'test.html'}).render();
</pre> <div>The plugin will smartly grab the area where the content goes, but not the area of the titlebar, and add a loading mask, and insert the content of the ajax request into there.</div> <div> </div> <div>A couple of notes:</div> <div> </div> <div>The <code>.render()</code> method will, by default, add the widget into the body element. But if you pass it either a selector or an element, it will render the widget into that element on the page.</div> <div> </div> <div>So I could have easily done this:</div> <pre>
new A.ColorPicker().render('#myContainer');
</pre> <div> For Widgets, if you attach a listener "<code>on</code>" that event, it fires before the change has been made, so you can think of it as a "before" stage. It's kind of confusing, but it works.</div> <div>If you listen to the "<code>after</code>" stage, it's after the work has already been done.</div> <div>In most cases, you'll only care about being notified after, unless you want to prevent something from happening, or want to know before it happens.</div> <div>90% of the time, I use "<code>after</code>" to listen to widget events.</div> <div> </div> <div>Anyways, that's it for now on using the widgets. 
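</div>
<div>If the two-phase on/after change pattern still feels abstract, here is a tiny framework-free sketch of the mechanics in plain JavaScript. This is only an illustration of the idea, not AlloyUI's actual implementation:</div>

```javascript
// Sketch of the two-phase change pattern: "on" listeners run before
// the change and may veto it with preventDefault(); "after" listeners
// run once the new value is in place.
function Attribute(name, value) {
	this.name = name;
	this.value = value;
	this.onFns = [];
	this.afterFns = [];
}

Attribute.prototype.on = function(fn) { this.onFns.push(fn); };
Attribute.prototype.after = function(fn) { this.afterFns.push(fn); };

Attribute.prototype.set = function(newVal) {
	var event = {
		prevVal: this.value,
		newVal: newVal,
		prevented: false,
		preventDefault: function() { this.prevented = true; }
	};

	this.onFns.forEach(function(fn) { fn(event); }); // "before" phase

	if (event.prevented) {
		return this.value; // the change was vetoed
	}

	this.value = newVal;
	this.afterFns.forEach(function(fn) { fn(event); }); // "after" phase

	return this.value;
};

// Usage: veto any attempt to hide, mirroring the tooltip example.
var visible = new Attribute('visible', true);

visible.on(function(event) {
	if (event.newVal === false) {
		event.preventDefault();
	}
});

visible.set(false);
console.log(visible.value); // true -- the "on" listener prevented the change
```

<div>The real widgets do much more (event facades, bubbling, batched attributes), but the before/veto/after flow is the same idea.</div>
<div>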
Next time, we'll talk about how you can make your own widget, and what a widget is actually made of.</div> <div>See you then!</div> Nate Cavanaugh 2010-04-14T15:00:00Z AlloyUI - Working with Plugins Nate Cavanaugh 2010-03-24T15:55:48Z 2010-03-24T15:39:19Z <p> I mentioned last time that I would talk about the IO Plugin that we have, which simplifies one of the most common tasks: updating content on the page.</p><div>So first, I'll show you how to use it, then we'll talk more about how to make your own plugin.</div><div> </div><div>So let's set up our sandbox:</div><pre>
AUI().use('io-plugin', function(A){
	// Our code will go here
});
</pre><div> </div><div>So the IO plugin is essentially everything that A.io.request is. All the same arguments, same behavior, but what it does for you is kinda cool.</div><div> </div><div>There is a common pattern we kept seeing in our ajax calls which was:</div><div> </div><ol><li>Insert loading icon into region</li><li>Do ajax call</li><li>On success, update region with new content</li></ol><div>Here is what the code might look like before:</div><div> </div><pre>
var contentNode = A.one('#contentNode');

if(contentNode) {
	contentNode.html('&lt;div class="loading-animation"&gt;&lt;/div&gt;');

	var myRequest = A.io.request('test.html', {
		on: {
			success: function(event, id, xhr) {
				contentNode.html(this.get('responseData'));
			}
		}
	});
}
</pre><div>It's a trivial process, but when you see it so often, you start to think that it's one of those patterns you could imagine yourself sitting in a padded room chanting over and over.</div><div> </div><div>Here is what that same task looks like using the IO plugin:</div><div> </div><pre>
var contentNode = A.one('#myNode');

if(contentNode) {
	contentNode.plug(A.Plugin.IO, { uri: 'test.html' });
}
</pre><div>And that's it.</div><div> </div><div>Here's what it looks like:<br /><img alt="" src="" /></div><div> </div><div>It basically will mask any existing content, and add a loading indicator that is centered above it.</div><div> </div><div>Here is some of the cool stuff about the plugin (and all plugins in Alloy):</div><div> </div><h3>1. It has its own namespace</h3><div>When the IO plugin is plugged onto a node, it brings along a whole set of its own methods and attributes (things like <code>start()</code> and <code>stop()</code>, plus all of the request configuration, such as <code>dataType</code>).</div><div>If all of that was placed on the main object, then it would conflict with any methods that might exist already on that node, or maybe another plugin.</div><div>So instead, it's placed in a namespace, and you can access that like so:</div><div> </div><pre>
contentNode.io
</pre><div> </div><div>So if you want to set the dataType on the plugin to json, you can do:</div><div> </div><pre>
contentNode.io.set('dataType', 'json');
</pre><div>or if you want to stop the connection:</div><pre>
contentNode.io.stop();
</pre><h3>2. Plugins can be "unplugged"</h3><div>This is incredibly useful if you're writing a plugin that should do some cleanup work when a user is finished with it (for instance, if you have a plugin that adds in some children elements or adds on some class names to a container).</div><div> </div><div>You would just call:</div><pre>
contentNode.unplug(A.Plugin.IO);
</pre><div> </div><h3>3. 
Plugins can be plugged to NodeLists as well as Nodes</h3><div>So this would work as well:</div><div> </div><pre>
var contentNodes = A.all('.content-nodes');

contentNodes.plug(A.Plugin.IO, { uri: 'test.html' });
</pre><div> </div><div>Then we could grab the first item in the NodeList and access the plugin namespace:</div><pre>
contentNodes.item(0).io.set('cache', false);
</pre><div> </div><h3>4. Plugins can also be on Widgets</h3><div>I'll cover widgets more next time, but the same exact process applies, and in fact, the IO plugin is written in such a way that it knows whether it's in a Node or a Widget and will behave accordingly.</div><div> </div><h3>5. Plugging doesn't have to be a per instance affair.</h3><div>You can do this:</div><pre>
A.Node.plug(A.Plugin.IO, {autoLoad: false, uri: 'test.html'});
</pre><div>Now you could do:</div><pre>
var contentNode = A.one('#contentNode');

if(contentNode) {
	contentNode.io.start();
}
</pre><div> </div><div>The difference is that since we called <code>A.Node.plug()</code> (which is a static method on the Node class), it plugs all newly created instances with that plugin.</div><div> </div><div>I recommend doing it on a per instance basis, however, simply because, one, you'll consume fewer resources, and two, you don't have to worry about whether your existing objects have been plugged.</div><div> </div><div>6. 
You can plug with multiple plugins at once.</div><div>So for instance, you can do this:</div><div> </div><pre>
contentNode.plug([
	{ fn: A.Plugin.IO, cfg: {uri: 'test.html'} },
	{ fn: A.Plugin.MyOtherPlugin }
]);
</pre><div>If that looks confusing, feel free to ignore it, but it simply is a way to pass in multiple plugins and their configurations (if they need one) all at once.</div><div> </div><h3><strong>Creating a plugin</strong></h3><div> </div><div>What's the simplest way to get started creating a plugin? Well here's what to remember: A plugin, at the very least, is a function, with a property on it called <code>NS</code> which will be its namespace.</div><div> </div><div>So for this example, I'm going to create a plugin that takes an input field, and inserts a "<code>defaultValue</code>". When you focus the field, if the value matches the "<code>defaultValue</code>", it will empty the field, and allow the user to enter their value. 
When the user moves away from the field, if they haven't entered anything new, it will add in the default text.</div><div> </div><div>If you wish to jump to the demo, go ahead and take a look here: <a href="">Plugin Demo</a>.</div><p> </p><div> </div><div>I'm going to start with this markup:</div><div> </div><pre>
&lt;input type="text" id="myInput" data-
</pre><div> </div><div>HTML5 allows for custom attributes if you prefix the attribute with "data-", so you'll notice I added a new attribute called "<code>data-defaultValue</code>", which our plugin will read.</div><div> </div><div>So I'll create the javascript:</div><div> </div><pre>
var defaultValuePlugin = function(config) {
	var node = config.host;
	var defaultValue = node.getAttribute('data-defaultValue');
	var startingValue = node.val();

	if (!startingValue) {
		node.val(defaultValue);
	}

	node.on('focus', function(event) {
		var value = node.val();

		if (value == defaultValue) {
			node.val('');
		}
	});

	node.on('blur', function(event) {
		var value = node.val();

		if (value == '') {
			node.val(defaultValue);
		}
	});
};

defaultValuePlugin.NS = 'defaultValue';
</pre><div>Now all we have to do to get it working is simply plug it onto a node:</div><div> </div><pre>
A.one('#myInput').plug(defaultValuePlugin);
</pre><div> </div><div>I'll go over some points of the code above.</div><div> </div><div>First, notice that the first line points to <code>config.host</code>. 
The argument <code>config</code> is the configuration object that is passed into the plugin, but by default the <code>host</code> is always passed into the plugin, so you always have access from the plugin to whatever is being plugged.</div><div>It's like a magic link to whatever item you're plugging.</div><div> </div><div>In the next lines, I'm doing the basic work: getting an attribute, setting a value if one hasn't been set, and, in the bulk of it, attaching focus and blur listeners to do the checking for the value.</div><div> </div><div>On the last line, I'm attaching a property called <code>NS</code> to the function that we created. This is the namespace that this plugin will live under, and even if we don't need to access anything specifically, it's there so we can plug something without worrying about it colliding with any other plugins.</div><div> </div><div>The <a href="">YUI3</a> page also offers a lot more info if you would like to investigate further as well.</div><div> </div><div>Until next time, see you guys later!</div><p> </p> Nate Cavanaugh 2010-03-24T15:39:19Z AlloyUI - Working with Ajax Nate Cavanaugh 2010-03-18T06:21:18Z 2010-03-18T06:12:26Z <div>Ajax is one of those patterns that are a must-have in a UI framework, so let's go ahead and jump right into doing some Ajax requests, and then we'll dive into the more complex cases.</div><div> </div><div>Let's prep our sandbox, but this time, the module we're going to use is called "aui-io-request".</div><div> </div><pre>
AUI().use('aui-io-request', function(A){
	// Our code will run here
});
</pre><div> </div><div>The simplest of the simple: let's assume we're just going to be making a simple ajax request to a plain html file, called "test.html".</div><div> </div><pre>
A.io.request('test.html');
</pre><div> </div><div>That's all there is to it. 
However, that's not very interesting because it doesn't do anything.</div><div> </div><div>Let's say we want to send a POST request to the server:</div><div> </div><pre>
A.io.request('test.html', {
	method: 'POST',
	data: {
		key1: 'value'
	}
});
</pre><div> </div><div>How about responding to the server? There are five possible callbacks: <code>start</code>, <code>complete</code>, <code>success</code> (or <code>failure</code>), and <code>end</code>.</div><div> </div><div>If I wanted to alert the response from the server, here's what I would do: </div><pre>
A.io.request('test.html', {
	on: {
		success: function() {
			alert(this.get('responseData'));
		}
	}
});
</pre><div>What is <code>this.get('responseData')</code>? It's basically a normalized property of what is returned from the server. 
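</div>
<div>To picture what "normalized" means here, this is a hypothetical sketch — not Alloy's actual source — of turning a raw response into a single <code>responseData</code> value based on the configured <code>dataType</code>:</div>

```javascript
// Hypothetical sketch of response normalization: whatever comes back
// from the server is exposed as one property, already parsed
// according to the dataType the caller asked for.
function normalizeResponse(rawText, dataType) {
	if (dataType === 'json') {
		return JSON.parse(rawText); // plain object for JSON responses
	}

	// In a browser, 'xml' would hand back a parsed XML document;
	// everything else stays as plain text.
	return rawText;
}

var responseData = normalizeResponse('{"myProperty": 2}', 'json');

console.log(responseData.myProperty); // 2
console.log(normalizeResponse('hello', 'text')); // hello
```

<div>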
It's useful because A.io.request supports having different types of data returned by the server and automatically handled.</div><div>For instance, if your server returns JSON like <code>{"myProperty": 2}</code>, you could do something like:</div><div> </div><pre>
A.io.request('test.html', {
	dataType: 'json',
	on: {
		success: function() {
			alert(this.get('responseData').myProperty); // alerts 2
		}
	}
});
</pre><div> </div><div>You can also work with XML that way. Assuming your server returns something like <code>&lt;name&gt;AlloyUI&lt;/name&gt;</code>, you could do:</div><pre>
A.io.request('test.html', {
	dataType: 'xml',
	on: {
		success: function() {
			alert(A.all(this.get('responseData')).all('name').text()); // alerts AlloyUI
		}
	}
});
</pre><div>You can also submit all of the data in a form via ajax as well. 
Here's the simplest version: </div><pre>
A.io.request('test.html', {
	form: {
		id: 'myFormId'
	}
});
</pre><div>That will serialize all of the data in the form, and send it to "test.html".</div><div> </div><div>One other handy feature of this is that you can define an ajax connection once, and reuse it multiple times, and start and stop it later on.</div><div>Here's an example: </div><pre>
var myAjaxRequest = A.io.request('test.html', {
	method: 'POST',
	data: {
		key1: 'value1'
	}
});
</pre><div>Now later on, if I want to make that same ajax call again, all I have to do is call:</div><div> </div><pre>
myAjaxRequest.start();
</pre><div> </div><div>But what if I want to just define the call, but not execute it the first time (for instance, you know 
you want to run it later, but you don't want to update the server), you can do:</div><div> </div><pre> var myAjaxRequest = A.io.request('test.html', { <span class="Apple-tab-span" style="white-space:pre"> </span>autoLoad: false, <span class="Apple-tab-span" style="white-space:pre"> </span>... });</pre><div> </div><div>What's cool about this is that if later on, you want to change one of the properties before you send the request, you can do that as well. For instance, let's say you want to disable caching before you start the connection again:</div><div> </div><pre> myAjaxRequest.set('cache', false);</pre><div> </div><div>Or if you wanted to change from POST to GET</div><div> </div><pre> myAjaxRequest.set('method', 'GET');</pre><div> </div><div>Or change the dataType to JSON:</div><div> </div><pre> myAjaxRequest.set('dataType', 'json');</pre><div> </div><div>Or even change the URI at the last moment:</div><div> </div><pre> myAjaxRequest.set('uri', 'new_test.html');</pre><div> </div><div>Then when you're ready you would call:</div><div> </div><pre> myAjaxRequest.start();</pre><div> </div><div>And if at any time after you have started the request, you want to stop the whole request, you can call:</div><div> </div><pre> myAjaxRequest.stop();</pre><div> </div><div.</div><div> </div><div>One of those plugins is called A.Plugin.IO, and it's incredibly awesome, because it simplifies the extremely common task of not only loading content into a node or a widget, but adding a loading indicator to that node and automatically parsing the javascript for you.</div><div> </div><div>I'll go into more details in the Plugins post, but it's really handy.</div><div> </div><div>See you then!</div> Nate Cavanaugh 2010-03-18T06:12:26Z AlloyUI - Working with elements and events Nate Cavanaugh 2010-03-16T16:21:51Z 2010-03-15T06:02:33Z <h3>Getting Started</h3> <p>Welcome to our first post in talking about Alloy. 
I'm going to jump right in, but the only piece of info I want to cover beforehand, and you'll see me do it in every post, is the idea of "sandboxes". Because AlloyUI is built on top of YUI3, it has the concept of the sandbox. What this means is simply a callback where you run your code.</p> <p>The way it's constructed is that you declare the packages that you want to use, and then, inside of your sandbox, you use them.</p> <p>The benefit to this is that it allows your code to run as lean as possible, loading only what it needs, without having to load a lot of stuff on the page first.</p> <p>How do you create a sandbox?</p> <p>Simple:</p> <pre><code>AUI().use(function(A) {
	// Your code goes here
});</code>
</pre> <p>Let's look at that real quick.</p> <p><code>AUI()</code> is a function call, and you attach a <code>.use</code> on it. Inside of that <code>.use()</code>, you can pass 1-n number of arguments, but the last argument *must always be a function*.</p> <p>You'll notice that the callback gets an argument passed to it called "A". That A is *the* Alloy object. It's where all of Alloy's objects and classes are stored.</p> <p> </p> <p>Most of the time you'll be setting up your sandbox using at least one or two packages. Here's what that would look like using the event and node packages:</p> <pre>
AUI().use('event', 'node', function(A) {
	// Your code goes here
});
</pre> <p>When you see me write code samples where I do something like:</p> <pre>
A.one('body');
</pre> <p>assume that I am inside of the sandbox.</p> <h3>Working with elements and events</h3> <p>The most common task you're most likely to come across in web development is working with elements on the page, and doing something with them. AlloyUI, because it's built on top of YUI3, has two objects for working with elements on the page, Node and NodeList. 
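</p> <p>Before digging into those, the module-loading sandbox from the previous section can be sketched in plain JavaScript. This is a deliberately simplified illustration — the real <code>AUI().use()</code> also loads module files over the network on demand:</p>

```javascript
// Framework-free sketch of the sandbox idea: declare the modules you
// want, attach only those to a fresh host object ("A"), then hand it
// to your callback.
var registry = {
	node: function(A) { A.one = function(selector) { /* ... */ }; },
	event: function(A) { A.on = function(type, fn) { /* ... */ }; }
};

function use() {
	var args = Array.prototype.slice.call(arguments);
	var callback = args.pop(); // the last argument must be a function
	var A = {};

	args.forEach(function(name) {
		registry[name](A); // attach each requested module to the sandbox
	});

	callback(A);
}

use('node', 'event', function(A) {
	console.log(typeof A.one, typeof A.on); // function function
});
```

<p>The point is that the callback only ever sees what it asked for, and nothing leaks into the global scope.</p> <p>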
These two objects are closely related, and in fact, you can think of them as almost the same. The only difference is that Node is a single item, and NodeList is a collection of Nodes.</p> <p>There are two methods you can use to get elements on the page, and each will return something different depending on the method.</p> <p>The methods are called <code>A.one()</code> and <code>A.all()</code>.</p> <p><code>A.one()</code> will return a Node object if it finds the element on the page (null otherwise), and <code>A.all()</code> will always return a NodeList (if it doesn't match any elements, it will return an empty collection).</p> <p> </p> <p>Here's a few examples of how <code>A.one()</code> would be used:</p> <pre> var el_1 = A.one('#myCustomElement1'); var el_2 = A.one('.custom-element'); var el_3 = A.one('input[type=checkbox]');<br /></pre> <p>Notice how <code>A.one()</code> will accept selector strings that are not just for ID elements? What if there are multiple elements matching the selector? It will just return the first one. This is useful in many situations, and has an impact on performance.</p> <p>If the selector cannot be matched on the page, then <code>A.one()</code> will return null. This means that in order to operate on the Node element, you have to first do an if check on the variable then do the work.</p> <p>For instance:</p> <pre> var el_1 = A.one('#myCustomElement1'); if(el_1) { <span class="Apple-tab-span" style="white-space:pre"> </span>el_1.setStyle('height', 50); } </pre> <p>This can seem a bit verbose to some people, so it could be avoided if you wish. You could write the above like so:</p> <pre> A.all('#myCustomElement1').setStyle('height', 50); </pre> <p>without risk of throwing an error.</p> <p>So why do I prefer <code>A.one()</code>? Mainly because of performance. 
<code>A.one()</code> will run about 2-4x faster to grab the element, but it also helps me write out clearer code (I know that if I'm not updating a block of code it's because it didn't find an element, whereas trying to debug long chains of code is a nightmare).</p> <p>But both methods are there for you.</p> <p>What kind of selectors are available? By default, anything in CSS2, which covers 98% of the cases and most of the selectors I've needed to write.</p> <p>However, there is a CSS3 module, and if you need to do something like:</p> <p><code>A.all('.custom-element > div:nth-of-type(even)')</code>, just add the "selector-css3" module to your sandbox, and the selectors are available to you.</p> <p>It's pretty rare that we've actually *needed* these selectors, but again, they're there if you need them.</p> <p>That covers the basics on getting the elements; what about doing something with them?</p> <p>So, let's cover some common tasks:</p> <h3>Setting styles</h3> <p>I'm going to grab my element:</p> <pre> var nodeObject = A.one('#myElement');<br /></pre> <p><strong>Setting a background color:</strong></p> <pre> nodeObject.setStyle('backgroundColor', '#f00'); //Sets the background color to red<br /></pre> <p><strong>Setting a border:</strong></p> <pre> nodeObject.setStyle('border', '5px solid #0c0'); //Sets a large green border<br /></pre> <p>But what if I want to set multiple styles all at once? Just simply use the setStyles method (notice the "s" on the end of the name?).</p> <pre> nodeObject.setStyles({ <span class="Apple-tab-span" style="white-space:pre"> </span>height: 200, <span class="Apple-tab-span" style="white-space:pre"> </span>width: 400 }); </pre> <p>You can also get the current style for an element by doing something like:</p> <pre> nodeObject.getStyle('border');<br /></pre> <p>One common task I think we've all done is to try to position a box somewhere on a page.
Usually we'll just set the styles on the element, including the positioning.</p> <p>For instance, let's say we wanted to move something to exactly 100px from the left, and 200px from the top.</p> <p>Usually we might do something like:</p> <pre> nodeObject.setStyles({ <span class="Apple-tab-span" style="white-space:pre"> </span>left: 100, <span class="Apple-tab-span" style="white-space:pre"> </span>position: 'absolute', <span class="Apple-tab-span" style="white-space:pre"> </span>top: 200 }); </pre> <p>But then, what happens if it's inside of a positioned container? It will be relative to the container, then your offset will be off.</p> <p>Instead, here's how you would do it now:</p> <pre> nodeObject.setXY([100, 200]);<br /></pre> <p>And it will automatically calculate the parent's positioning for you, guaranteeing that it's at the spot absolutely on the page that you want it. Much shorter code and much more accurate.</p> <p>But here's what is really cool related to this: oftentimes you just want to center an item on the page absolutely. Here's how you would do it:</p> <pre> nodeObject.center();<br /></pre> <h3>Working with class names</h3> <p>All of the most convenient ways of working with class names, and then some, are here:</p> <pre> nodeObject.addClass('custom-class'); nodeObject.removeClass('custom-class'); nodeObject.toggleClass('custom-class'); nodeObject.replaceClass('custom-class', 'new-class'); nodeObject.hasClass('custom-class'); nodeObject.radioClass('custom-class');<br type="_moz" /></pre> <p>In that last line, <code>radioClass()</code> will remove the class name from all of the sibling elements, and add it only to the current item, similar to how a radio button would behave.
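</p><p>To make the <code>radioClass()</code> behavior concrete, here is a small sketch in plain JavaScript. It models "nodes" as simple objects carrying a set of class names — real Alloy Nodes operate on live DOM elements, so this is only a conceptual illustration of the logic, not Alloy's implementation:</p>

```javascript
// Conceptual sketch of radioClass() semantics (not Alloy's implementation).
// Each fake "node" carries a Set of class names.
function makeNode(name) {
	return { name: name, classes: new Set() };
}

function radioClass(siblings, target, className) {
	siblings.forEach(function(node) {
		node.classes.delete(className); // strip the class from every sibling
	});

	target.classes.add(className); // ...then add it only to the target
}

var items = [makeNode('a'), makeNode('b'), makeNode('c')];
items[0].classes.add('selected');

radioClass(items, items[2], 'selected');
// Only items[2] carries 'selected' now, like a checked radio button.
```

<p>That last comment is the whole idea: exactly one element in the group ends up with the class, no matter who had it before.</p><p>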
</p> <p> </p> <h3>Manipulating elements</h3> <p><strong>Appending a new element to the nodeObject:</strong></p> <pre> nodeObject.append('<span>New Text</span>');<br /></pre> <p><strong>Appending the nodeObject to another element already on the page:</strong></p> <pre> nodeObject.appendTo('body');<br /></pre> <p><strong>Updating the innerHTML of an element:</strong></p> <pre> nodeObject.html('<b>new text</b>');<br /></pre> <p><strong>Removing an element:</strong></p> <pre> nodeObject.remove();<br /></pre> <p><strong>Creating a brand new node from scratch:</strong></p> <pre> var newNodeObject = A.Node.create('<div id="myOtherElement">Test</div>');<br /></pre> <h3>Moving up and down the elements</h3> <p>Often you need to jump around to different elements relative to the current one you're on (for instance, to find a parent of a current item or a child/children).</p> <p><strong>Finding the first parent of nodeObject with the class name of .custom-parent:</strong></p> <pre> nodeObject.ancestor('.custom-parent');<br /></pre> <p><strong>Finding the first child with the class name of .custom-child:</strong></p> <pre> nodeObject.one('.custom-child');<br /></pre> <p><strong>Finding all children with the class name of .custom-child:</strong></p> <pre> nodeObject.all('.custom-child');<br /></pre> <p>It's interesting to note that most of the methods that are on Nodes are also on NodeList. The ones that aren't are usually just the getters where it wouldn't make sense for a collection of items to return the data from any one item.</p> <p>Meaning this: it makes sense to have a collection, like nodeListObject, which contains 5 div elements, and when you call <code>nodeListObject.setStyle()</code> for that style to be applied to all 5 elements, or if you call <code>nodeListObject.append('<b>test</b>')</code> for it to append a new b element to every item.</p> <p>But it doesn't make much sense to do: <code>nodeListObject.getStyle('backgroundColor')</code>.
What should it return? The first item in the collection? The last item?</p> <p>And since it's insanely easy to do this instead:</p> <pre> nodeListObject.item(0).getStyle('backgroundColor')</pre> <p>it just makes more sense not to add the methods onto the NodeList to avoid confusion when getting data out of an element.</p> <h3>Getting properties</h3> <p>Now, here comes a really interesting part. Since nodeObject is a wrapped element, you can't just do nodeObject.id to get the id or nodeObject.parentNode. If you tried that, it would return undefined.</p> <p>Instead, we do <code>nodeObject.get('id')</code> or <code>nodeObject.get('parentNode')</code>.</p> <p>Here's what is REALLY cool about using the getter: <code>nodeObject.get('parentNode')</code> will return another wrapped Node object, and if it's a collection, like <code>nodeObject.get('childNodes')</code>, it will be a wrapped NodeList object.</p> <p>So all of the DOM properties are available.</p> <p>EVEN cooler:</p> <p>get will accept a dot (.) separated list of properties and traverse it for you. So let's say you know you have an item exactly three parents up, and want to set the background color to red:</p> <pre> nodeObject.get('parentNode.parentNode.parentNode').setStyle('backgroundColor', '#f00');<br /></pre> <h3>Interaction time</h3> <p>We've touched on how to wrangle the elements on the page. What about adding an event to it, such as doing something when a user interacts with it?</p> <p>It's actually pretty simple. 
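</p><p>One quick aside before the event examples: the dot-separated traversal that <code>get()</code> performs in the previous section is easy to model in plain JavaScript. This sketch illustrates the lookup logic only — Alloy additionally wraps any DOM value it finds in a Node or NodeList:</p>

```javascript
// Conceptual sketch of a dot-separated property lookup, like
// get('parentNode.parentNode.parentNode'). Illustration only.
function getPath(obj, path) {
	var segments = path.split('.');
	var current = obj;

	for (var i = 0; i < segments.length; i++) {
		if (current == null) {
			return null; // the chain broke, e.g. a missing parentNode
		}

		current = current[segments[i]];
	}

	return current;
}

// A tiny stand-in for a DOM fragment three levels deep:
var grandparent = { id: 'grandpa' };
var parent = { id: 'parent', parentNode: grandparent };
var child = { id: 'child', parentNode: parent };

console.log(getPath(child, 'parentNode.parentNode.id')); // 'grandpa'
```

<p>Each dot walks one property deeper, and a broken chain stops the walk instead of throwing — which is why the chained <code>get()</code> calls are so convenient.</p><p>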
Every Node and NodeList has a method called <code>on()</code> that lets you, appropriately enough, do something "on" that event.</p> <p>Let's say I want to alert "Hello" when a user clicks on the nodeObject:</p> <pre> nodeObject.on('click', function(event){ <span class="Apple-tab-span" style="white-space:pre"> </span>alert('hello'); }); </pre> <p>Or let's say we want to add a border when a user first moves their mouse over an item:</p> <pre> nodeObject.on('mouseenter', function(event){ <span class="Apple-tab-span" style="white-space:pre"> </span>this.setStyle('border', '5px solid #555'); }); </pre> <p>Notice how the "this" object is used without wrapping it? It's automatically wrapped for you to be a Node object, which is incredibly convenient.</p> <p>But what if, on the off chance, you *must* get the original DOM object? You can do <code>nodeObject.getDOM()</code> and it will return you the underlying DOM element.</p> <p>This also applies to NodeList objects as well.</p> <p>So if you do <code>A.all('div').getDOM()</code> it will return you an array of plain DOM elements.</p> <p>What about if you need to remove an event?</p> <p>Let's say you do this:</p> <pre> nodeObject.on('click', myFunc);<br /></pre> <p>you can detach the event by simply doing:</p> <pre> nodeObject.detach('click', myFunc);<br /></pre> <p>or you could even just remove all click listeners by not passing a second argument, like this:</p> <pre> nodeObject.detach('click');<br /></pre> <p>What if you want to do some work on document ready?</p> <p>You can do:</p> <pre> A.on('domready', function(event){ <span class="Apple-tab-span" style="white-space:pre"> </span>// More work here }); </pre> <p>Now, there are times when you want to both load some modules and fire your callback on DOM ready, so here is how you would do that in Alloy:</p> <pre> AUI().ready('event', 'node', function(A){ <span class="Apple-tab-span" style="white-space:pre"> </span>// This code will fire on DOM ready <span class="Apple-tab-span"
style="white-space:pre"> </span>// and when these modules are ready }); </pre> <p>Here's an interesting example. Let's say you want to listen on the node for only a specific key combination. For instance, you want to only fire the event when the user presses the escape key, but only when holding down the shift key.</p> <p>Here's how you would listen to it:</p> <pre> nodeObject.on('key', function(event){ <span class="Apple-tab-span" style="white-space:pre"> </span>// escape + shift has been pressed on this node }, 'down:27+shift'); </pre> <p>Now here's another use case some might be curious about. What if you want to prevent the default behavior of an event, for instance, if you want to stop a link's href from being followed?</p> <pre> nodeObject.on('click', function(event){ <span class="Apple-tab-span" style="white-space:pre"> </span>event.preventDefault(); }); </pre> <p>In Javascript, events bubble, which means that by default, an event on one element also happens on every element that contains it, so if you click on a link, it will also fire an event on the body element as well.</p> <p>You can stop your event from bubbling, though; here's how:</p> <pre> nodeObject.on('click', function(event){ <span class="Apple-tab-span" style="white-space:pre"> </span>event.stopPropagation(); }); <br type="_moz" /></pre> <p>You might notice that these are the same methods that exist in the W3C specification, but they're normalized to work the same in all browsers.</p> <p>But there's also a shortcut if you want to just preventDefault and stopPropagation, which is like so:</p> <pre> nodeObject.on('click', function(event){ <span class="Apple-tab-span" style="white-space:pre"> </span>event.halt(); }); <br type="_moz" /></pre> <h3>Event delegation</h3> <p>Speaking of event bubbling, built into Alloy is event delegation.
Event delegation is a technique that lets you attach one event to a container but have it fire only on the child elements.</p> <p>Imagine you have a list, and a lot of LI elements inside of it. You could add a new event listener for each element, but as your list grows, the number of listeners will also grow, as well as memory consumption.</p> <p>And if you add elements via Ajax, it's a pain to have to reattach events after every update as well.</p> <p>So let's go through an example. Say that we have this HTML:</p> <pre> <ul id="myList"><li>Test</li></ul><br /></pre> <p>Here's how we would use delegation:</p> <pre> var myList = A.one('#myList');<br /></pre> <pre> myList.delegate('click', function(event){ <span class="Apple-tab-span" style="white-space:pre"> </span>alert(event.currentTarget.html()); }, 'li'); <br type="_moz" /></pre> <p>Notice a few things. One, we're calling a method called delegate, but it's very similar to "on". In fact, the only difference from the "on" method is that the third parameter is a selector that we will test to make sure the element matches before firing the function.</p> <p>But also notice that we're referencing event.currentTarget. This is a property that will always point to the element that you are currently listening for, even inside of the "on" method, so I recommend using it.</p> <p>But now that we've added our event, if you click on the list item, it will alert the contents of that item.
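</p><p>The matching logic that delegation performs can be sketched in a few lines of plain JavaScript. This is a conceptual model only — the "selector" is a plain predicate function instead of a real CSS selector, and the elements are stand-in objects rather than live DOM nodes:</p>

```javascript
// Conceptual sketch of how delegate() decides whether to fire.
// Starting at the element the event began on, walk up toward the
// container and fire the handler on the first ancestor that matches.
function delegate(container, matches, handler) {
	return function(target) { // target: the element the event started on
		var node = target;

		while (node && node !== container) {
			if (matches(node)) {
				handler(node); // node plays the role of event.currentTarget
				return;
			}

			node = node.parentNode;
		}
	};
}

// Stand-ins for a UL container with a single LI ('Test') inside it:
var list = { tagName: 'UL' };
var item = { tagName: 'LI', parentNode: list, text: 'Test' };

var onListClick = delegate(list, function(node) {
	return node.tagName === 'LI';
}, function(node) {
	console.log(node.text);
});

onListClick(item); // fires the handler, because the click started on an LI
```

<p>Because the single listener lives on the container and the match happens at event time, elements added later are covered automatically.</p><p>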
Now let's try this:</p> <pre> myList.append('<li>Test 2</li>'); </pre> <p>It will add another list item, and when you click on this new item, it will alert "Test 2", without having to reattach the event.</p> <h3>Conclusion</h3> <p>Hopefully this helps show you some of the useful ways you can work with elements on your page, and helps get you up to speed.</p> Nate Cavanaugh 2010-03-15T06:02:33Z AlloyUI Nate Cavanaugh 2010-03-15T05:25:10Z 2010-03-15T05:14:27Z <p> Hi all, there has been quite a long delay since the last blog post. For those of you who weren't able to check out <a href="">the webinar</a>, the reason behind the long silence in my blog postings has been the work on the AlloyUI framework.</p><p>Over the past 6 months, <a href="">Eduardo Lundgren</a> and I have been furiously working away building a unified UI library on top of the revolutionary YUI3.</p><p>So today, I wanted to answer a few questions about it (in case you have yet to check out the webinar), and to also prep for the coming weeks and blog posts.</p><p>The simplest way to describe Alloy is that it's a library of tools, a collection of frameworks, put together and built into one unit. We're taking years of building UIs and the problems we've kept solving and boiling that knowledge down, and releasing it as a separate project.</p><p>One of the most common questions I get is: "So what about jQuery and jQuery UI?" You will still be able to use jQuery and jQuery UI (or any javascript or front end library of your choice) in Liferay.</p><p>We have however stopped using it for our portlets and plugins, and are instead building everything on top of Alloy.</p><p>Liferay is a platform, first and foremost. As such, we want people building on that platform to use the tools they feel most comfortable building with, be it Icefaces, Vaadin, jQuery, dojo, etc.</p><p>Much in the same way that in OSX, developers can use other windowing and widget toolkits, such as Swing, to build applications.
However, if you want to really leverage the power of the operating system, and really want to have the nicest looking applications, you're going to use Cocoa.</p><p>That's what we want to accomplish with Alloy.</p><p>Another question I've gotten is why YUI3? There are numerous other javascript libraries on the market, why build on top of one that is a relative newcomer?</p><p>One of the questions we definitely asked ourselves was how many existing components and widgets does the library have? But what was a much more important factor was how quickly could quality production level widgets be built?</p><p>How clear was the thinking behind the widgeting system? How much documentation was there for it?</p><p>These were questions that YUI3 had great answers for. Answers so great that Eduardo and I were able to build roughly 60 utilities and widgets in about 6 months' time.</p><p>We also looked at the team behind the library, and the types of problems they were solving. Instead of being run by an ivory tower, or an unmanageably large committee, it's developed by a productive "<a href="">pizza-sized</a>" team that's renowned for leading front end innovation on the web.</p><p>They're truly solving problems ranging from the small to the large, which maps very closely to how Liferay works. We wanted a system that could be used on the small scale (let's say you only want to sprinkle very simple interaction into a website for mainly displaying content), or on the large scale (as in an application interface).</p><p>YUI3 is designed to be stretched to those different scenarios seamlessly.</p><p>The other question I've gotten is "can it be used outside of Liferay?" and the answer is a resounding yes.
We have actually developed Alloy in an entirely different repository of our SVN, and maintain it as a third party project that Liferay consumes.</p><p>We're doing this because we feel that the patterns we're solving with this aren't specific to Liferay, but are common across the web, and are useful for multiple people.</p><p>But this also allows non-Liferay developers to get involved and contribute ideas and solutions so that the pool of ideas doesn't stagnate, but is continually refreshed with fresh input.</p><p>However, as great as all of this sounds, there is one area where people may be concerned, which is documentation.</p><p>We are currently working on generated API documentation (another benefit of YUI is that they release many of their build tools, one of them being their documentation builder), and we're aiming to have those done this week.</p><p>We also have quite a few demos available in the downloadable zip that contains examples along with code on the demo page to get them to run, but in all honesty, we could do a lot to improve them (and are in fact working on them).</p><p>There's also the YUI documentation, all of which applies to Alloy (since Alloy is built on top of YUI, Alloy only adds to YUI and doesn't take anything away).</p><p>However, that can be a lot to wade through, so over the next two weeks, I'm going to post a series of 10 blog posts going over how to do tasks you're familiar with, as well as some that are brand new.</p><p>So here's what we'll learn:</p><p><span class="Apple-tab-span" style="white-space:pre"> </span>1. Working with elements and events</p><p><span class="Apple-tab-span" style="white-space:pre"> </span>2. Ajax</p><p><span class="Apple-tab-span" style="white-space:pre"> </span>3. Plugins</p><p><span class="Apple-tab-span" style="white-space:pre"> </span>4. Widgets</p><p><span class="Apple-tab-span" style="white-space:pre"> </span>5. Utilities</p><p><span class="Apple-tab-span" style="white-space:pre"> </span>6.
Animation, Drag & Drop</p><p><span class="Apple-tab-span" style="white-space:pre"> </span>7. Layouts & Forms in CSS</p><p><span class="Apple-tab-span" style="white-space:pre"> </span>8. Using Alloy taglibs in JSPs</p><p><span class="Apple-tab-span" style="white-space:pre"> </span>9. Advanced beauties (we'll look at DelayedTask, AOP, OverlayManager, Queue & AsyncQueue, and DataSet)</p><p><span class="Apple-tab-span" style="white-space:pre"> </span>10. Odds and ends (a few like Char Counter, History, Cache, Profiler, and DataType parsing)</p><p>After all 10 posts, I'm hoping you'll have a more thorough understanding of Alloy, and hopefully some ideas on how it can help you.</p><p>These are going to be written from a very technical point of view, so if all of my talks so far have been too light on exact details, be prepared for actual code to make its way in.</p><p>Looking forward to seeing you here!</p> Nate Cavanaugh 2010-03-15T05:14:27Z A new Liferay Wallpaper Nate Cavanaugh 2009-08-20T04:58:06Z 2009-08-19T15:56:44Z <p>I'm sorry it's been so woefully long between updates here. I've been furiously working away on something that I'll have more details about in the next couple of weeks, but I think you will find that interesting.</p><p>Today's post, however, is of a different bent. You may remember <a href=";jsessionid=E2AB43AA12BEAC4E3EA0EAA501FFF700">the wallpaper I created</a> back in December. It seems like it was pretty popular, at least internally (they're probably being nice ;), so I decided to create another one last night.</p><p>I was waiting for some code to compile and hopped into my Google Reader to check some feeds and try to clear down the unread feeds. Google stopped counting, but I'm guessing it's <strike>3-4000 unread items</strike> (try 5,552 by current count...).<br />As I was cruising through my design feeds, inspiration struck, and I started sketching out the idea.
It's so rare that I get to create artwork anymore that I couldn't help thinking about how I wanted it to look.</p><p>So last night I went in and got it done, and ended up staying up until 2.30am, which reminds me a lot of how college used to be :)</p><p>For lack of a better name, I'm just calling it Liferay of Life. No reason, just pure exhaustion at this point, but here it is:</p><p><a target="_blank" href=""><img alt="Liferay of Life wallpaper" src="" /></a></p><p>Clicking the image will take you to the Large WideScreen version, but if you want more options, you can click here:</p><p><a href="">Liferay of Life Wallpapers</a></p><p>This link includes a wide range of resolutions for different ratios and even one formatted for an iPhone. I hope you enjoy it :)</p> Nate Cavanaugh 2009-08-19T15:56:44Z IE8 is out. What does it mean for your theme? Nate Cavanaugh 2009-03-19T17:42:44Z 2009-03-19T17:19:50Z <p>If you're as cutting edge as our own <a href="">Jonathan Neal</a>, you might have downloaded the IE8 final that was released this morning at 9am PST.</p><p>IE8 is a HUGE improvement towards standards, and while it still has not caught up with Firefox, Safari, Chrome or Opera, when it comes to IE, we will take what we can get :)</p><p>However, since it's more accurate with the standards, you may have noticed that your theme could be off in IE8 compared to IE7.<br />If this is the case, there's a good chance the culprit is IE hacks that existed in your theme to fix problems that were in IE7 and below. A good example of this would be using the .ie selector in your CSS.</p><p>The problem with applying a hack to ALL IEs via .ie is that when future versions, such as IE8, fix the problems you were hacking around, those hacks will break your theme in the new IE.</p><p>So what's the fix?
It turns out there are 2 good fixes, one for the short term and one for the long term.</p><p>The long term fix is, of course, to change your selectors from being broad to being specific.<br />For instance, if you are using a selector like this:<br />.ie #wrapper {<br />}</p><p>And that selector is causing issues, you should change it to be:</p><p>.ie7 #wrapper, .ie6 #wrapper {<br />}</p><p>This will make sure that the selector only applies to specific versions of IE that have the broken functionality you're addressing.</p><p>But this might take some time to get around to, and we all are a bit busy, so there is a quick short term fix that will solve the issues.</p><p>In the <head> of your theme, you can add this:<br /><meta http-equiv="X-UA-Compatible" content="IE=EmulateIE7" /><br /><br />That is a new tag Microsoft has added to IE8 so that you can tell IE8 what rendering engine to use. In this case we're telling it to use the IE7 rendering engine instead of the more standards compliant IE8 engine.</p> Nate Cavanaugh 2009-03-19T17:19:50Z Changes in Liferay's front end Nate Cavanaugh 2009-02-19T02:42:47Z 2009-02-19T01:54:05Z <p>If you follow our trunk as ardently as I do (and really, who doesn't) you might have seen some front end changes, and there might have been some theme changes that are a little different from what you were expecting.</p><p>There are some big happenings going on in the front end of Liferay, and I'll have some more details for you shortly, but I just made a commit that I wanted to talk about that I think will make developers quite happy.</p><p>I added a file called <strong>deprecated.js</strong> to portal.properties.
This file is a place where we will deprecate our changing Javascript so that the upgrade path between major versions is a lot smoother.<br />This file will be updated any time something from the Javascript API has been changed from the previous version.</p><p>Here's how it will work:</p><p>As soon as a piece of script is changed from a previous version, such as a variable being removed, certain options changing, etc, we will place code inside of deprecated.js that will make the Javascript work for the next version.</p><p>So assuming we move LayoutConfiguration to be accessed from Liferay.LayoutConfiguration, we would add code inside of deprecated.js that will keep the backwards compatibility for 1 major version.</p><p>So in 5.3, the deprecated.js might look like this:</p><p>//To be removed in 5.4<br /><br />// LPS-1234<br />LayoutConfiguration = Liferay.LayoutConfiguration;</p><p>// LPS-4567<br />themeDisplay = Liferay.ThemeDisplay;</p><p> </p><p>However, in Liferay 5.4, this entire block would be commented out.</p><p>In Liferay 5.5, this block would be removed completely.</p><p>So in any version from 5.3 on, you would be able to see what is scheduled to be removed and what, if anything, was removed in the previous version.</p><p>There are a couple of reasons for this change, and I wanted to talk about them.</p><p>One of the reasons we introduced <a href="">Liferay Enterprise Edition</a> was because people, especially enterprises, need a stable and robust way to get bug fixes, security patches and performance enhancements, without having to upgrade an entire revision number.</p><p>In that same vein, we're adding the deprecated.js, not just as a service to EE customers, but as part of the general process.<br />So we have spent some time now trying to come up with creative ways to allow innovation, while making sure that we make upgrades a lot smoother.<br />I'm not trying to portray this as the end goal in and of itself, but more or less a
piece of the process and a goal we're striving for.</p><p>We still have a lot of areas where we can improve, but this is more or less to let you know we are, and the steps we're taking in getting there.</p><p>What are some other ideas? <a href="">Bryan Cheung</a> and I were talking earlier today about communicating to folks that don't have the time or mental bandwidth to scour through commit logs and LPS tickets. What we were wondering is, similar to <a href="">Liferay's twitter feed</a>, what if we had a Liferay Developer twitter account? Is that something anyone would be interested in?<br />Basically, it would contain short snippets and bursts on stuff we're working on so you can keep a loose idea of the happenings going on with the development team.</p><p>Would this be helpful to anyone?</p><p>Also, in the lead up to Liferay 5.3, there will be some very big UI changes coming, and I will blog in more detail about that, but the other plan is that as everything progresses, I will be blogging about the individual changes and milestones.</p><p>Another option that has completely changed my life and sanity is that <a href="">Mike Young</a> recently installed <a href="">Fisheye</a> to watch our commits. I think the feature I have most used is the RSS functionality. I subscribe to commits by directory and also by committer. For instance, I am usually interested in a couple of commits. Anything committed by my friends <a href="">Eduardo Lundgren</a> or <a href="">Peter Shin</a>, anything committed to the <a href="">portal-web</a> directory and anything committed to the <a href="">themes</a> or to the <a href="">js</a> directories (I know there is some duplication there, but I'd rather have a bit more info).</p><p>If you want to stay on top of what people are doing, it's an awesome way to keep your ear to the ground.</p><p>So there are some ideas.
I would love to have feedback, especially on ways to better communicate with you guys that extend beyond our normal routes.</p> Nate Cavanaugh 2009-02-19T01:54:05Z Even MORE Performance fixes Nate Cavanaugh 2009-01-13T04:36:45Z 2009-01-13T04:17:14Z <p>It's hard to contain my excitement about what I am going to share.<br /><br /><a href="">Brian Chan</a> (along with <a href="">Eduardo Lundgren</a>) has just recently committed a change that I (along with any other person who has to deal with themes) have been pestering him about for quite some time, and it is my great pleasure to finally be able to tell you guys he has implemented it.<br /><br />What is it, and why should you care?<br /><br />One of the less friendly aspects of how themes and plugins work in Liferay is the packing of CSS and Javascript. From 4.3.x and later, there have been some files that have caused an unending amount of confusion and extra work for people.<br />I'm talking specifically about *_unpacked.* and *_packed.*<br /><br />Perhaps you've looked at your deployed theme and seen an everything_packed.css and everything_unpacked.css, as well as files like packed.js, unpacked.js, everything_packed.js, etc.<br /><br />These files have been created at build time and they are the optimized versions of those files so that you don't make as many http requests and don't download unnecessary whitespace and characters.<br /><br />Well, the change that <a href="">Brian</a> committed now completely removes those files and they are handled for you automatically.<br /><br />Think about something that happened to my good friend <a href="">Ray Auge</a> not too long ago while we were doing some work for one of our critically acclaimed clients. I can't say who, but practically everyone has heard of them. Anyways, they have a theme hot-deployed and <a href="">Ray</a> made some changes to the theme and IMed me one weekend wondering why the changes weren't being picked up.
He kept changing the CSS in custom.css and nothing was happening.<br />Only because I had been touched by this little bug had I known what was causing it.<br />But anyone that knows <a href="">Ray</a> knows that he's one of the smartest guys in Liferay. There's no reason in the world why he should have been banging his head against this, and I wish I could say it was a lone incident.<br /><br />Sometimes we have to do maintenance on a theme or plugin someone else has developed.<br /><br />So how is this fixed now?<br /><br /><a href="">Brian Chan</a> has made it so now those files are automatically created when the server starts up. How would Ray resolve this if he were to do this now?<br />All <a href="">Ray</a> would have to do is restart the server, or, if that was too much and he had the original theme WAR file, he could just redeploy the theme.<br /><br />Either of those is far preferable to having to rebuild the theme from the source, and far less confusing.<br /><br /><a href="">Brian</a> and <a href="">Eduardo</a> have been really hard at work doing this and <a href="">even more performance fixes</a> over the past couple of weeks and I've been amazed at how snappy our website has become.<br /><br />With <a href="">Liferay Enterprise Edition</a>, you can get long term bug fixes, security patches, and performance improvements like this in a safe, reliable manner, so I would highly recommend it.<br /><br />And when you get a chance, shoot by <a href="">Brian</a> and <a href="">Eduardo's</a> pages and thank them for getting this in.</p> Nate Cavanaugh 2009-01-13T04:17:14Z Liferay wallpaper Nate Cavanaugh 2008-12-29T03:14:03Z 2008-12-29T02:34:20Z <p>There are a ton of amazingly creative people here at Liferay, and I'm always stunned with the stuff that people create.</p><p>Last night I was doing a ton of coding and some inspiration struck to make some Liferay "fan art".
I don't get the time to do as much artwork as I used to, so it was a nice treat for me.</p><p>So, it totally is not "corporate" and doesn't really follow the Liferay branding guidelines, but like I said, it's "fan art" and normal rules don't apply. I wanted to make something a bit edgier than we're used to having.</p><p>I've created a package with common wallpaper sizes that you can <a href="">download here</a>.</p><p>Here is a brief preview:</p><p><a href=""><img alt="" src="" /></a></p> Nate Cavanaugh 2008-12-29T02:34:20Z Is this site running Liferay? Nate Cavanaugh- 2008-11-21T03:02:28Z 2008-11-21T02:42:18Z <p>It's been a while, but trust me, we've been working like crazy to get 5.2 out for your using pleasure.</p> <p>But, I arrive with a bit of a gift, especially for the marketing folks (or anyone curious). Have you ever wondered if a website was running Liferay?<br /> <img alt="" style="width: 284px; height: 198px;" src="" /></p> <p>Well guess no more. I've written a Greasemonkey script that will tell you if a website is running Liferay or not. What's Greasemonkey you ask?<br /> Only about the absolute coolest extension for Firefox known to man (yes it requires Firefox).</p> <p>So what do you do?<br /> <br /> If you have Greasemonkey installed, you should just skip to Step 3.</p> <p>Step 1: Go to <a href="" target="_blank"></a></p> <p>Step 2: Click the Add to Firefox button, and install the plugin. 
It will notify you to restart Firefox.<br /> <img alt="" src="" /></p> <p>Step 3: Once you've restarted, go here: <a href="" target="_blank"></a></p> <p>Step 4: Press the Install button at the upper right<br /> <img alt="" src="" /></p> <p>Step 5: Agree to the install<br /> <img alt="" src="" /><br /> </p> <p>Step 6: Visit a website (like <a target="_blank" href=""></a>)</p> <p> </p> <p>Step 7: Enjoy the info:<br /> <img alt="" src="" /></p> <p> </p> <p>Enjoi!</p> Nate Cavanaugh 2008-11-21T02:42:18Z The new Liferay.com Nate Cavanaugh 2008-10-09T05:32:37Z 2008-10-09T05:21:01Z <p>Hi all, it's been a while since I've blogged, and there have been good reasons. Liferay is taking off and we are growing like crazy, and as such, I've gotten less time to blog.<br />But there has been a special occasion for this blog, and that is the official release of Liferay.com, and I wanted to talk a bit about it.<br /><br />So, back in May we started talking about our current website internally amongst a few of us planning it out. We have long known internally that there were some issues with our different sites that no matter how much <a href="">Bmiller</a> would polish it up, they wouldn't go away.
And this wasn't Bmiller's fault; he always gave us more than we asked for, but there were always limitations of time and resources that would keep us from making deeper changes.<br /><br />So back in May, there wasn't any internal pressure to get a new website out, and we were content with the branding for the site, and instead of waiting until we HAD to redesign the site, <a href="">Alice</a>, <a href="">Bmiller</a> and I got on a conference call together on a cloudy May day and hashed out how we would design the site.<br />One thing we didn't like about the old site was that it would hit you with a lot of information when you first visited, and a common complaint was that people would visit and not be quite sure what our company provided.<br /><br />Part of this was due to information overload, part was due to just general information architecture issues.<br /><br />So out of that first day came our initial wireframe for the front page.<br /><br /><a href=""><img width="500" height="375" src="" alt="" /></a><br /><br />So, in short order, Bmiller delivered this to my inbox to show me the progress:<br /><br /><a href=""><img width="500" height="591" src="" alt="" /></a><br /><br />A little after this, <a href="">Bcheung</a> and <a href="">Cecilia</a> got involved and we started getting all sorts of pressure to make all kinds of changes.<br /><br />So one little tangent here that I'd like to touch on: conflict is great.<br />In a lot of companies, organizations and even personal relationships, conflict is looked at as a bad thing, and avoided quite often. Let me just say Liferay is not that kind of company. Conflict can be constructive or destructive, depending on how it's handled.<br /><br />And for this site, there was a lot of (good) conflict.
There were literally days where we would argue for a couple of hours over the tiniest of details, each of us representing a different opinion, each of us having our own unique perspective.<br />And what came out of it was a truly superior product. I know for a fact that I argued for different things that, had they been unquestioned, would have kept the site from being as amazing as it now looks.<br /><br />Or as Bchan told me during my interview almost 2 years ago: "The best idea wins."<br /><br /><a href="">Bcheung</a> and <a href="">Bchan</a>.<br /><br /><a href="">Bmiller</a>.<br /><br /><a href="">Alice</a>.<br /><br /><a href="">Ryan Park</a>.<br /><br />And of course, there are so many more people to thank, and so many contributed ideas, opinions and constructive criticisms that truly brought out an amazing site.<br />If I'm forgetting to give proper credit, please let me know.<br /><br />All I can say is awesome job everybody, this truly is the best version yet, and I am blown away with how great it is :)</p> Nate Cavanaugh 2008-10-09T05:21:01Z Oh yeah, about the auto-save Nate Cavanaugh 2008-05-23T21:57:40Z 2008-05-23T21:48:32Z <p>I just wanted to drop a couple of notes about the auto-save that I didn't mention before....</p><p><b>1. I didn't do the bulk of the work</b><br />That honor goes to <a href="">Jonathan Neal</a>. <a href="">Bchan</a> did a lot and I made a couple of JavaScript changes and tweaks, but the workload credit goes to Jon for actually laying the foundation and getting it done. Go hit up his <a href="">wall</a> and tell him great job :)</p><p><b>2. There was a race condition</b><br />There was a bug where you had to time your save JUST so, otherwise it could stay in the state of perpetual draft. We've fixed it in trunk, and will be pushing it live here to the site very soon.</p><p><b>3. We changed the interval time</b><br />Instead of 10 seconds, it now saves every 30.
This seems a bit more realistic to me, but we're willing to hear you guys out if someone HAS to have it saving every 10 (and can make a good general case for it).</p><p><b>4. It now is smarter</b><br />When you're editing a draft, it now saves only if the content or title has actually been edited. Otherwise it will patiently wait until you have edited.</p><p>It's stuff like this that really just makes my day, getting to work with so many smart people who can develop rapidly and get stuff out the door.<br /> </p> Nate Cavanaugh 2008-05-23T21:48:32Z Autosave comes to blogs Nate Cavanaugh 2008-05-21T00:20:17Z 2008-05-20T23:14:53Z <p>This is a feature I know many of us have been desperately wanting for quite a while. In fact, it's something that many of us miss from other blogging apps, and we now have it.</p> <p>So, to kind of show off what we have, I'll include screenshots from this very blogging session. <br /> <img src="" alt="Isn't this awesome?!" /></p> <p>You know what else is great about it? Let's say you go back to your draft, and you decide the title isn't right; it automatically updates the friendly URL for you as well.</p> <p>Is that more bang for your buck or what?</p> <p>Now what happens if you go to see your blogs, how do you know which ones are your drafts? Wonder no longer. Take a look:</p> <p><img src="" alt="Man, Nate is one snazzy UI designer ;)" /></p> <p>See how it has a grey box around it with the dark grey text and the icon? That's how you know it's not published yet.</p> <p>We're contemplating a few ways to mark it as a draft manually, but we shall see.</p> <p>Another question that might come up: what happens if you're editing an already published blog entry, does it autosave that? <b>No.</b></p> <p>We have so much awesome stuff coming down the pipe, but I don't want to mention it just yet.
But let's just say that I think it will just solidify even more why we're the number 1 open source portal.</p> <p> </p> Nate Cavanaugh 2008-05-20T23:14:53Z How can jQuery help me today? pt. 1 Nate Cavanaugh 2008-04-24T23:00:19Z 2008-04-24T22:53:33Z <p>So, when we adopted jQuery, I think I should have done more to evangelize it within Liferay. For most web developers in the world, it's really taken off in popularity because the concept is tied to an existing development paradigm, e.g. CSS.<br /> <br /> So first things first, I'll do a quick lay of the land with jQuery that will help you get up to speed, and then I'll launch into examples that can show you some useful tips that can help you get stuff done today.<br /> <br /> First, the concept behind jQuery is that you operate on the DOM (the structure that represents HTML elements on the page).<br /> In normal Javascript development you do everything on the DOM elements directly.<br /> For example:<br /> <code>document.getElementById('banner').style.display = 'none';<br /> document.getElementsByTagName('body')[0].className += ' classic';<br /></code> <br /> <br /> So the purpose behind jQuery is to "query" the DOM, and return you back a set of elements (even if it's only one element) and operate on that collection.<br /> <br /> You can think of every jQuery object as a bucket of DOM elements, and every jQuery method you do on that object is automatically done on every element in the bucket.<br /> <br /> One thing commonly done in web dev is to get all elements that match some criteria (querying the DOM), and often, you want to grab everything with a certain class name.<br /> <br /> In CSS you would do it like so:<br /> <code>.liferay-element {}<br /></code> <br /> In normal JS you would do it like this:<br /> <br /> <code>var allElements = document.getElementsByTagName('*');<br /> var matchedElements = [];<br /> for (var i=0; i < allElements.length; i++) {<br /> var el = allElements[i];<br /> if
(el.className.indexOf('liferay-element') > -1) {<br /> matchedElements.push(el);<br /> }<br /> };<br /></code> <br /> And then you would have your collection of matched elements.<br /> <br /> So how would you do this in jQuery?<br /> <br /> <code>var matchedElements = jQuery('.liferay-element');<br /></code> <br /> Looks pretty familiar, right?<br /> <br /> So now that we have matched elements, what could we do with this? A whole ton of good stuff.<br /> <br /> Let's say we wanted to fire an alert box when we click on each of those elements, how would we do it?<br /> <br /> <code>matchedElements.click(function(){<br /> alert('you clicked me!');<br /> })<br /></code> <br /> Or what about adding another class to each element?<br /> <br /><code> matchedElements.addClass('new-class');<br /></code> <br /> Or wait, let's say that we have a collection, and we want to make all of them have a red border, BUT, if the element is a link, we want a blue border?<br /> Easy-peasy-lemon-squeezy.<br /> <br /> <code>matchedElements.css('border', '1px solid #f00');<br /> matchedElements.filter('a').css('border-color', '#00c');<br /></code> <br /> Notice the filter portion?
The filter method reduces a current collection down to a smaller set based on a jQuery selector (or other things, but you can look at the documentation [] for more info).<br /> <br /> So let's do stuff.<br /> <br /> One common thing that we've all done numerous times is use a checkbox to select all checkboxes in a set, sort of a select/deselect all option.<br /> <br /> So let's assume we have a group of checkboxes that don't have a classname, don't have an id, and don't have the same name.<br /> But we know the name attribute all starts with the same thing, in this case:<br /> <b>"<portlet:namespace />check"<br /></b> <br /> So, we have our checkbox that acts as the trigger, but it doesn't start with the same name.<br /> <br /> Our example HTML would be this:<br /> <br /><code> <input type="checkbox" id="<portlet:namespace />trigger" /><br /> <input type="checkbox" name="<portlet:namespace />check1" /><br /> <input type="checkbox" name="<portlet:namespace />check2" /><br /> <input type="checkbox" name="<portlet:namespace />check3" /><br /> <input type="checkbox" name="<portlet:namespace />check4" /><br /> <br /></code> Here is how we would toggle all of the checkboxes in jQuery:<br /> <br /> <code>var trigger = jQuery('#<portlet:namespace />trigger');<br /> trigger.click(<br /> function(event){<br /> jQuery('[@name^=<portlet:namespace />check]').attr('checked', this.checked);<br /> }<br /> );<br /></code> <br /> <br /> So let's go over that, line by line, so we know what we're doing:<br /> <br /> <code>var trigger = jQuery('#<portlet:namespace />trigger');<br /></code> <br /> The # sign in CSS signifies an ID, so in this case, we're getting an element by its ID.<br /> <br /> <code>trigger.click(<br /></code> <br /> One thing this does is attach our function to run when the element is clicked. 2, however, is that the scope of the function is changed a bit so that "this" points to the element that you're working with.
So in this case, this points to the DOM element of our trigger.<br /> <br /> <code>function(event){<br /></code> As mentioned above, here is the start of our function, with the event parameter.<br /> <br /> And here is where the magic happens:<br /> <br /> <code>jQuery('[@name^=<portlet:namespace />check]').attr('checked', this.checked);<br /></code> <br /> That's kinda nuts right?<br /> <br /> Well, jQuery lets you query elements based on their attributes, and you can also do minor regular expressions in it. CSS also allows you to do this (in every browser, of course, except IE 6).<br /> It's not a direct port of CSS in this case, but of XPath, in that you have to use the @ sign. The newer versions of jQuery don't require the @ sign, but in Liferay pre-5.1, we have to use the @ sign.<br /> <br /> So I'll break this line up:<br /> <br /> <code>jQuery('[@name^=<portlet:namespace />check]')<br /></code> <br /> Find every element whose name attribute begins with <portlet:namespace />check. The "begins with" is done by this part:<br /> <code>^=<br /></code> If we wanted to say every element whose name ENDS with, we would do:<br /> <code>$=<br /></code> <br /> <code>.attr('checked', this.checked)<br /></code> This sets the checked attribute of every element we found to whatever the checked state is of the current element.<br /> So if the current element's checked attribute is set to false (unchecked), all these elements will be unchecked.
If it is checked, so will all of those elements.<br /> <br /> But what if we wanted to do an ajax call on a page that updated a div with the results, and show a loading animation so the user isn't wondering what's going on?<br /> Well, first, we need to make sure we have a URL that returns the HTML we need.<br /> Secondly, let's assume the div we want to update has an id of portletBox, and the link we're clicking points to the URL of that resource, and has an ID of linkTrigger.<br /> <br /> Here's our HTML:<br /> <br /><code> <div id="portletBox"><br /> Existing Text is here....<br /> </div><br /> <br /> <a href="" id="linkTrigger">Click me to update our text</a>.<br /></code> <br /> Here's how we'd do it:<br /> <br /> <code>var linkTrigger = jQuery('#linkTrigger');<br /> var updateDiv = jQuery('#portletBox');<br /> <br /> linkTrigger.click(<br /> function(event) {<br /> updateDiv.html('<div class="loading-animation"></div>').load(this.href);<br /> return false;<br /> }<br /> );<br /></code> <br /> Let's go down a bit at a time:<br /> <br /> This of course grabs our elements to work with.<br /> <br /> <code>var linkTrigger = jQuery('#linkTrigger');<br /> var updateDiv = jQuery('#portletBox');<br /></code> <br /> Now we'll add a click handler:<br /> <code>linkTrigger.click(<br /> function(event){}<br /></code> <br /> This is where we do our work:<br /> <code>updateDiv.html('<div class="loading-animation"></div>').load(this.href);<br /> return false;<br /></code> <br /> Let's analyze this a tiny bit. When we click the link, we're first grabbing updateDiv and replacing all of its HTML with a div that handles the loading animation.<br /> Right on the end of it, we're doing .load(this.href), which performs an AJAX call and updates the jQuery elements with the results of the AJAX call.<br /> <br /> Lastly, we have <b>"return false;"</b>. What does this do exactly?<br /> Well, in every browser event, there is a default action.
In the case of a link, the browser's default action is to follow the link. However, in our case, we don't want to follow that link, but instead stay on the current page.<br /> When you return false, it prevents the default action from ever taking place.<br /> <br /> This also works with every event; for instance, with forms, if you want to do some stuff when you submit a form, but want to prevent the actual form from submitting, you would return false.<br /> <br /> So, that about does it for right now. I'm going to think up some more (useful) examples of things jQuery can do to make your development life a lot easier.<br /> <br /> Is there anything you'd like me to cover, for instance, doing animations, or manipulating HTML elements, etc?</p> Nate Cavanaugh 2008-04-24T22:53:33Z China Nate Cavanaugh 2008-04-10T18:11:20Z 2008-04-10T18:11:20Z <p>Hey there, O faithful reader...<br /> Man, what a long time since actually posting. Since I last posted, quite a lot has happened, and included in that was the fact that my wife and I visited China with Brian and Caris, Bryan Cheung, Alice, and Dave, to go spend time with Mark, Ivan, and Shepherd.<br /> <br /> What an experience! There are a few times in everyone's life when travel truly changes you in ways you were completely not expecting.<br /> The China trip was one of those times.<br /> <br /> To begin with, nothing really changed my view of flying during this trip. Except maybe that I need a height-ectomy. Flights both ways were pretty much par for the course of my normal flight experiences: painful.<br /> <br /> But the actual trip more than made up for it. I will also say that the pleasure of this trip is completely due to the grace of the people we were visiting with.
Had Jessica and I gone by ourselves, we no doubt would have come back with a vastly different experience.<br /> We were made to feel completely at home, and even though our Mandarin is limited to Ni hao and Xie Xie (Hello and Thank you), we were able to get by because of the patience of the people we were with, and the patience of the Chinese :)<br /> <br /> The first day, I will say, was really rough. We landed with too little sleep, and the city was very overwhelming at first. Luckily though, we fell into sync with the local timezone the night we got there, and were not jet lagged at all. <a href="" title="Jan--March2008 072" class="flickr-image"><img alt="Jan--March2008 072" src="" /></a></p> <p><br /> <br /> This was also the very first time in my life that I've traveled internationally and did not get sick, and that is quite a big deal to me :)<br /> <br /> I got to me a lot of really great guys, and I will always remember Steven, Gavin, Sai, and Dale because their patience with the language barrier, and their hard work learning the Liferay theming system. They were incredibly friendly, and I do miss being out there.<br /> <br /> Surprisingly, I don't think my comfort level was stretched too far. More than anything, I was really curious about things, but the odd thing was that while I thought the whole personal space thing would cause me grief, it's strange how oddly liberating it is to be squashed in a moving metal can with other people and have no concept of moving your arms, let alone "personal space".<br /> <a class="flickr-image" title="DSC01246" href=""><img src="" alt="DSC01246" /></a> <br /> The food there was quite good. 
Some things, not so much, but for the most part, most everything I tried, I really enjoyed (of course, the same is said when I eat anywhere, which suggests I'm not so much a worldly foodie, but more just a person who likes eating).<br /> Of course, eating at a Chinese McDonalds and Pizza Hut was actually pretty strange.<br /> Outside of the chains was where the best food was found, though. Dumplings, wontons, and the normal Chinese fare we're all so familiar with was there, but so was live sashimi (where they cut off the flesh of the fish and leave it there on the plate moving its mouth...), chicken-head-kabobs, and all sorts of different and exciting items.<br /> <a class="flickr-image" title="Jan--March2008 030" href=""><img src="" alt="Jan--March2008 030" /></a> <br /> <a class="flickr-image" title="DSC01283" href=""><img src="" alt="DSC01283" /></a> <br /> I DID manage to get my Diet Coke fix. It was glorious.<br /> However, the first day went a bit rough in getting it, but due to the persistence of our friends, we did manage to complete the quest, as you can see below. The first machine I tried took my money, but wouldn't give me the Diet Coke. I had to settle for a freakin Fanta! But we would prevail:<br /> <object width="425" height="355"><param name="movie" value=""></param><param name="wmode" value="transparent"></param><embed src="" type="application/x-shockwave-flash" wmode="transparent" width="425" height="355"></embed></object> <br /> Also, our office there is freaking AWESOME.
I don't envy the internet speeds, but the office is so nice.<br /> <a class="flickr-image" title="dalian, China Mar08 042" href=""><img src="" alt="dalian, China Mar08 042" /></a> <br /> <a class="flickr-image" title="dalian, China Mar08 051" href=""><img src="" alt="dalian, China Mar08 051" /></a> <br /> I could go on and on with vivid detail, and I might in future posts, but I will say that I learned some really amazing things while I was there.<br /> <br /> One is that how I think I am coming across to people is often very different than how I actually am. I learned that when I interact with people, a lazy tongue and an eager ear go further than they're often given credit for.<br /> <br /> I also learned that the people I am lucky enough to not only call co-workers, but friends, are insanely patient, loving, and kind, which humbles me that they deign to consider me a friend as well.<br /> <a class="flickr-image" title="DSC01290 (2)" href=""><img src="" alt="DSC01290 (2)" /></a> <br /> Lastly, I learned that my wife is much more adventurous and patient than I really knew, and I'm not only glad she got to go for this trip, but just overall incredibly grateful for the opportunity to experience it.<br /> <a class="flickr-image" title="Jan--March2008 027" href=""><img src="" alt="Jan--March2008 027" /></a></p> Nate Cavanaugh 2008-04-10T18:11:20Z
http://www.liferay.com/fr/web/nathan.cavanaugh/blog/-/blogs/rss?_33_andOperator=true&_33_cur=2&_33_delta=20&_33_keywords=&_33_advancedSearch=false
Table 6-3 Similar Oracle Solaris Threads Functions

The thr_create(3C) routine is one of the most elaborate of all routines in the Oracle Solaris threads interface. Its arguments include the following:

stack_base. Contains the address for the stack that the new thread uses. If stack_base is NULL, then thr_create() allocates a stack for the new thread with at least stack_size bytes.

stack_size. Contains the size, in number of bytes, for the stack that the new thread uses. If stack_size is zero, a default size is used. In most cases, a zero value works best. If stack_size is not zero, stack_size must be greater than the value returned by thr_min_stack(). In general, you do not need to allocate stack space for threads. The system allocates 1 megabyte of virtual memory for each thread's stack with no reserved swap space. The system uses the MAP_NORESERVE option of mmap(2) to make the allocations.

start_routine. Contains the function with which the new thread begins execution. When start_routine() returns, the thread exits with the exit status set to the value returned by start_routine. See thr_exit(3C).

The process exits when all nondaemon threads exit. Daemon threads do not affect the process exit status and are ignored when counting the number of thread exits. A process can exit either by calling exit() or by having every thread in the process that was not created with the THR_DAEMON flag call thr_exit(3C).

Syntax:

#include <thread.h>
#include <signal.h>

int thr_sigsetmask(int how, const sigset_t *set, sigset_t *oset);

thr_sigsetmask() changes or examines a calling thread's signal mask. Each thread has its own signal mask. A new thread inherits the calling thread's signal mask and priority. However, pending signals are not inherited.
If the value of set is NULL, the value of how is not significant and the thread's signal mask is unchanged; use this behavior to inquire about the currently blocked signals. The value of how specifies the method in which the set is changed. how takes one of the following values.

SIG_BLOCK. set corresponds to a set of signals to block. The signals are added to the current signal mask.

SIG_UNBLOCK. set corresponds to a set of signals to unblock. These signals are deleted from the current signal mask.

SIG_SETMASK. set corresponds to the new signal mask. The current signal mask is replaced by set.

When the Oracle Solaris thr_join() is called with a tid of zero, a join takes place when any non-detached thread in the process exits. The departedid indicates the thread ID of the exiting thread. Thread-specific data works the same for Oracle Solaris threads as thread-specific data does for POSIX threads.

In Oracle Solaris threads, a thread created with a priority other than the priority of its parent is created in SUSPEND mode. While suspended, the thread's priority is modified using the thr_setprio(3C) function call. After thr_setprio() completes, the thread resumes execution. A higher-priority thread receives precedence over lower-priority threads with respect to synchronization object contention.

thr_setprio(3C) changes the priority of the thread, specified by tid, within the current process to the priority specified by newprio. For POSIX threads, see pthread_setschedparam(3C).

Syntax:

#include <thread.h>

int thr_setprio(thread_t tid, int newprio);

The range of valid priorities for a thread depends on its scheduling policy. thr_setprio() returns 0 if successful. When any of the following conditions is detected, thr_setprio() fails and returns the corresponding value.

ESRCH. The value specified by tid does not refer to an existing thread.
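The three how values behave the same way in any POSIX-style threads runtime. As a quick illustration only (this is Python's standard library wrapper around pthread_sigmask, not Oracle Solaris threads C code; it requires a POSIX platform):

```python
import signal

# Block SIGUSR1 in the calling thread (like thr_sigsetmask with SIG_BLOCK).
old_mask = signal.pthread_sigmask(signal.SIG_BLOCK, {signal.SIGUSR1})

# Adding an empty set with SIG_BLOCK leaves the mask unchanged, so the
# returned value is one way to inquire about the currently blocked signals.
current = signal.pthread_sigmask(signal.SIG_BLOCK, set())
assert signal.SIGUSR1 in current

# SIG_UNBLOCK deletes the given signals from the current mask.
signal.pthread_sigmask(signal.SIG_UNBLOCK, {signal.SIGUSR1})
assert signal.SIGUSR1 not in signal.pthread_sigmask(signal.SIG_BLOCK, set())

# SIG_SETMASK replaces the mask wholesale; here it restores the saved mask.
signal.pthread_sigmask(signal.SIG_SETMASK, old_mask)
```

The same three-way how parameter also appears in sigprocmask(2), which is the single-threaded ancestor of this interface.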
http://docs.oracle.com/cd/E26502_01/html/E35303/sthreads-17757.html
Classes can inherit from another class. This is accomplished by putting a colon after the class name when declaring the class, and naming the class to inherit from—the base class—after the colon, as follows:

public class A
{
    public A() { }
}

public class B : A
{
    public B() { }
}

The new class—the derived class—then gains all the non-private data and behavior of the base class in addition to any other data or behaviors it defines for itself. The new class then has two effective types: the type of the new class and the type of the class it inherits. In the example above, class B is effectively both B and A. When you access a B object, you can use the cast operation to convert it to an A object. The B object is not changed by the cast, but your view of the B object becomes restricted to A's data and behaviors. After casting a B to an A, that A can be cast back to a B. Not all instances of A can be cast to B—just those that are actually instances of B. If you access class B as a B type, you get both the class A and class B data and behaviors. The ability for an object to represent more than one type is called polymorphism. For more information, see Polymorphism (C# Programming Guide). For more information on casting, see Casting (C# Programming Guide). Structs cannot inherit from other structs or classes. Both classes and structs can inherit from one or more interfaces. For more information, see Interfaces (C# Programming Guide).

See also:
Abstract and Sealed Classes and Class Members (C# Programming Guide)
Polymorphism (C# Programming Guide)
Interfaces (C# Programming Guide)
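The is-a relationship and polymorphic dispatch described above are not specific to C#. Here is the same idea sketched in Python (Python has no compile-time casts, so isinstance checks stand in for the cast succeeding or failing):

```python
class A:
    def greet(self):
        return "A"

class B(A):          # plays the role of "class B : A" in the C# example
    def greet(self):
        return "B"   # overriding is what makes the dispatch polymorphic

b = B()
# Every B "is an" A, so b can be used wherever an A is expected;
# viewing b through the A type does not change the object.
assert isinstance(b, A) and isinstance(b, B)

def describe(a: A) -> str:
    return a.greet()   # the override still runs through the A "view"

assert describe(b) == "B"
# Not every A is a B: a plain A instance fails the downcast check.
assert not isinstance(A(), B)
```

The last assertion mirrors the article's point that only objects that are actually instances of B can be cast from A back to B.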
http://msdn.microsoft.com/en-us/library/ms173149(VS.80).aspx
Ok, so you are sending messages from tera term pro to Serial1. You know the link works because you have tested it with some code like: while (Serial1.available() ) Serial.print(Serial1.read()); that lets the Arduino IDE’s Serial Monitor echo whatever you send from tera term pro. Are you sure that tera term pro is sending carriage returns? Carriage returns must be sent to terminate a message. Please test the following code. Send either a “0” or a “1” followed by a carriage return to either Serial or Serial1. A 1 will turn on the LED at pin 13, a 0 will turn it off. Also, all data received by Serial will be echoed to Serial1, and the opposite is also true. #include <Messenger.h> Messenger message = Messenger(); Messenger message1 = Messenger(); void messageReady() { while ( message.available() ) { // Set the pin as determined by the message digitalWrite( 13, message.readInt() ); } } void messageReady1() { while ( message1.available() ) { // Set the pin as determined by the message digitalWrite( 13, message1.readInt() ); } } void setup() { // Initiate Serial Communication Serial1.begin(9600); Serial.begin(9600); // Attach the callback function to the Messenger message.attach(messageReady); // Serial message1.attach(messageReady1); //Serial1 } void loop() { int data; // Serial1 while ( Serial1.available() ) { data = Serial1.read (); Serial.print(data,BYTE); // Echo data on other port message1.process(data); // Process data } // Serial while ( Serial.available() ) { data = Serial.read (); Serial1.print(data,BYTE); // Echo data on other port message.process( data ); // Process data } } Please report any and all error messages that this code generates. Please also include what is received by both serial monitors.
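The Messenger protocol above hinges on the carriage return: incoming bytes are buffered until a CR arrives, then the buffered message is split into integer tokens and handed to the callback. A small Python sketch of that dispatch loop (names are illustrative, not part of the Arduino Messenger library):

```python
def make_messenger(on_message):
    """Buffer characters; on a carriage return, split the buffered
    message into integer tokens and hand them to the callback."""
    buf = []

    def process(ch):
        if ch == '\r':
            tokens = [int(t) for t in ''.join(buf).split()]
            buf.clear()
            if tokens:
                on_message(tokens)
        elif ch != '\n':  # ignore bare line feeds
            buf.append(ch)

    return process

received = []
process = make_messenger(received.append)
for ch in "1\r0\r":       # a "1" turns the LED on, a "0" turns it off
    process(ch)
assert received == [[1], [0]]
```

This is also why a terminal that never sends carriage returns makes the sketch appear dead: without the CR, the buffered message is never dispatched.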
https://forum.arduino.cc/t/messenger-library/7065?page=2
From: saxon-help-bounces@lists.sourceforge.net [mailto:saxon-help-bounces@lists.sourceforge.net] On Behalf Of martin.me.roberts@bt.com
Sent: 15 July 2008 15:26
To: saxon-help@lists.sourceforge.net
Subject: [saxon] S9API XPath and namespaces and variables

Hi,

I am trying to improve the performance of my app by using the Saxon API on JDOM documents rather than JAXEN. In the process I have a number of questions:

1) Do I need to explicitly set up the namespace context?
2) If I do, when should I do it? Can it be done before the load of the compiled XPath?
3) When do I set up variables? Can I do this before the load()?

The reason for wanting to do this before the load is that I want to reduce the amount of setup for each new XPath, as I have literally hundreds to process against the same document with the same namespace and variable contexts.

Martin
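The general pattern being asked about, namely setting up the namespace bindings once and reusing them for every query against the same document, looks like this in Python's stdlib ElementTree (shown only as an analogy, not as s9api code; in s9api the XPathCompiler would hold the declared namespaces, and declared variables would be supplied at evaluation time):

```python
import xml.etree.ElementTree as ET

doc = ET.fromstring(
    '<r xmlns:p="urn:example"><p:item id="a"/><p:item id="b"/></r>'
)

# One namespace map, set up once, shared by hundreds of queries.
NS = {"p": "urn:example"}

queries = [".//p:item", ".//p:item[@id='b']"]
results = [doc.findall(q, NS) for q in queries]

assert len(results[0]) == 2              # both namespaced items matched
assert results[1][0].get("id") == "b"    # the predicate query found item b
```

ElementTree has no XPath variables, so the value is baked into the expression here; the point of declared variables in a real XPath 2.0 API is precisely to avoid recompiling for each value.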
https://sourceforge.net/p/saxon/mailman/attachment/C5061A0076FE4FE6BB1F69F90419A035@Sealion/1/
Starting with ASP.NET 3.5, you can use Windows Communication Foundation (WCF) to build AJAX-callable services. Why weren't WCF services available in ASP.NET AJAX pages before ASP.NET 3.5? The reason is that before .NET 3.5 the WCF platform had no built-in support for taking JSON as input and returning it in output. The WCF platform that ships with the .NET Framework 3.5 comes with a new binding model aptly named "webHttpBinding." At the end of the day, this new binding empowers WCF to support JSON serialization over HTTP. In addition, it lets you map a URI of your choice to methods and set the format of the message's body and response. The WCF Web programming model is specifically designed to enable WCF calls from Web clients running JavaScript. But what happens on the server, exactly? ASP.NET and WCF are co-located in IIS, but it is ASP.NET that receives any calls directed at a WCF service method. Next, the ASP.NET runtime forwards WCF requests to the WCF stack. ASP.NET and WCF services live side by side within the same AppDomain inside of an instance of the worker process. Some ASP.NET features are not available to WCF services when these services are hosted in IIS. The reason is that WCF services behave independently of the hosting environment and transport protocols, whereas ASP.NET is intentionally tightly coupled to the IIS environment and HTTP-based communication. The behavior of ASP.NET content is not affected by the presence of WCF; but the overall behavior of WCF changes if it has to take into account the presence of ASP.NET. To work in collaboration with ASP.NET, the WCF runtime must be configured to operate in compatibility mode. When a WCF service is not working in ASP.NET compatibility mode, it cannot access the HttpContext of the ASP.NET request. The object, in fact, is always null. At the same time, you should note that the WCF runtime supplies the OperationContext object with nearly the same purpose.
In addition, when not in compatibility mode, neither file-based authorization on SVC files nor web.config-based authorization is possible. You can make up for this using the WCF-specific ServiceAuthorization behavior. WCF requests are intercepted immediately after authentication and never returned to the ASP.NET pipeline. Again, for ASP.NET to control the processing of the WCF request, you need to switch to compatibility mode. Finally, let's talk impersonation. A WCF request always runs through the IIS process identity regardless of ASP.NET impersonation. ASP.NET impersonation settings are taken into account only in compatibility mode. When in compatibility mode, though, WCF impersonation settings, if specified, take precedence. ASP.NET compatibility mode is helpful when you design a WCF service that you'll never host outside of IIS and always use to communicate over the HTTP protocol. The compatibility mode must be enabled at the application level in the configuration file; it is disabled by default. This setting enables compatibility mode at the application level, but it can affect individual services in different ways. Each WCF service can require, allow, or refuse the compatibility. This is done through the AspNetCompatibilityRequirements attribute set on the service class.

[AspNetCompatibilityRequirements(
    RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
public class MySampleService : ISampleContract
{
    :
}

A WCF service that doesn't allow ASP.NET compatibility cannot be invoked via JavaScript in an environment where compatibility is enabled. If you try this, an exception is thrown. It should be noted that a service doesn't allow compatibility mode by default. If compatibility is enabled at the application level, any hosted WCF services must change the value of the AspNetCompatibilityRequirements attribute to either Required or Allowed.
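For reference, the application-level switch mentioned above is the aspNetCompatibilityEnabled attribute of the serviceHostingEnvironment element in web.config (this fragment shows only that one setting; a real configuration would also contain the service and binding sections):

```xml
<configuration>
  <system.serviceModel>
    <!-- Opt the whole application in to ASP.NET compatibility mode -->
    <serviceHostingEnvironment aspNetCompatibilityEnabled="true" />
  </system.serviceModel>
</configuration>
```

With this in place, each service class then declares Required, Allowed, or NotAllowed through the AspNetCompatibilityRequirements attribute shown above.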
http://www.drdobbs.com/windows/wcf-web-programming-model/206903700
At this point our application is complete and tested. From here you can dive right into the User's Guide, but there are a few extra things we can add to further demonstrate some more of Ferris' capabilities.

Ferris uses Components as a way of organizing commonly used functionality for controllers. Ferris comes with a handful of built-in components, including one for automatic pagination of list methods. It's pretty easy to use. First import it:

```python
from ferris.components.pagination import Pagination
```

Then add it to our controller's Meta component list and set the limit:

```python
class Posts(Controller):
    class Meta:
        components = (scaffold.Scaffolding, Pagination)
        pagination_limit = 5
```

If you open up the list page, you'll see that it shows no more than five posts. However, we don't currently have a way to move between pages. Luckily, the scaffolding macros can handle that. Add this to our list.html template right before the end of the layout_content block:

```
{{scaffold.next_page_link()}}
```

Now there is a paginator at the bottom of the page.

Similar to components, Ferris uses Behaviors as a way of organizing commonly used functionality for models. A useful behavior is the Searchable behavior. First, we need to modify our model:

```python
from ferris.behaviors.searchable import Searchable

class Post(BasicModel):
    class Meta:
        behaviors = (Searchable,)
```

Note: Any posts created before you made this change will not be searchable until you edit and re-save them.
Now we'll use the Search component in our controller to make use of the behavior:

```python
from ferris.components.search import Search

class Posts(Controller):
    class Meta:
        components = (scaffold.Scaffolding, Pagination, Search)
```

Now let's add the ability to search to our list action:

```python
def list(self):
    if 'query' in self.request.params:
        self.context['posts'] = self.components.search()
    elif 'mine' in self.request.params:
        self.context['posts'] = self.Model.all_posts_by_user()
    else:
        self.context['posts'] = self.Model.all_posts()
```

Import the search macros into templates/posts/list.html:

```
{% import 'macros/search.html' as search with context %}
```

Finally, add these somewhere at the top of the inside of the layout_content block:

```
{{search.search_filter(action='search')}}
{{search.search_info()}}
```

Now when we visit the list page there should be a search box from which we can search through all posts.
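Under the hood, limit-based pagination like the pagination_limit setting above is just list slicing; here is a framework-agnostic sketch in plain Python (the paginate function and all names are mine, not Ferris APIs):

```python
def paginate(items, page, limit=5):
    """Return the items for a 1-based page, plus whether more pages follow."""
    start = (page - 1) * limit
    chunk = items[start:start + limit]
    has_next = start + limit < len(items)
    return chunk, has_next

posts = [f"post-{n}" for n in range(12)]
first_page, has_more = paginate(posts, 1)
print(first_page, has_more)
```

A component such as Pagination essentially does this slicing against a datastore query instead of an in-memory list.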
http://ferris-framework.appspot.com/docs21/tutorial/7_extras.html
[Alex, I have replied to the commons-dev mailing list as the most appropriate place for the discussion]

Inline...

----- Original Message -----
From: "Alex Blewitt" <Alex.Blewitt@ioshq.com>

> I've just read the JavaDoc for EqualsBuilder V 1.0 on the Apache
> website, and have a few comments which I think you may like to take
> into account:
>
> o You shouldn't use 'instanceof' in the test for equality of type. You
> should instead use this.getClass() == other.getClass(). The simple
> reason for this is that the equals method is meant to be symmetric (i.e.
> a.equals(b) == b.equals(a)) and using 'instanceof' it is possible to
> break that contract. For example, given classes A and B (extends A),
> a.equals(b) will return true (even if there are attributes of 'b' that
> are added or different) and b.equals(a) will never be true, even if
> there are no attributes of 'b' added. Note that this also works for
> superclass equality; it is safe to use 'super.equals(other)' if there
> are other tests that need to be done. Note that the rules, laid out by
> Joshua Bloch, are generally regarded as false since they break the
> assumptions of the equality method and have widely been ridiculed.
> [However, note that you need to test for 'other==null' since null
> instanceof X always returns false.]

I think I agree about the instanceof check in the code. It probably should be class equality. I am unclear as to what you find exactly wrong with Josh Bloch's book. I have never heard of it being ridiculed, but then maybe I'm not in the right circles :-)

> o You also comment that any field used in equality testing must be used
> in hashcode, and vice versa. The reverse is not true. You can have a
> hashCode method that returns a constant '0' (thereby not using any of
> the fields) with an implementation of equals that works for any (or
> all) fields.

Correct, the javadoc is probably over harsh.
> o You don't point out that static and non-transient fields should not
> be used in the implementation of equals, which is required as per the
> spec.

I assume you mean transient. I reckon they don't harm if they are checked (static final and transient, static would harm).

> o You don't give an example of how to integrate with a superclass. In
> general, for classes that have an implementation of an equals method in
> a (non-Object) superclass, you should also have a first-line test
> 'super.equals(other)' as well.
>
> --- 8< ---
>
> public class A {
>     private int ai;
>     private static int as;
>     public boolean equals(Object other) {
>         if (other == null || this.getClass() != other.getClass()) { return false; }
>         A a = (A) other;
>         return this.ai == a.ai;
>     }
>     public int hashCode() {
>         return 0;
>     }
> }
>
> public class B extends A {
>     private int bi;
>     public boolean equals(Object other) {
>         if (!super.equals(other)) { return false; }
>         B b = (B) other; // checked in superclass
>         return this.bi == b.bi;
>     }
> }
>
> --- 8< ---

Yes, superclasses is an area that could probably do with some more thought.

> Hope this is useful,
>
> Alex.

If you fancy sending in a documentation patch, we'd love to take a look ;-)

Stephen
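To see the symmetry problem the thread describes, here is a small self-contained Java sketch (the class names are mine): the superclass uses an instanceof check, and a subclass that adds state breaks a.equals(b) == b.equals(a).

```java
class Point {
    final int x;
    Point(int x) { this.x = x; }
    @Override public boolean equals(Object o) {
        if (!(o instanceof Point)) return false;  // instanceof: accepts subclasses
        return x == ((Point) o).x;
    }
    @Override public int hashCode() { return x; }
}

class ColorPoint extends Point {
    final int color;
    ColorPoint(int x, int color) { super(x); this.color = color; }
    @Override public boolean equals(Object o) {
        if (!(o instanceof ColorPoint)) return false;
        return super.equals(o) && color == ((ColorPoint) o).color;
    }
}

public class SymmetryDemo {
    public static void main(String[] args) {
        Point p = new Point(1);
        ColorPoint cp = new ColorPoint(1, 7);
        System.out.println(p.equals(cp));  // true
        System.out.println(cp.equals(p));  // false: symmetry is broken
    }
}
```

Replacing the instanceof test with this.getClass() == o.getClass() in both classes restores symmetry, which is exactly the change proposed for EqualsBuilder.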
http://mail-archives.apache.org/mod_mbox/commons-dev/200210.mbox/%3C00ff01c276fa$ef0cf2e0$133329d9@oemcomputer%3E
Back when Gravatar launched many years ago it was a game-changer. All of a sudden, I no longer had to change my profile picture on all those dozens of forums and sites I was active on but could do that in a central place. Years later I still consider it a great service, but especially for internal sites it is problematic to have calls to some central external service that is not under your control and that in the worst case can see for what website an avatar has been requested. At the same time, sometimes you simply don't want to give someone a picture of you just so that they can hand it over to someone else again.

While playing with my Liberapay account I stumbled upon a service similar to Gravatar that I hadn't heard of before despite it already being nearly a decade old: Libravatar. At first glance, Libravatar is pretty much the same as Gravatar except that its source code is open. If you look a bit deeper, though, you'll notice that it's actually a federated system. There is an open protocol behind it that lets you, for instance, run your own instance, and services that support Libravatar should check your own service first before doing any kind of fallback (e.g. through the central but struggling Libravatar.org server).

Client implementation

So how should a website (or a generic client implementation) retrieve a person's profile picture given that user's email address?
Let's work on a little example function that does all that:

```python
import hashlib
import dns.resolver
import sys


def get_avatar_url(raw_email):
    # We need the md5 sum of the lowercase'd email
    email = raw_email.strip().lower()
    hash = hashlib.md5()
    hash.update(email.encode('utf-8'))
    hash = hash.hexdigest()
    handler_base_url = get_handler(email)
    return f'{handler_base_url}{hash}'


def get_handler(email):
    handler_base_url = ''
    emaildomain = email.split('@')[1]
    # Since browsers complain about non-https content on https sites let's
    # prefer https here:
    srv_entries = ['_avatars-sec._tcp', '_avatars._tcp']
    for entry_prefix in srv_entries:
        try:
            for answer in dns.resolver.query(
                    f'{entry_prefix}.{emaildomain}', 'SRV'):
                host = str(answer.target).rstrip('.')
                if 'sec' in entry_prefix:
                    return f'https://{host}:{answer.port}/avatar/'
                return f'http://{host}:{answer.port}/avatar/'
        except Exception:
            pass
    return handler_base_url


if __name__ == '__main__':
    mail = sys.argv[1]
    print(get_avatar_url(mail))
```

So the first thing a website wanting to show an avatar should do is check the DNS records of the domain part of the email address. If the domain offers _avatars-sec._tcp and/or _avatars._tcp SRV entries, these should point to a server that implements the Libravatar protocol and can therefore be queried. Once we have such a base URL, all we have to do is append the MD5 hash of the complete email address to it, plus a couple of optional parameters, in order to retrieve the image itself:

- size / s for the width/height of the image to be returned (the default seems to be 80)
- default / d for the URL of a fallback image should the email not exist in the server's database. This parameter also has a couple of reserved values for a more generic fallback mechanism.
You can find a complete list in the spec, but here are the most useful in my opinion:

- 404: If the email could not be found in the database, then an HTTP 404 error should be returned instead of a fallback image
- mm / mp: Generic silhouette icons

The default option is actually a bit more complicated, since most implementations fall back to Gravatar before even considering the default option, simply because Gravatar also supports this parameter. In cases where you explicitly want to retrieve the default image, use the forcedefault=y / f=y parameter in combination with the default parameter.

Just to make this clear: The example above is just that. An example. If you want to integrate Libravatar in your own project, please use one of the pre-existing libraries that are listed on libravatar.org!

An experimental server implementation

Since I learn protocols best by implementing them, I also gave this one a try. You can find the complete source code on. This implementation is mostly intended for personal use, as you have to upload avatars and set the mapping between email and avatar file manually:

```
$ microavatar server \
    --addr localhost:8888 \
    --cache-folder /var/cache/microavatar \
    --email "me@email.com:/path/to/file.jpg"
```

So far, I have this implementation running on avatars.zerokspot.com and set the respective SRV DNS entries accordingly for my various "@zerokspot.com" addresses:

```
_avatars._tcp.zerokspot.com.     60 IN SRV 10 100  80 avatars.zerokspot.com.
_avatars-sec._tcp.zerokspot.com. 60 IN SRV 10 100 443 avatars.zerokspot.com.
```

Liberapay not the best example

After all this work, I set my profile on Liberapay to use Libravatar in the hope that they'd do the DNS lookup I described above. Sadly, they don't. They just calculate the hash and then forward it to Libravatar.org. So me writing my own server and jumping through all these hoops to be a good federated avatar citizen seems to have been in vain.
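Putting these parameters together, building a request URL is just the hash plus a query string; here is a small illustrative sketch (the base URL would come from the SRV lookup shown earlier, and the parameter defaults are my own choices):

```python
import hashlib
from urllib.parse import urlencode

def avatar_url(base, email, size=80, default='404'):
    # Hash the normalized (trimmed, lowercased) address, as the protocol requires.
    digest = hashlib.md5(email.strip().lower().encode('utf-8')).hexdigest()
    query = urlencode({'s': size, 'd': default})
    return f'{base}{digest}?{query}'

print(avatar_url('https://avatars.example.com/avatar/', 'Me@Example.com'))
```

Note that normalization makes the URL identical regardless of how the address was capitalized when it was entered.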
At least I had fun and learnt something new 😅

Updates:

- 2020-05-27 13:17 +02:00: Added a link to existing libraries.
https://zerokspot.com/weblog/2020/05/26/avatars-without-gravatar/
Place or shift all zeros to the extreme right of a number in C++

In this tutorial, we will learn how to shift or place all zeros to the extreme right of a number in C++. We can solve this problem in linear time by treating the number as a string. So, let us see the approach first.

Approach:

First, we will store all the non-zero digits of the number in another string, in the same order as they appear in the number. Then, we will count the number of zeros present in the number and simply append that many zeros to the end of the new string. The new string is our new number.

Let us see an example: 01204005 is a number. We store the non-zero digits in a new string, so the new string is '1245'. Now we count the zeros in the number, which is 4 here, and append 4 zeros at the end of the new string. The new string is now '12450000'.

C++ code to place all zeros to the extreme right of a number

Here is the code of the above approach.

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    char s[10001], s1[10001] = {0};  // supports numbers with up to 10,000 digits
    cin >> s;                        // read the number as a string
    long long int i, c = 0, j = 0;
    long long int len = strlen(s);   // cache the length so the loop stays O(n)
    for (i = 0; i < len; i++) {
        if (s[i] != '0') {
            s1[j] = s[i];  // store the non-zero digits in the new string s1
            j++;
        } else {
            c++;  // count the number of zeros present in the number
        }
    }
    for (i = 0; i < c; i++) {
        s1[j] = '0';  // append that many zeros at the end of the new string
        j++;
    }
    cout << s1 << "\n";  // print the new string; that is our desired result
}
// Input : 012000345
// Output : 123450000
```

Time complexity: The time complexity of this solution is O(n), where n is the number of digits. I hope you have enjoyed it. Thanks for reading the article!
https://www.codespeedy.com/place-all-zeros-to-the-extreme-right-of-a-number-in-cpp/
A new version of RQuantLib (a package combining the quantitative analytics of QuantLib with the R statistical computing environment and language) is now out at CRAN and in Debian (where it depends on the 1.0.0 beta of QuantLib that is currently in the NEW queue with its new library version). This RQuantLib release works with either the current release 0.9.9 or the just-released first beta of QuantLib 1.0.0.

This version brings a few cleanups due to minor Rcpp changes (in essence: we now define the macro R_NO_REMAP before including R's headers, which separates non-namespaced functions like error() or length() into the prefixed versions Rf_error() and Rf_length(), and that is a good thing). It also adds a number of calendaring and holiday utilities that Khanh just added: tests for weekend, holiday, and endOfMonth, as well as dayCount, date advancement, and year fraction functions commonly used in fixed income.
http://www.r-bloggers.com/rquantlib-0-3-2-released-2/
Hi guys, I'm just starting out and I'm having a little problem. When I run an application, it closes by itself right away.

A few days ago I went to the local library and picked out "Sam's Teach Yourself C++ in 24hrs" (I know it's a lie). It came with a CD so that you can install Borland C++. I don't know what version it is, though it looks like it dates from at least 2000. I had some trouble installing it and the interface looked pretty lame (old), so after searching through the forums I downloaded Dev-C++. And I'm having the problem that the #4 poster had. I copied and pasted what poster #5 did and it worked. However, when I tried to compile it without the C-style comments, I couldn't get it to work.

But the main problem I'm having is that when I run an app, it closes immediately. It's happened with two other tutorials in the book that I tried. I've come to the conclusion that I'm doing them right since they open, I just can't get them to remain open for more than a second. Here are the two other tutorials that I tried. Again, they compiled without errors and the apps ran, except they close down instantaneously. Thanks

callfunc.cpp

```cpp
#include <iostream>

void DemonstrationFunction()
{
    std::cout << "In Demonstration Function\n";
}

int main()
{
    std::cout << "In main\n";
    DemonstrationFunction();
    std::cout << "Back in main\n";
    return 0;
}
```

func.cpp

```cpp
#include <iostream>

int Add(int x, int y)
{
    std::cout << "In Add(), received " << x << " and " << y << "\n";
    return (x + y);
}

int main()
{
    std::cout << "I'm in main()!\n";
    std::cout << "\nCalling Add()\n";
    std::cout << "The value returned is: " << Add(3, 4);
    std::cout << "\nBack in main().\n";
    std::cout << "\nExiting...\n\n";
    return 0;
}
```
https://www.daniweb.com/programming/software-development/threads/211938/noob-needs-a-little-help
- Hierarchy wire display toggle? - global proc and memory - getting the location of a locator in an expression - strPos function (sharing) - Skin ToggleHold Attribute - need help about a command please... - Emitter rate - who can help me ?i have a question.. - Total Listing created Maya Windows - sqrt function problem in maya 8... - Working on calculator script - help please - Copy and Paste functions ??? - Fatal eror issue with shelfButton - default lambert.... - Moving an object along hilly surface - Moving an emitter along hilly surface - textfields updating on change - Moving pivot for selected faces - Using SWF files to personalize your custom UI's - selecting uvs by mel - reverse animation obj-to-camera - Vertex Weighting Tool - skinCluster.bindPreMatrix - animCurveTA connections - transferable custom marking menus - Key Viewport Change?? - layouts - Assigning one locator trans. to another - Assigning one locator trans. to another - Area render script - MEL for hotkey with MM - Controlling expression evaluation per frame - soft constraint (bi-directional) 1.2.0 - find joint orient axis.. - Base and tip color attributes for objects ??? - ewertb.com down... - Checking if an object is selected - Where to get help for MEL except from cgtalk and highend3d ??? - maya API: registering custom nodes with Autodesk - MAnimControl::stop(); - Get pixel color information from texture - Mel assistent needed ... - Avanced Twist Controls.. - Render Target Node? - Mel Noob needs help with a couple scripts :) - Find Axis Direction - How to get the read only state?? - Mel beginner - help please - uvlink syntax - Rotation plane - unparent comand?? - How to get the size of an image imported in maya as texture? - Anyone know why the move command must be executed twice? - FG off on Mult Objects - 3d-math: matrices/spaces - Point On Surface return - Maya8: What happened to the old select shell command?? - Assigning Shaders to render layers... 
- One problem stopping me from finishing my script - help please - Memory Leaks With Local Procs? - openGL text in viewport - edgeloop selection detection - getting [0] from an array - How to assign hotkeys to commands from scripts like MjPolyTools? - Can i call windows programs - like UltraEdit for example with a mel script? - New and undocumented? - menuItem for shelfButton - convertLightmap and AO? - Which particle id connects to the arrayMapper? - searching array for same number and return index. - Someone can solve this ? - How to check if vertices or edges or faces are selected? - Help with animCurve command please... - Can't understand dirname command... - how to do average ? - camera view - How to force a fps without using prefs? - textscrolllist, what the hell happened to -selectedItem - Help Me.... - mag command ? - Render Info - fprint a fileImport command? - sequencing of scripts - Novice question - Mel, ma, mb...Please help me - Looking for the Maya file that calls AETemplate Procedures - only deleting detachSurface1 input node ? - only deleting detachSurface1 input node ? - date and time return code ... - selecting graph points script help - run script on startup - Returning XYZ coords of a specific face. - HOW: 100 image sequences on 100 sprites? - attaching a color chooser to displayRGBColor - NEWBIE: How to read the name of a shader? - how to get UV infomation insdie Deformer node? - MEL -> API -> MEL -> API... - import into existing heirarchy? - Query Mouse Location - sorry: Layer Visibility - loop - move component tool - easy question on GUI building - How to set up object?! - seed not working in creation expression - faces together MEL - Middle Mouse Viewport Toggle - userSetup and maya 8 - Given a material name, find whats attached to the color channel - API Custom Node creation - Unselecting a radioButton - Modeling Script - Custom Menu - how to execute mel script dynamically - Simply starting Mel (help - begginer) - Quick Newbie question..... 
- menuItem + popupMenu issues - Custom UI Question - A smart undo? - melfunctions 0.2 - Find nearest vertices fast without comparing all of them - how to load script into script editor via mel? - Help with Array and interpolation - Adding springs using MEL takes much more memory!? - Reset RadioButtons - Output Window - Accessing alpha-numeric chars by index - Button creation in API - how to get the Border of uv "shells"... - New To MEL: Line Numbers! Help!! - UV coords to local space face coords? - Search and delete hypergraph node? - Gnomon's mel scripts - making mesh unselectable.. - locking size of textScrollList in FormLayout - a customed node that calculates distance! - From textScrollList to scrollField - Italicize specific lines of a textScrollList? - Quaternion Rotation and MEL - Absolute value - Write to image? Is it possible? - stroke...pressure. Not working - help - Consistent playblast size - API: creating and adding springs - Moving objects instead of shapenodes into SelectionSets - Passing string arrays between global procs? - Disco Dance Floor - disconnectAttr? - FileTextureManager on Mac: Help please! - I can't store a vector resulting from a command - solid choice menu needed - Plug-in cannot be unloaded because it is still in use - How do I make a script autostart? - API: How to get faces containing the vertex? - Expressions: namespaces and MEL-commands does not cooperate - Wait for a plugin to load - Maya API - Timeline Q ... - changing the color of text - Global ProcTastic! - vertex selection - MEL ordering problem - compound attributes - pre-render MEL in mental ray - API: compute problems - setting AdvancedTwist World-Up Obj with MEL? - connectControl and enumerated attribute not talking - rowSpacing comand - API: Noob here. Where the heck is windows.h? - WireFrameOnShaded - paint color/attribute: query stroke direction - export layer to file? - how to make a float or any control animatable? - how to control more atributes with one control? 
- Help me understand scripted panels please ... - Refresh BIN TAB - problems with my own "skin deformer" - Frame rate - finding and setting local vertex position - Query Menu Items In radioMenuItemCollection - how to find exact same model ni the scene? - How to find corner vertex in a poly plane - API: Custom Constraint Icon? - Need MEL to open a data file and create a scene - Precision with Curves - Find a pixel location on a textured sphere - Find Directory Maya Is Installed In - Please help with a script of mine - How to activate maya's create menu with MEL - How to create locator with a name specified? - How to get the progress window to work while maya is busy? - Isolate Select - turn on AutoLoad NewObjects - Help to create an Expression for Node Connections - connect attribute to multiple objects - 1k mel competition - Simple Distance Between Problem - Shelf Button Name Lost On Restart - Exporting animation data from maya to text files - Why won't my button fill the window - Reloading AEtemplates - Query joint name to use in text field - API: Reading vertex positions via attribute vrts - Expressions within scripts??? - Query attribute value...?? - reset joint axis - API: Where does the MStatus Status changes? Mystery inside! - Accessing a dynamic variable - overriding convert to file texture resolution limit - API : Distance Manipulator SCALE ? - Compiler for Maya 7 - Want info on dag and transformation matrices - correct rain spash balloons' creation translateY location - How to name the result of the annotate command? - Any chance MEL components placing to become visual in future? like in Delphi? - API: A problem to get Normals in world space - Getting unique MTypeID's from Autodesk - Very simple (but annoying) divide problem... - vectorArray Problem - Bring back "Mel How To" - Copying Uv's To Uv Sets For Multiple Objects - Invoke Marking Menu without LMB click? 
- Auto Excute Script on viewport change - API setDependentsDirty question: compound attrs - expression - relative scaling of objects - Changing Values with MEL - Implicit Object Reference in MEL Shading Expression - Noise / Turbulance on rotation - cycling visibility on a set of nodes - API Q: calling support functions from the compute method - UI snapShot?! - Mel, API, Python . . . - MEL game, Helping each other out - GI_Joe or alternative - query color? - mocap server for joystick - UI Color (under Linux) - Expression documentation inconsistencies - Number Logic... - Cannot find procedure "maya" error?? - Simple slider question - Mel Script to Track Time Worked? - Baking Hud info into Rendered Images - Maya unable to detect visual C++ 2005. Please help - Closest point on plane? - Problems loading image onto iconTextButton - HowTo: Get older MelScripts working with Maya8? - How to connect float attr to enum attr type? - API devkit - global proc - select all - mesh numbering - polyColorPerVertex - Mental Ray Batch Bake
http://forums.cgsociety.org/archive/index.php/f-89-p-15.html
Picking a random letter without immediate repeats

I would like to pick a random letter out of a list (B, C, D) in random order and make sure that the picks do not repeat. I have tried this, but it repeats the letters:

```java
public class Test {
    static Random r = new Random();

    static char pickRandom(char... letters) {
        return letters[r.nextInt(letters.length)];
    }

    public static void main(String args[]) {
        for (int i = 0; i < 10; i++) {
            System.out.print(pickRandom('B', 'C', 'D'));
        }
    }
}
```

Answer 1: You should check whether the character was already taken, for example by declaring another array which contains the already-taken characters; if the character is in that array, select another one. Or you can simply use the Collections.shuffle method (Collections knowledge required):

```java
List<Character> solution = new ArrayList<>();
solution.add('a');
solution.add('Y');
solution.add('Z');
Collections.shuffle(solution);
```

Answer 2: Keep track of the last character that was generated. The program will keep generating a new character until the one being generated is not the same as the last:

```java
char last = 0, next = 0;
for (int i = 0; i < 10; i++) {
    do {
        last = next;
        next = pickRandom('B', 'C', 'D');
    } while (next == last);
    System.out.print(next);
}
```

Answer 3: You can do it as follows, re-rolling the index instead of the character (note that lastRandom starts at 0, so index 0 can never be chosen on the very first pick):

```java
import java.util.Random;

public class Main {
    static Random r = new Random();
    static int lastRandom;

    static char pickRandom(char... letters) {
        int newRandom = r.nextInt(letters.length);
        while (newRandom == lastRandom)
            newRandom = r.nextInt(letters.length);
        lastRandom = newRandom;
        return letters[newRandom];
    }

    public static void main(String args[]) {
        for (int i = 0; i < 10; i++) {
            System.out.print(pickRandom('B', 'C', 'D'));
        }
    }
}
```

Sample output: CDCBDBDCDC
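For completeness, the same "no immediate repeats" idea is compact in Python (my own sketch, not from the answers above): re-draw whenever the new pick equals the previous one. This assumes at least two distinct letters, otherwise the re-draw loop could never finish.

```python
import random

def pick_stream(letters, n, rng=random):
    """Return n random picks from letters with no two equal picks in a row."""
    out = []
    last = None
    for _ in range(n):
        nxt = rng.choice(letters)
        while nxt == last:  # re-draw on an immediate repeat
            nxt = rng.choice(letters)
        out.append(nxt)
        last = nxt
    return out

print(''.join(pick_stream(['B', 'C', 'D'], 10)))
```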
https://thetopsites.net/article/58468224.shtml
Anatomy of a "Small" Software Design Change

File this one away for the next time your boss comes in and asks why a "small" change is taking so long.

The Setup

There were a couple rules in place for this to happen:

- The controller class must inherit from the Controller class.
- The action method must have the [ControllerAction] attribute applied to it.

The Consequences

The solution here is conceptually easy: we only look at public methods on classes that derive from our Controller class. In other words, we ignore methods declared on Controller itself. With this attribute, I can do this:

```csharp
public class MyReallyCoolController : CoolController
{
    [NonAction]
    public override void Smokes()
    {
        throw new NotImplementedException();
    }
}
```

Now MyReallyCoolController doesn't smoke, which is really cool.

Interfaces

Another issue that came up is interfaces. Suppose I implement an interface with public methods. Should those methods be callable by default? A good example is IDisposable. If I implement that interface, suddenly I can call Dispose() via a request for /product/dispose.

Is This Really Going To Help You With Your Boss?
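The discovery rule the post describes, public methods declared on the subclass minus anything explicitly opted out, is easy to sketch with reflection. Here it is in Java rather than C# so it stays self-contained; every name below is mine, not an ASP.NET MVC API:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;
import java.lang.reflect.Modifier;
import java.util.ArrayList;
import java.util.List;

// Hypothetical marker, analogous to the [NonAction] attribute.
@Retention(RetentionPolicy.RUNTIME)
@interface NonAction {}

class Controller {
    // Framework plumbing: never routable, because it is declared here.
    public void redirect(String url) {}
}

class ProductController extends Controller {
    public void list() {}
    @NonAction
    public void helper() {}
}

public class Dispatcher {
    // An action is a public method declared on the Controller subclass
    // (getDeclaredMethods skips inherited ones) and not marked @NonAction.
    static List<String> actions(Class<? extends Controller> c) {
        List<String> names = new ArrayList<>();
        for (Method m : c.getDeclaredMethods()) {
            if (Modifier.isPublic(m.getModifiers())
                    && !m.isAnnotationPresent(NonAction.class)) {
                names.add(m.getName());
            }
        }
        return names;
    }

    public static void main(String[] args) {
        // Only list() qualifies: helper() opted out, redirect() is inherited.
        System.out.println(actions(ProductController.class));
    }
}
```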
https://haacked.com/archive/2008/04/24/anatomy-of-a-design-change.aspx/
The final keyword is used in three different ways in Java; let's learn what these are, how final is used in the Java language, and whether final and finalize are the same.

The final keyword is applied either to a variable, to a method, or to a class, and it has a different meaning for each of these three kinds of elements, as described below.

Variable: when the final keyword is applied to a variable, it makes its value constant. You can assign its value only once; any attempt to change it afterwards will not compile, as shown in the program below.

Method: when the final keyword is applied to a method, it prevents that method from being overridden in any subclass of the current class. Again, refer to the program below for a demonstration.

```java
public class Final {
    final int a = 100;

    final int sum(int a, int b) {
        return a + b;
    }
}

class SubFinal extends Final {
    SubFinal() {
        System.out.println("value of a is :" + a);
        System.out.println("the sum is:" + sum(5, 5));
        // a = 200;
    }

    /*
    int sum(int a, int b) {
        return 0;
    }
    */

    public static void main(String[] args) {
        new SubFinal();
    }
}
```

The output of this program is:

```
value of a is :100
the sum is:10
```

I have made a class by the name of Final, containing a variable and a method which have both been made final. I then extend this Final class into another class, SubFinal. In the constructor of SubFinal I have accessed both the variable and the method that were made final, and you can see the output of this above. However, notice the comments in the code: I commented these parts out because, if uncommented, the code will not compile. In the first case I am trying to reassign the value of a final variable, and in the second comment I am trying to override the final method, both of which are against the rules of the final keyword.

Now that we know final in terms of variables and methods, let us see what effect it has on a class.
final class SomeName{
    //class definition
}
class SubClass extends SomeName{
    //class definition
}

If you try to compile this program, it will not compile, because you are trying to inherit from a final class, which is not allowed. I hope you have worked through these examples and understand how final works.

Note: final is not applicable to constructors, and you must not confuse final with finalize, as the two have completely different meanings.
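One further illustration (the class here is my own example, not from the article): a final field may be left blank at declaration and assigned exactly once, and the constructor is the usual place to do it.

```java
// Sketch: a "blank final" field must be assigned exactly once.
class Account {
    private final String id;   // blank final: no value yet at declaration

    Account(String id) {
        this.id = id;          // the single permitted assignment
        // this.id = "other";  // a second assignment would not compile
    }

    String getId() {
        return id;             // the value can never change afterwards
    }
}
```

This pattern is common for identifiers and configuration values that must never change after construction.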
http://www.examsmyantra.com/article/44/java/final-keyword-in-java-and-its-various-uses
When we work on a website we obviously have to take into consideration the "mobile" versions of our web project. Assuming you know what CSS media queries are, let's see how to use the SCSS features to create beautifully readable media queries that keep your code clean and legible, which can be quickly reused in our SCSS files, and that are easy to use. Here is the code to create the mixins that we will use in the files. // Breakpoints $desktop: 1024px; $tablet: 768px; @mixin media($keys...) { @each $key in $keys { @if… Inserting your own Twitter timeline into the site might be a good idea, and doing so in WordPress is an extremely quick and easy process. Personally, I'm not a big fan of the Twitter timelines visible on websites. Unless you're really not very active and followed on Twitter, in my opinion you could do without it. There are many other things that can take place on a website, and Twitter discussions can easily stay on Twitter. However, if you want to, I'll show you how to do it with WordPress. First, let's put in the timeline. From the Appearance ->…. Action and Filter are two so-called "hooks". Practically the action and filter hooks allow us to modify or add functionality to the core of WordPress, but without modifying… Here we are for the second part of the tutorial to configure Gulp for the optimal development of WordPress. If you missed the first part, where we have installed all the modules needed for the tasks we will need, you can find it here. I remind you that we will write our configuration file gulpfile.babel.js in JavaScript ES6, as I explained in the first part of this tutorial. Let's start now, importing the downloaded modules into our file gulpfile.babel.js. import { src, dest, watch, parallel, series } from 'gulp'; import yargs from 'yargs'; import sass from 'gulp-sass'; import cleanCss from 'gulp-clean-css'; import gulpif from…… I am nothing but an observer. Forever grateful to be born in the digital age. Italian based in Germany.
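The media-query mixin shown in the first teaser is cut off; a minimal sketch of how such a keyed breakpoint mixin is often written (the map name and breakpoint keys are my assumption, not the author's original code):

```scss
// Assumed breakpoint map; the original post defines $desktop and $tablet.
$breakpoints: (
  desktop: 1024px,
  tablet: 768px
);

// For each named breakpoint key, wrap the passed block in a media query.
@mixin media($keys...) {
  @each $key in $keys {
    @media (min-width: map-get($breakpoints, $key)) {
      @content;
    }
  }
}

// Usage:
.card {
  @include media(tablet) {
    display: flex;
  }
}
```

The `@content` directive is what lets the caller drop arbitrary declarations inside the generated media query, which is what makes the mixin reusable across files.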
https://dannyspina.medium.com/
/02/2011, at 1:20 PM, Alan Silverstein wrote:

> Drifting off topic but what the heck...

Lol :) It's not off topic because we're proposing to rework the Judy build system :) BTW: I'm not proposing to use something wizz bang for this, Make should do just fine: Judy is basically just a bunch of C files that need to be compiled.

>> Data flow is not a new idea, it's a subset of the REAL idea: category
>> theory.
>
> OK, I'll have to read up on that...

Category Theory (CT) is *the* theory of abstraction. It basically starts off by considering sets and functions; in programming we call these types and functions, in category theory we call them objects and arrows. The key idea is to abstract away the elements of the sets and talk about "structure" entirely in terms of the properties of the functions. For example, we can explain "The function f: X -> Y is 1-1", which is a set-element style definition, in a categorical setting like this:

for all g1, g2: U -> X, g1.f = g2.f implies g1 = g2

This is easy to understand. Suppose f wasn't 1-1. Then you might have g1(x)=a and g2(x)=b, but f(a)=y and f(b)=y, so g1 and g2 can be different functions, but you can't tell because f "removes the different outputs by mapping them to the same value". This can't happen if f is 1-1: if g1 and g2 are different there is some value x for which g1(x) != g2(x), and f must map these to two distinct values, so g1.f != g2.f.

Now the point here is to examine the formula again: we have defined "f is 1-1" without mentioning elements. In other words, the definition is *abstract*: written entirely in terms of functions, ignoring the sets and their values. In fact, we can throw out the sets entirely, replacing X with the identity function id: X -> X.

Anyhow, the relation to "data flow" is clear: the arrows are "channels down which data flows, possibly being modified along the way" :) And the key properties can be understood in the abstract without caring about the actual data types being processed.
> >> Build systems must be driven bottom up. They're intrinsically
> >> imperative NOT functional.
>
> Would you elaborate on the difference? Do you mean the difference
> between declarative (what) and functional (how)?

No, I mean build systems aren't declarative, quite the opposite. They're action based. Imperative, procedural, whatever. Functional/declarative models are all wrong. If you look at make, the *rules* are imperative: do this, do that. The declarative part .. the goal dependency stuff, doesn't work. Just for starters, many programs don't have a single output. This screws the basic concept up immediately. Some programs have side effects (no outputs as such) and some have multiple outputs. So you immediately have to add hacks to make, phonies and proxy files, just to make it work at all .. and that's doing really basic stuff.

> >> When you change a source file, that should trigger rebuilding the
> >> system based on what depends on the source file you changed. This is
> >> completely the reverse of target driven building.
>
> Yes -- and no -- at least as I think of it. Viewing a build system as
> an acyclical graph, it's a static (at any one point in time)

We agree this is a gross oversimplification .. we will temporarily accept this to avoid confusion, but clearly it isn't so. Consider say "doxygen" .. surely, there are a fixed set of inputs, but who knows what the *.html files it generates are???

> set of
> relationships between sources (files that have no arrows into them
> within the build system, even if derived say from a version control
> system) and constructed targets (some of which are deliverable, others
> of which are intermediate, but that doesn't matter here). Given some
> form of specification of these relationships -- sources, targets, rules,
> dependencies/conditions -- then any time a source changes, all dependees
> must be at least revisited if NOT updated/reconstructed, whether you
> consider this to be targets-backwards or sources-forward.
Yes .. given this information. The problem is how to get it. Specifying dependencies is intrinsically unreliable and entirely unnecessary, provided you can capture outputs. This is hard to understand but true.

Consider a set of programs (compilers, document generators, linkers, etc etc). And some source files. Let's assume we know which programs to apply to which files. What order should we apply the programs in? When do we need to apply them?

The answer is not what you'd expect. You can apply the programs IN ANY ORDER!! Surprised? It's true! It doesn't make any difference. Here is the build algorithm:

Step 1: Apply programs to files in any order.
Step 2: Do it again. If the results are the same, you're done. Otherwise repeat from Step 2.

This is the fixpoint algorithm: just keep trying until the build is stable. To make this work you need to be able to monitor the *outputs* of programs: you need to see every file that is touched or created, so you can check when the build is completed (reached a fix point).

Well now, you can say "But that is horribly inefficient!!!" Yes yes, you'd be right. So optimise it. Specify dependency information AS A HINT. Now perhaps you see the point. This system does NOT require dependency information to work. Only to work efficiently. In particular, it doesn't require all the dependency information and it still works if the dependency information is wrong.

So here, the fundamentals are: (a) a set of build steps, and (b) output monitoring. The "dependencies" are relevant only for optimisation. That's not unimportant!! But the point is, the system, at the core, is driven bottom up, and consists of a set of *actions*: there's nothing declarative in the core. The dependency relations are useful for performance, they're not of any conceptual significance. Certainly dependencies *exist*. Certainly there is workflow. But it isn't necessary to specify it, nor to even get it right!

Interscript **literally** works this way!
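The fixpoint idea can be sketched in a few lines of Python. The in-memory "filesystem" and the toy build steps below are my own illustration, not Interscript's actual code:

```python
import hashlib

def snapshot(fs):
    # Digest of every file's contents: the build is stable when this stops changing.
    return {name: hashlib.sha256(data.encode()).hexdigest() for name, data in fs.items()}

def fixpoint_build(fs, steps, limit=10):
    # Apply the steps in any order, repeatedly, until nothing changes.
    for _ in range(limit):
        before = snapshot(fs)
        for step in steps:
            step(fs)              # each step may read or write any files
        if snapshot(fs) == before:
            return fs             # stable: every output is up to date
    raise RuntimeError("build did not converge")

# Toy steps: "compile" each x.c to x.o, then "link" all .o files into a.out.
def compile_step(fs):
    for name in list(fs):
        if name.endswith(".c"):
            fs[name[:-2] + ".o"] = "obj(" + fs[name] + ")"

def link_step(fs):
    objs = sorted(n for n in fs if n.endswith(".o"))
    if objs:
        fs["a.out"] = "+".join(fs[n] for n in objs)

fs = {"x.c": "int main(){}"}
# Note the steps are given in the "wrong" order; the fixpoint loop still converges.
fixpoint_build(fs, [link_step, compile_step])
```

The first pass produces x.o, the second produces a.out, and the third changes nothing, so the loop stops: ordering never had to be specified, only outputs observed.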
It repeatedly does actions until nothing changes (or a limit is reached, usually 2 passes is the limit :)

> By the way, that elaborate chip design system I mentioned had a neat
> feature, where you could say "check to see if the target actually
> changed as a result of the reapplication of the rule" and if not, don't
> touch it, don't even change its modify time, meaning all downstream
> targets (dependees of it) don't need rebuilding.

Interscript does this automatically for all outputs, for that reason. Interscript itself doesn't care, since it compares the contents of files. But to do that, it saves every output to a temporary first, so it is easy to just abandon the temporary if there is no change. [Actually it is more efficient to read/compare until there's a difference, then switch to write mode .. but I didn't implement that]

>> You CANNOT specify goal driven building effectively, because it is not
>> possible to get the dependencies right. This is a plain fact of
>> reality.
>
> Can you please elaborate on that?

Sure: tell me the names of all the files generated by a run of doxygen. [Doxygen is a C++ code documentation system] You can't. It makes them up using some hidden formula; they're just a set of web pages, and all it cares about is that file 1 correctly references file 2. All you know is the name of the "index page". But you cannot do a build depending on the output of doxygen by just examining the index page, because it is likely to be unchanged even when the pages describing your functions have major changes in them.

> Again if I imagine the DFD describing
> a collection of source and constructed files, and their rules and
> dependencies, it doesn't seem to matter much which way you look at the
> arrows, it's the results that count.

Yes, but the problem is you cannot specify the graph: it's too hard. And it isn't necessary.

> An even worse problem, usually not well understood, is a multi-rule
> target.
> This is when several rules contribute to a single repository
> (such as a message catalog), blurring the state of that target for its
> dependees.

Basically, in set theory and category theory we have things called products. Aka "Cartesian product" or "tuple" or even "struct" in C. What you're saying is that handling NORMAL arrows from products to products is hard:

a,b,c -> d,e,f

and that's my point. This is trivial basic stuff. Part of the problem is that people **incorrectly** think that a graph is like this:

file -- action --> file

This is completely wrong! It's the other way around! The actions are not arrows, they're the points! The resulting data structure is called a Multi-Graph.

x.c -->[ C compiler ]--> x.o

Note carefully: the arrows are files. The C compiler is a point. (black box, chip, whatever you want). I have a whole book on this, but it's hard to get (by RFC Walters).

> I further divide these into robust and fragile multi-rule
> targets. A robust one can be partially updated correctly at any time
> (like revising some database entries), but a fragile multi-rule target
> must be wholly rebuilt (running multiple input rules) when any
> dependency demands it. In the worst case there's an ordering
> requirement upon the rules (the file must be built in the right order)
> which is difficult to correctly represent in a "static" DFD. Wise
> designers avoid creating constructing files that are fragile multi-rule
> targets, if at all possible.

Yes, that sounds interesting. Basically some products can be built one component at a time and some are built "all at once".

> I think multi-rule targets arise naturally but mistakenly from
> old-school thinking where files and file systems were expensive,

Yes. The whole Unix FSH (File system tree) is archaic. The idea of putting all the *.h files in one directory and the *.o files (or *.a files) in another is absurd. But it was done in the old days for performance. No modern systems use this.
I think Sun pioneered the right way: one directory for each product (in "opt"). Apple has these, calls them frameworks. On unix systems we usually pervert /usr/local/lib. After all not all software is even C ! > so we > lumped similar things into common files (a kind of not-really database), > sometimes with an associated "registry" (index) of some type. I'm more > in favor of what I call "self-registry", like how /etc/rc.d works (if I > recall right). You drop files/scripts into a "known location" and their > mere presence (when found) acts as the registry, plus you can easily > update every file separately from others. Indeed. Which is why build systems have to be driven bottom up. So you can drop new sources into the right place and just expect them to be built. You can't do that with targets, because they're generated and you don't get to "drop" them anywhere :) > I dispute your > assertion that "some systems require recursion." You can't dispute it, LaTeX requires it, and there's no way around it. > I would assert that you have a design flaw in your package. Correct > building demands "full disclosure" to the build control system, in > whatever language. You misunderstand: interscript IS the build system. It doesn't need any disclosure. That's the point. It can generate code or documentation or indeed do ANY process at all without disclosure. It uses "discovery" instead. > All files must be listed; hidden temporary or > intermediate files not explicitly stated are accidents waiting to > happen. It isn't possible. See doxygen example. > Your example of (presumably) unpredictable deliverable targets > is even worse. It might be expedient for the programmer to just "write > the list as a smart rule," but I think it's bad design. It makes it > impossible to "manifest" the customer deliverable package in a > predictable and auditable way. (I have a lot of experience dealing with > CPE = current product engineering...) Yes, it does. 
You have to re-think your quality control systems to handle this.

> I understand WHY programmers like to operate this way. It's clunky to
> have to "redundantly" state information to various parts of the
> engineering system.

Yes, it is clunky .. and only practical for simple systems. With more complex systems, it's a liability. That's the point here: if your build system *depends* on the replicated dependency information, then it can fail silently. If it doesn't, it can't. So it is better if it doesn't :)

> So being a clever programmer, hell I'll just write a script/program that
> embodies some arcane app-specific knowledge about how to create targets
> from sources, based on "discovery"...
>
> Believe me I've seen all kinds of half-assed (well-intended but still
> hackish) packages put together around these kinds of issues, with no
> overall understanding of what it means to deliver maintainable,
> updateable, removable packages to customers.

I share your concerns. But don't knock discovery as such: like everything, not all systems are reliable! Almost everyone writing Ocaml programs uses Ocamldep to generate dependencies (Ocaml requires files be compiled in dependency order). Even in a 20-30 file program it is almost impossible to maintain the order by hand. If you have to do that, it becomes an obstacle to refactoring.

> I don't think the answer is to punt and say, "my targets are
> auto-generated." A better answer is, "I have an easy way to specify
> exactly what I'm expecting within and as output from the build system,
> and to check that I got what I expected."

I do that: what I expect is that the regression tests all pass :)

>> Yes. And the way to do that sophisticated stuff requires a REAL
>> programming language like Python. Trying to do this with micky mouse
>> crap like Make cannot possibly work.
>
> Uh, you dismiss it too quickly.
No, I discarded it after 20 years struggling to understand how it works and failing all the time to see any connection between what it does and what general tools do: it works marginally well for C and that's about all.

>> Fbuild is a caching build system. It caches the results of various
>> operations (compiling stuff, etc) and knows when the caches are not up
>> to date. So rebuilding is the same as building, except the caching
>> allow skipping some parts of the build because the dependencies tell
>> fbuild the results will be the same.
>
> Cool, that's the right concept.

Yes, but it's nothing like make. It has ONLY build rules, there are no dependencies. (There is dependency generation in some of the subrules, for example to build an Ocaml program.) Rather you just give functions like:

link(cc("x.c"), cc("y.c"), result="aa.out")

which is just like the "make" rules .. no dependencies specified. But it doesn't do the compiles and links every time, it caches the results of each function call. Yes there are dependencies, but they're "discovered": we know when you run cc('x.c') that that function call depends on file "x.c". The cc function puts a digest of the file into a database. Next time it is called, if the digest is the same, it does nothing. (Actually, it returns the digest of "x.o" from the database.)

>> Fbuild captures dependencies automatically, you not only don't have to
>> specify them .. you CANNOT specify them.
>
> Caution, you appear to be headed down the same path as (now what was the
> name again of Rational Software's kernel-incestuous over-the-top version
> control and build package?) You couldn't swat a fly in that system
> without first getting a doctoral thesis!

I can't swat a fly with "make" so I'm no worse off :)

> How do you let people specify unusual dependencies that aren't as simple
> as compile this-to-that?
In felix there is a directory called "buildsystem" which contains all the Felix specific rules:

~/felix>ls buildsystem/*.py
buildsystem/__init__.py buildsystem/flx_stdlib.py
buildsystem/bindings.py buildsystem/iscr.py
buildsystem/demux.py buildsystem/judy.py <<<-------------------------
buildsystem/dist.py buildsystem/mk_daemon.py
buildsystem/dypgen.py buildsystem/ocs.py
buildsystem/faio.py buildsystem/post_config.py
buildsystem/flx.py buildsystem/re2.py
buildsystem/flx_async.py buildsystem/sex.py
buildsystem/flx_compiler.py buildsystem/show_build_config.py
buildsystem/flx_drivers.py buildsystem/speed.py
buildsystem/flx_exceptions.py buildsystem/sqlite3.py
buildsystem/flx_gc.py buildsystem/timeout.py
buildsystem/flx_glob.py buildsystem/tools.py
buildsystem/flx_pthread.py buildsystem/tre.py
buildsystem/flx_rtl.py buildsystem/version.py

Each of these files contains some special rules for building something, part of Felix, or a third party library. I marked one of some interest .. :) Here it is:

########################
import fbuild
from fbuild.functools import call
from fbuild.path import Path
from fbuild.record import Record

import buildsystem

# ------------------------------------------------------------------------------

def build_runtime(phase):
    path = Path('src/judy')

    buildsystem.copy_hpps_to_rtl(phase.ctx, path / 'Judy.h')

    dst = 'lib/rtl/flx_judy'
    srcs = [
        path / 'JudyCommon/JudyMalloc.c',
        path / 'Judy1/JUDY1_Judy1ByCount.c',
        path / 'Judy1/JUDY1_Judy1Cascade.c',
        path / 'Judy1/JUDY1_Judy1Count.c',
        path / 'Judy1/JUDY1_Judy1CreateBranch.c',
        path / 'Judy1/JUDY1_Judy1Decascade.c',
        path / 'Judy1/JUDY1_Judy1First.c',
        path / 'Judy1/JUDY1_Judy1FreeArray.c',
        path / 'Judy1/JUDY1_Judy1InsertBranch.c',
        path / 'Judy1/JUDY1_Judy1MallocIF.c',
        path / 'Judy1/JUDY1_Judy1MemActive.c',
        path / 'Judy1/JUDY1_Judy1MemUsed.c',
        path / 'Judy1/JUDY1_Judy1SetArray.c',
        path / 'Judy1/JUDY1_Judy1Set.c',
        path / 'Judy1/JUDY1_Judy1Tables.c',
        path / 'Judy1/JUDY1_Judy1Unset.c',
        path / 'Judy1/JUDY1_Judy1Next.c',
        path / 'Judy1/JUDY1_Judy1NextEmpty.c',
        path / 'Judy1/JUDY1_Judy1Prev.c',
        path / 'Judy1/JUDY1_Judy1PrevEmpty.c',
        path / 'Judy1/JUDY1_Judy1Test.c',
        path / 'Judy1/JUDY1_j__udy1Test.c',
        path / 'JudyL/JUDYL_JudyLByCount.c',
        path / 'JudyL/JUDYL_JudyLCascade.c',
        path / 'JudyL/JUDYL_JudyLCount.c',
        path / 'JudyL/JUDYL_JudyLCreateBranch.c',
        path / 'JudyL/JUDYL_JudyLDecascade.c',
        path / 'JudyL/JUDYL_JudyLDel.c',
        path / 'JudyL/JUDYL_JudyLFirst.c',
        path / 'JudyL/JUDYL_JudyLFreeArray.c',
        path / 'JudyL/JUDYL_JudyLInsArray.c',
        path / 'JudyL/JUDYL_JudyLIns.c',
        path / 'JudyL/JUDYL_JudyLInsertBranch.c',
        path / 'JudyL/JUDYL_JudyLMemActive.c',
        path / 'JudyL/JUDYL_JudyLMemUsed.c',
        path / 'JudyL/JUDYL_JudyLMallocIF.c',
        path / 'JudyL/JUDYL_JudyLTables.c',
        path / 'JudyL/JUDYL_JudyLNext.c',
        path / 'JudyL/JUDYL_JudyLNextEmpty.c',
        path / 'JudyL/JUDYL_JudyLPrev.c',
        path / 'JudyL/JUDYL_JudyLPrevEmpty.c',
        path / 'JudyL/JUDYL_JudyLGet.c',
        path / 'JudyL/JUDYL_j__udyLGet.c',
        path / 'JudySL/JudySL.c',
        path / 'JudyHS/JudyHS.c',
    ]
    includes = [
        path,
        path / 'JudyCommon',
        path / 'Judy1',
        path / 'JudyL',
        path / 'JudySL',
        path / 'JudyHS',
    ]
    types = call('fbuild.builders.c.std.config_types', phase.ctx, phase.c.shared)
    macros = ['BUILD_JUDY']
    if types['void*']['size'] == 8:
        macros.append('JU_64BIT')
    else:
        macros.append('JU_32BIT')

    return Record(
        static=buildsystem.build_c_static_lib(phase, dst, srcs,
            includes=includes, macros=macros),
        shared=buildsystem.build_c_shared_lib(phase, dst, srcs,
            includes=includes, macros=macros))

def build_flx(phase):
    return buildsystem.copy_flxs_to_lib(phase.ctx, Path('src/judy/*.flx').glob())
###################

This is actually quite "make like" in that the source files are all specified. Notice, though, there's no mention of *.o files or *.a or *.so or whatever. It's hard to see, but the build parts and the whole thing return a "cache" of the process such that the system knows when to rebuild it.
I have to specify the inputs and parameters to the build process, but usually not any outputs.

The top level of the build system is not fbuild; it is a file, fbuildroot.py, in Felix. fbuild is just a LIBRARY of tools which help in specifying a build system. Erick is constantly adding new functionality to that to support more compilers etc. The most important part of that library is the bit which registers and caches functions, using Python marshalling features, Sqlite3 as the database, and RPC (remote procedure calls) in there somewhere as well. It's pretty complex. It works pretty well though!

--
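The function-caching scheme described in this thread — digest the input file, skip the action when the digest is unchanged — can be sketched in a few lines of Python. This is my own illustration of the idea, not fbuild's actual code:

```python
import hashlib

# (step name, digest of the input file) -> cached result
_cache = {}

def cached_step(name, src_path, action):
    # Key the call on a digest of the input's contents; rerun the action
    # only when the contents (and hence the digest) change.
    with open(src_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    key = (name, digest)
    if key not in _cache:
        _cache[key] = action(src_path)   # e.g. actually run the compiler here
    return _cache[key]
```

A real system would persist the cache (fbuild uses Sqlite3, as mentioned above), but the essential property is the same: repeated calls on unchanged inputs never rerun the action.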
https://sourceforge.net/p/judy/mailman/message/27025365/
Routing

"ASP.NET routing enables you to use URLs that do not have to map to specific files in a Web site, making more easily understood URLs that are descriptive of the page's contents."

Default implementations of Web Forms applications contain a request URL which specifies a physical .aspx file. The .aspx file contains the code for obtaining data and formatting a screen. However, default implementations of ASP.NET MVC applications use a routing engine to map URL patterns to action methods inside a controller. The action methods contain the logic to direct the process of obtaining the data and passing it to a view for display. Routing allows request URLs to contain meaningful information about the requested page. For example, a non-routed URL may appear as: while a routed URL may appear as: The meaningful information in a routed URL makes it easier for users to guess how to change the URL to get to the pages they desire. Also, search engines will rate a page higher when it contains meaningful information in the URL.

Route Collections

The routes a routing engine uses are stored in a RouteCollection data structure. Routes are added to the route collection in the RegisterRoutes method of the RouteConfig class. A default route is created when the MVC project is created. The default route can be modified, or additional routes can be created as desired. The routes.MapRoute method is used to define a route. The order in which the routes are defined in the RegisterRoutes method is significant: the first route which matches the URL pattern will direct the request to an action method, and any routes defined after the matched route will be ignored for that request. So the URL patterns have to be designed so that request URLs flow down the list of routes to find the appropriate action method to invoke.
Below are a custom route followed by the MVC default route defined in RouteConfig.cs.

Routes Defined in RouteConfig.cs

Defining Routes

To add custom routes, Web Forms applications use the MapPageRoute method of the RouteCollection class, while MVC applications use the MapRoute method. This article is only going to cover adding routes to MVC applications. For Web Forms, see this article: Adding Routes to a Web Forms Application. MapRoute is an extension method on the RouteCollection class. MapRoute has a number of overloads. The MapRoute overload with the most parameters is shown below.

MapRoute Extension Method

The parameters for MapRoute include:
- name - the name of the route to map.
- url - the URL pattern for the route.
- defaults - an object that contains default route values.
- constraints - a set of expressions that limits acceptable values for a URL. An example would be: new { id = @"\d+" } // Constraint to Only Allow Numeric IDs
- namespaces - used with Areas, to specify the area namespace(s) of the desired controller.

Note: Route constraints can also be added when the route is registered in the Global.asax, such as: reportRoute.Constraints = new RouteValueDictionary { { "locale", "[a-z]{2}-[a-z]{2}" }, { "year", @"\d{4}" } };

Phil Haack created a Route Debugger that can be installed through NuGet (PM> Install-Package routedebugger) which shows the defined routes in an application and the route which matches the entered URL.

Routing Path

Request Processing Using Routing

Requests first pass through the UrlRoutingModule HTTP module, which contains the logic for the routing engine. When it determines that a request URL matches a defined pattern, it passes the request to the MvcRouteHandler, which creates an MvcHandler (an IHttpHandler). The MvcHandler finds the controller component in the URL and passes it to IControllerFactory, which creates a controller that implements the IController interface.
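The RouteConfig.cs listing referred to earlier did not survive extraction; a typical shape for that file is sketched below (the custom "ProductsByCategory" route is my own illustration, not necessarily the one from the lost listing):

```csharp
public class RouteConfig
{
    public static void RegisterRoutes(RouteCollection routes)
    {
        routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

        // Custom route: defined first, so it is matched before the default.
        routes.MapRoute(
            name: "ProductsByCategory",
            url: "products/{category}",
            defaults: new { controller = "Products", action = "Category" });

        // The MVC default route created with the project.
        routes.MapRoute(
            name: "Default",
            url: "{controller}/{action}/{id}",
            defaults: new { controller = "Home", action = "Index", id = UrlParameter.Optional });
    }
}
```

Because matching stops at the first route whose pattern fits, the more specific route must appear above the catch-all default, exactly as the ordering rule above requires.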
MvcHandler delegates the processing of the URL request to a controller. MvcHandler is responsible for creating the controller (via IControllerFactory) and cleaning up after the controller has finished.

Custom Route Handlers

Custom route handlers can be created for specialized routing needs which cannot be handled by the default route handler. The following code is an example of a custom route handler.

CustomRouteHandler.cs

Global.asax.cs

Attribute Routing

In MVC 5 and Web API 2, a new type of routing was introduced called Attribute Routing. The earlier form of routing, called convention-based routing, is still supported, but the new Attribute Routing provides more control over the URIs in a web application. To enable attribute routing, use routes.MapMvcAttributeRoutes() for MVC or config.MapHttpAttributeRoutes() for Web API.

routes.MapMvcAttributeRoutes()

You can make a URI parameter optional by adding a question mark to the route parameter. You can also specify a default value by using the form parameter=value. You can set a common prefix for an entire controller by using the [RoutePrefix] attribute. Below is a list of the predefined route constraints. You can create custom route constraints by implementing the IRouteConstraint interface.

Defined Route Constraints
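A short sketch of the attribute-routing features just described (the controller and route templates are my own examples):

```csharp
[RoutePrefix("books")]
public class BooksController : Controller
{
    // GET /books — the empty template inherits the controller's prefix.
    [Route("")]
    public ActionResult Index()
    {
        return View();
    }

    // GET /books/5 or /books/5/2014 — an int constraint on id,
    // plus an optional year parameter marked with "?".
    [Route("{id:int}/{year?}")]
    public ActionResult Details(int id, int? year)
    {
        return View();
    }
}
```

The `{id:int}` segment plays the same role as the `new { id = @"\d+" }` constraint shown earlier for convention-based routes, but the constraint lives directly in the route template.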
http://www.kcshadow.net/aspnet/?q=routing
JDK 8 - Developer Preview Released JDK 8 - Developer Preview Released, Check the features of JDK , Download JDK 8 Developer preview release and share you comments with community Java... for the professionals with the newly to be launched JDK 8 in terms of features and innovation How to add JDK 8 support in Eclipse? with many new features and its high time to start using the JDK 8 for developing...Learn how to update Eclipse and add the JDK 8 support? This tutorial... of video tutorial provided here. You will learn how to download and install the JDK 8 Java 8 expected release date . There are many new features of Java 8 which makes it ideal platform for enterprise... more powerful and filled with many new features. The Java 8 is the part of Java... the multi-core cpus. The planned date for the release of JDK 8 ( Java development Oracle to release Java 8 in March 2014 Matching JDK 8 includes many other features and enhancements. You can learn... version of JDK. Oracle is planning to release the JDK 8 ( Java development Kit 8... candidate built might be released on January 23. The JDK 8 is the next release of Java Java 8: Java 8 is officially released and it can be downloaded for your client. Java 8 comes with many new features including Lambda expression... of the significant features with the latest launched JDK 8, it is a huge shift... of the newly launched JDK 8 are different from the obsolete imperative paradigm Java About Java 8 , streams, functional interfaces and others. Some of the features Java 8 might have... to see) in Java 8: Streams: Stream is a new collection that will make...Java 8 is slated to be released in March 2014 but the debate has already begun JEE 8 - Java Enterprise Edition 8 JEE 8 - Java Enterprise Edition 8 Tutorials, Examples and latest updates The JEE 7 was recently released with many new features and great support... for the JEE 8.
The next version of JEE 8 may include the following new things. Related tutorials and discussions:

- Lambda Expression in Java 8 — learn the power of lambda expressions; this tutorial covers lambda expressions, closures, and related features of JDK 8.
- Java EE 8 Takes Off — Java EE 8 builds on the new features introduced in Java SE 8, including repeating annotations and lambda expressions; web standards are among the important features of the next Java EE 8. Finally it is good news for Java developers as Java EE 8 takes off.
- Building and Running Java 8 Support — Java 8 is already available; how to try the latest version of JDK 8 in Eclipse for day-to-day project development.
- Java EE 8 Features Support — Java EE 8 will use the new features of Java SE 8, and developers will be able to use all these features with Java EE 8; the team is still working out the features to be added.
- Windows 8 Advantages and Disadvantages — is it good to upgrade from your current version of Windows to the latest Windows 8, or will it be difficult to adjust to all the new features? The Metro theme and layout is unique with a whole lot of cool new features.
- Java 8 Consumer Interface with forEach Loop — how to use the Java 8 Consumer interface and iterate through a class with a forEach loop.
- Windows 8, the New Operating System from Microsoft — the wait for the new operating system from Microsoft and its take on simplicity in computing; already well-known advantages include faster booting time.
- Benefits of Windows 8 for Business — how the new operating system can benefit businesses.
- How To Install Windows 8 — getting started with Microsoft's new operating system and its new looks and attractive features.
- 8 - Java Beginners — a question about a PIC18F4520 board for a heartbeat tester (fast/slow/moderate) that should activate a speaker when testing is done.
- Windows 8 versus Android — Windows 8 has launched at a time when smart phones and tablets thrive; comparing the potential success of the platforms.
- How Windows 8 will benefit Business? — features any professional would consider, such as the standard Windows To Go feature that gives Windows 8 a new mobile angle.
- Is Windows 8 better than Windows 7? — with the release of Windows 8, its comparison with Windows 7 is inevitable; most users are in a dilemma over whether or not to upgrade.
- jdk 1.5 features - Java Interview Questions — JDK 1.5 features with examples and source code, including typesafe enums and Swing updates.
- Features of Spring 4 — Spring 4 fully supports the new features of Java 8, including lambda expressions, and is the first major release of the framework since 2009 to embrace the latest JVM-based innovations.
- java bits 8 - Java Interview Questions — a question on a Money class and a Yen subclass ("Assume that country is set for each class").
- New Features of JAVA SE 6 — the new features in Java SE 6, including Web Services 2.0 with the Java SE 6 platform.
http://roseindia.net/discussion/50177-Learn-the-Java-8-and-master-the-new-features-of-JDK-8.html
Hi,

I am not sure whether I should start a new thread, but my issues are not yet solved and the subject line is still partly accurate, so I'll just continue. To get my ideas working, I tried both the options that I saw in my last post, but I am getting stuck in both. Let me start with the one, with which I progressed the most:

1. Don't use transhandler. Have a main.py as the PythonHandler for the www/root directory.

    <Directory /var/www/root>
        SetHandler mod_python
        PythonHandler main
    </Directory>

---- with main.py: ----

    def handler(req):
        extension = os.path.splitext(req.filename)[1]
        if extension == ".html" or extension == "":
            if not os.path.exists(req.filename):
                ... (handle further)
                return apache.OK
            else:
                if extension == '':
                    req.filename = req.filename + "/index.html"  # THIS LINE SEEMS NOT TO HAVE AN EFFECT
                    return apache.DECLINED
                else:
                    return apache.DECLINED

When a url is requested for an existent .html file, it works fine: apache serves the .html file itself. However if the url is only the name of a directory, I can't seem to give apache its normal –complete– behavior. (E.g. the url (assuming the directory foo/ exists) does not return a directory listing of the contents of that directory nor does it display if there is such a file.) The only solution I found was to put an .htaccess file in /foo/ saying "SetHandler none". But I was actually looking for a way in which I would not need these .htaccess files when there is just an index.html present. Is there any way I still can set the filename in this phase?

    </IfModule>

---- and from translate.py: ----

    def transhandler(req):
        ...
        req.filename = '/var/www/python/main.py'
        return apache.OK

The issue I run against is that now apache shows the code from main.py. It doesn't execute it, it just shows the code. Despite all the other directives I use (in httpd.conf or .htaccess).
Even when I add the line 'req.add_handler("PythonHandler", "main")' to the mix, it won't execute main.py, just displays its code. Any ideas how I can solve this?

Many thanks,
dirk

On 31-aug-05, at 00:49, Graham Dumpleton wrote:
> On 31/08/2005, at 5:44 AM, IR labs wrote:

-----------------------------
Dirk van Oosterbosch
dirk at ixopusada.com
-----------------------------
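For what it's worth, mod_python also exposes a fixup phase (PythonFixupHandler) that runs before Apache chooses a content handler, which is early enough for req.filename changes to still take effect. A minimal sketch of that approach (OK is a stand-in for apache.OK so the sketch runs outside Apache; whether it fits this exact setup is untested):

```python
import os

OK = 0  # stand-in for mod_python's apache.OK so this runs outside Apache


def fixuphandler(req):
    """Sketch of a PythonFixupHandler: rewrite req.filename for directory
    requests before Apache picks a content handler for the request."""
    if os.path.isdir(req.filename):
        req.filename = os.path.join(req.filename, "index.html")
    return OK
```

It would be registered with a PythonFixupHandler directive in the same way as the handlers above.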
http://modpython.org/pipermail/mod_python/2005-September/018949.html
Adding an index to your database is a cheap and easy way to drastically improve the performance of your application. However, adding a new index to a database table that's already big can be dangerous. Don't forget, index creation on a database table is a synchronous action that prevents INSERT, UPDATE, and DELETE operations until the full index is created. If the system is a live production database, this can have severe effects. Indexing very large tables can take many hours. For a system like Semaphore, even short periods are unacceptable. If this happens during deployment, we can potentially cause an unwanted downtime for the whole system.

Note: There might be a database vendor that doesn't lock the table by default. We are mostly familiar with PostgreSQL and MySQL. Both of them lock write access on your table while the index is being created.

Building Indexes Concurrently

PostgreSQL – our database of choice while developing Semaphore – has a handy option that enables us to build indexes concurrently without locking up our database. For example, let's build an index concurrently for branches on the build model:

    CREATE INDEX CONCURRENTLY idx_builds_branch
    ON builds
    USING btree (branch_id);

The main benefit of concurrent index creation is that it does not require a lock on the table to build the index tree so we can avoid the issue of accidental downtimes. Keep in mind that while concurrent index building is a safe option for your production system, the build itself takes up to several times longer to complete. The database must perform two scans of the table, and it must wait for all existing transactions that could modify or use the index to terminate. The concurrent index build also imposes extra CPU and I/O load that might slow down other database operations.

Concurrent Index Creation in Rails

In Rails migrations, you can use the algorithm option to trigger a concurrent index build on your database table.
For example, we recently noticed that we were missing a database index for accessing our build_metrics database table from our build models, which in a snowball effect slowed down job creation on Semaphore. Our build_metrics table is huge, counting many millions of elements, and it's also accessed very frequently. We could not risk introducing a migration that would lock this table and potentially block build processing on Semaphore. We used the safe route, and triggered a concurrent index build:

    def change
      add_index :builds, :build_metric_id, :algorithm => :concurrently
    end

However, we immediately learned that you can't run the above from inside of a transaction. Active Record creates a transaction around every migration step. To avoid this, we used the disable_ddl_transaction! introduced in Rails 4 to run this one migration without a transaction wrapper:

    class AddIndexToBuildMetricIdOnBuilds < ActiveRecord::Migration
      disable_ddl_transaction!

      def change
        add_index :builds, :build_metric_id, :algorithm => :concurrently
      end
    end

The results were phenomenal. With this simple little tweak, our job processing capabilities got around 2.5 times faster. Small tweaks can sometimes bring great improvements. Premature optimization can be a huge anti-pattern, however investing in metrics and gaining a deep understanding of your system never is. Keep building and tweaking!
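One caveat the post doesn't mention (this is documented PostgreSQL behavior, not part of Semaphore's write-up): if a concurrent build fails or is interrupted, it can leave behind an INVALID index that still slows down writes and must be dropped and recreated. Such leftovers can be listed with:

```sql
-- Indexes marked invalid, e.g. after a failed CREATE INDEX CONCURRENTLY
SELECT c.relname
FROM pg_class c
JOIN pg_index i ON c.oid = i.indexrelid
WHERE NOT i.indisvalid;
```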
https://semaphoreci.com/blog/2017/06/21/faster-rails-indexing-large-database-tables.html
Example: Count occurrences of the word "Error"

Log events frequently include important messages that you want to count, maybe about the success or failure of operations. For example, an error may occur and be recorded to a log file if a given operation fails. You may want to monitor these entries to understand the trend of your errors.

In the example below, a metric filter is created to monitor for the term Error. The policy has been created and added to the log group MyApp/message.log. CloudWatch Logs publishes a data point to the CloudWatch custom metric ErrorCount in the MyApp/message.log namespace with a value of "1" for every event containing Error. If no event contains the word Error, then no data points are published. When graphing this data in the CloudWatch console, be sure to use the sum statistic.

To create a metric filter using the CloudWatch console

In the Filter Pattern field, enter Error.

Note

All entries in the Filter Pattern field are case-sensitive.

To test your filter pattern, in the Select Log Data to Test list, select the log group you want to test the metric filter against, and then click Test Pattern. Under Results, CloudWatch Logs displays a message showing how many occurrences of the filter pattern were found in the log file.

Note

To see detailed results, click Show test results.

Click Assign Metric, and then on the Create Metric Filter and Assign a Metric screen, in the Filter Name field, enter MyAppErrorCount. Under Metric Details, in the Metric Namespace field, enter YourNameSpace. In the Metric Name field, enter ErrorCount, and then click Create Filter.

To create a metric filter using the AWS CLI

At a command prompt, type:

    % aws logs put-metric-filter \
      --log-group-name MyApp/message.log \
      --filter-name MyAppErrorCount \
      --filter-pattern 'Error' \
      --metric-transformations \
          metricName=EventCount,metricNamespace=YourNamespace,metricValue=1

You can test this new policy by posting events containing the word "Error" in the message.
To post events using the AWS CLI

At a command prompt, remove the backslashes (\) and type this all on one line:

    % aws logs put-log-events \
      --log-group-name MyApp/access.log --log-stream-name TestStream1 \
      --log-events \
      timestamp=1394793518000,message="This message contains an Error" \
      timestamp=1394793528000,message="This message also contains an Error"

Note

Patterns are case-sensitive.
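The metric filter can also be created from code rather than the CLI. As a sketch (my addition, assuming the boto3 CloudWatch Logs client; the helper name is made up for illustration), these are the arguments that mirror the CLI flags above:

```python
def metric_filter_args(log_group, filter_name, pattern, namespace, metric_name):
    """Build keyword arguments for put_metric_filter, mirroring the CLI call.

    Hypothetical helper; the key names follow the CloudWatch Logs API
    (logGroupName, filterName, filterPattern, metricTransformations).
    """
    return {
        "logGroupName": log_group,
        "filterName": filter_name,
        "filterPattern": pattern,
        "metricTransformations": [
            {
                "metricName": metric_name,
                "metricNamespace": namespace,
                "metricValue": "1",  # publish 1 per matching event
            }
        ],
    }


# With credentials configured, this would be passed straight through:
# import boto3
# boto3.client("logs").put_metric_filter(**metric_filter_args(
#     "MyApp/message.log", "MyAppErrorCount", "Error",
#     "YourNamespace", "EventCount"))
```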
http://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CountOccurrencesExample.html
Karsten Januszewski
Microsoft Corporation

December 2001

Summary: This article walks through using UDDI at run time and discusses how UDDI, both the public registry and UDDI Services available in Microsoft Windows Server 2003™, can act as infrastructure for Web services to support client applications. (9 printed pages)

Introduction
UDDI Runtime Infrastructure
Sample Scenario
Creating the Web Service: A C# .NET .asmx
Consuming the Web Service: A C# Windows Form .NET Client
Other Scenarios
Conclusion

Consider some of the concerns and issues at hand after a Web service has been integrated into a client application. A key issue is the inability to predict, detect, or recover from failures of the provider that hosts the Web service. What recourse does the client application have if that Web service fails? How can the application recover gracefully and dynamically from an unsuccessful Web service call? Similarly, from a Web service provider perspective, how can the owner of that Web service provide dynamic updates about changes? Consider the situation where the Web service is moved to a new server. How can the clients of that Web service be informed of this change in an efficient way? How can the owner dispense that information at run time, so that all of the clients of that Web service will not break? It is in scenarios such as these that UDDI can play an essential role in providing an infrastructure to support Web services at run time.
The pattern for such a convention would be as follows: From the perspective of a Web service provider, the provider needs to be cognizant that the UDDI entry for that Web service can be updated when appropriate. When the provider of a Web service needs to redirect traffic to a new location or backup system, the provider only need to activate the backup system and then change the access point in the UDDI registry. This approach is called retry on failure and provides for a mechanism for clients to recover from failures at run time. Let's take a look at a sample of how such a pattern might work. This sample scenario will consider the scenario of a fictional company that needs to provide real-time sales data to a department inside its organization. As such, this Web service is one not exposed publicly, but rather used inside the firewall. To begin, we will need a Web service. In this case, we will expose a very simple Web service that supports one method, GetSalesTotalByRange, which allows client to get a snapshot of real-time sales data based on a date range. Then, we will create a client that consumes this Web service. We will configure the client to cache the access point and bindingKey information, and set up a mechanism for the client to refresh its cache from a UDDI registry in the event of failure. The Microsoft .NET Framework makes writing Web services very easy. For this sample, we will create a simple Web service with a single method, GetSalesTotalByRange, which takes two dates as input parameters, and returns a double. 
Below is an .asmx page, SalesReport.asmx, that achieves this goal:

    <%@ WebService Language="c#" Class="SalesReportUSA.SalesReport" %>

    using System;
    using System.Web.Services;

    namespace SalesReportUSA
    {
        [WebService(Namespace="urn:myCompany-com:SalesReport-Interface")]
        public class SalesReport : System.Web.Services.WebService
        {
            [WebMethod]
            public double GetSalesTotalByRange ( System.DateTime startDate, System.DateTime endDate )
            {
                return 5000.00;
            }
        }
    }

This page should be added to a virtual directory. For the client sample to work, create a virtual directory entitled SalesReportUSA (). Note that this Web service is always returning 5000.00 as its return value. (If only sales reports could be so predictable!) A real-world application would of course make a database call to retrieve this information. For the purposes of this sample, a hard-coded value is sufficient.

Upon deploying this Web service, the next step would be to register it in a UDDI registry. This UDDI registry would be an internal UDDI Server, as it would not make sense to expose this Web service to the public. Microsoft provides UDDI Services natively with Microsoft® Windows® Server 2003. (See the Windows Server 2003 Web site for more about this feature.) If you do not have Microsoft Windows Server 2003, you can also use the Microsoft UDDI Software Development Kit (SDK) to install UDDI on a local machine.

To register a Web service in UDDI, there are two options: You can register using a Web user interface or you can register the Web service programmatically using the UDDI SDK. Using the SDK is convenient—you can see the code sample published in the Web Service Description and Discovery Using UDDI column. Using either method, you first register the WSDL file for your Web service as a tModel. UDDI tModels are XML entities used to represent interfaces and abstract meta-data; thus, WSDL files are signified as tModels. Then, you would register the access point for that Web service as a bindingTemplate.
UDDI bindingTemplates are XML structures used to represent implementation details about a given Web service. (For more about the UDDI Schema and its relationship to WSDL, see both and a UDDI "best practices" document, Using WSDL in a UDDI Registry, Version 1.07.) Below is a sample of the resulting UDDI bindingTemplate structure when we completed these steps using UDDI Services. Note that the serviceKey, bindingKey and tModelKey were all generated by UDDI and are unique to the entities we saved. The keys generated by other UDDI registries will be different.

    <bindingTemplate serviceKey="ef25102d-2171-454c-ade9-3dd7a4a914ee"
                     bindingKey="f46fced9-2b8a-4817-b957-f8d8aca0a2f9">
        <accessPoint URLType="http">
        </accessPoint>
        <tModelInstanceDetails>
            <tModelInstanceInfo tModelKey="uuid:b28fe40a-ea62-4657-88d5-752d8a6cdf77" />
        </tModelInstanceDetails>
    </bindingTemplate>

In this structure, we have highlighted the accessPoint and bindingKey for this Web service. It will be important for our client to have knowledge of these two pieces of information. Also, if a client needed to obtain the WSDL for this Web service, the client could use the tModelKey to query UDDI for that tModel.

At this point, we can switch roles, and take a look at the client half of the application. At design time, we would presumably discover this Web service in UDDI. We would download the corresponding WSDL file and generate a proxy class either using Add Web Reference from Microsoft Visual Studio® .NET or WSDL.exe. (WSDL.exe is a command line tool that is part of the Microsoft .NET Framework SDK.) We can proceed with writing the logic in the client application. In this case, it will be a C# Windows Form application called SalesReportClient.exe that allows users to query for Sales Report information.

First, we will need to add the UDDI .NET SDK classes to our project, which are available for download. (Microsoft UDDI SDK version 1.5.2 is compatible with Visual Studio .NET Beta 2.
Microsoft UDDI .NET SDK Beta version 1.75 is compatible with the Visual Studio .NET Release Candidate.) The using declarations should be as follows:

    using System;
    using System.Drawing;
    using System.Collections;
    using System.ComponentModel;
    using System.Configuration;
    using System.Windows.Forms;
    using System.Data;
    using Microsoft.Uddi;
    using Microsoft.Uddi.Binding;

Then, we need to store the access point for the UDDI server where we found this Web service—after all, UDDI is a Web service itself. To do this, we create an application configuration file for this .exe, where we store the location of the UDDI Server. We will also store the bindingKey of the Web service in this configuration file. The use of XML configuration files in .NET allows us to add any number of appSettings, which is then available through a collection for our application.

    <?xml version="1.0" encoding="utf-8" ?>
    <configuration>
      <appSettings>
        <add key="UDDI_URL" value="" />
        <add key="bindingKey" value="f46fced9-2b8a-4817-b957-f8d8aca0a2f9" />
      </appSettings>
    </configuration>

In this case we are pointing to a Microsoft UDDI Developer Edition server hosted on our own machine. The UDDI_URL could also be one of the public UDDI nodes, or a UDDI registry hosted somewhere internally in an organization. We save this file using the naming convention for configuration files: app.config. When our application is compiled, the configuration file will be placed in the /bin directory and be named after the name of the .exe itself.

This step we just completed—adding configuration information about our Web services—is not unlike how Visual Studio .NET exposes the URL Behavior property on each Web Reference that is added to a project. By changing that property to dynamic, Visual Studio .NET will create a config file with the access point of the Web service. What we have done above is to extend that concept further by providing the ability to re-query UDDI at run time.
Consequently, our config file contains the access point of the UDDI node and the bindingKey of the Web service. Now we can begin coding the application itself. First, we create a text box, a label, a button, and two datetime pickers. Then we establish some global variables:

    //some variables for the application
    private string InquiryURL = null;
    private string bindingKey = null;
    private string accessPoint = null;
    private BindingTemplate bt;
    private double salesFigure = 0;

Then, when the form is instantiated, we initialize these variables:

    public Form1()
    {
        //
        // Required for Windows Form Designer support
        //
        InitializeComponent();

        //populate variables from config file
        InquiryURL = ConfigurationSettings.AppSettings["UDDI_URL"];
        bindingKey = ConfigurationSettings.AppSettings["bindingKey"];

        bool InitCache = RefreshCacheFromUDDI();
        if ( InitCache == true )
            accessPoint = bt.AccessPoint.Text;
    }

The RefreshCacheFromUDDI() function is used to query the UDDI Server to find the access point. We are using the UDDI SDK to perform a UDDI API call, GetBindingDetail, passing the bindingKey as a parameter.

    private bool RefreshCacheFromUDDI()
    {
        //using the UDDI SDK, set the UDDI access point
        Inquire.Url = InquiryURL;

        //create a get_bindingDetail UDDI API message
        GetBindingDetail gbd = new GetBindingDetail();

        //add the bindingKey
        gbd.BindingKeys.Add( bindingKey );

        try
        {
            BindingDetail bd = gbd.Send();

            //if we are successful, update our bindingTemplate object
            //with the first template in the returned collection
            bt = bd.BindingTemplates[0];
            return true;
        }
        catch (Exception err)
        {
            textBox1.Text += err.Message;
            return false;
        }
    }
We then create a function for invoking the Web service itself:

    private bool InvokeWebService()
    {
        localhost.SalesReport sr = new localhost.SalesReport();

        //set the access point for the proxy class
        sr.Url = accessPoint;

        try
        {
            salesFigure = sr.GetSalesTotalByRange( dateTimePicker1.Value, dateTimePicker2.Value );
            label1.Text = "Sales Figure For Dates Selected: $" + salesFigure.ToString();
            textBox1.Text += "Web Service invocation successful!";
            return true;
        }
        catch (Exception err)
        {
            textBox1.Text += err.Message;
            return false;
        }
    }

Finally, when the user clicks the button, the application will try to invoke the Web service.

    private void button1_Click(object sender, System.EventArgs e)
    {
        Cursor.Current = Cursors.WaitCursor;

        //try to invoke Web Service
        bool WebServiceSuccess = InvokeWebService();

        //if it fails for some reason, query UDDI
        if ( WebServiceSuccess == false )
        {
            textBox1.Text += "Web Service failed. Requerying UDDI for new accesspoint.\n\n";
            bool UDDISuccess = RefreshCacheFromUDDI();

            //we were successful requerying UDDI,
            if ( UDDISuccess == true )
            {
                //compare the accessPoint with the new accessPoint
                //to determine if it changed
                if ( accessPoint.Equals( bt.AccessPoint.Text ) == false)
                {
                    //because the accessPoint is different, it must be new
                    //we reset our variable
                    accessPoint = bt.AccessPoint.Text;

                    //and attempt to invoke the web service again
                    WebServiceSuccess = InvokeWebService();

                    //we aren't able to invoke the Web Service with the new info
                    if ( WebServiceSuccess == false )
                    {
                        textBox1.Text += "Web Service failed again. Updated accesspoint from UDDI didn't help!.\n\n";
                    }
                }
                else
                {
                    textBox1.Text += "No new information was provided from UDDI.\n\n";
                }
            }
            else
            {
                textBox1.Text += "UDDI refresh failed.\n\n";
            }
        }
    }
Because all proxy classes derive from System.Web.Services.Protocols.SoapHttpClientProtocol, a series of properties are exposed by our proxy class, one of which is the .Url property. Setting this property allows us to specify the access point at run time. We then send our SOAP request over the wire. If we don't catch an exception, all is well and the data returned from the Web service is displayed in the form. However, if we do encounter an exception, this function will return false, and our calling code will try to refresh the access point from UDDI, reusing the RefreshCacheFromUDDI() function.

Once we have re-queried UDDI, we will compare the accessPoint returned from UDDI with our old access point. If the access point is identical, then the provider has not updated UDDI with any new information and there is nothing we can do, other than try to contact the provider of the Web service to inform them that the Web service is not responding. However if the access point retrieved from UDDI is different, we can attempt to invoke the Web service again.

To simulate failure, change the name of the Web service to something different. Try running the application. Then, update the UDDI entry with the new name for the Web service. Run the application again. The application will discover the new access point in UDDI, successfully query the new service, and save this information. If you close the application entirely and reopen it, you should be able to invoke the Web service again on the first attempt.

This retry on failure sample is just one area where UDDI can serve as supporting infrastructure at run time for a Web service client. In future columns, we will take a look at other scenarios, including:

Additionally, we will look at optimizing WSDL files so as to make them act truly as interface description files. UDDI provides important run-time functionality that can be integrated into applications so as to create more robust, dynamic clients.
By using UDDI as infrastructure in a Web services architecture, applications can be written to be more reliable.
http://msdn.microsoft.com/en-us/library/ms953944.aspx
---
 qemu-deprecated.texi | 7 +++++++
 qemu-options.hx      | 4 ++++
 vl.c                 | 4 ++++
 3 files changed, 15 insertions(+)

diff --git a/qemu-deprecated.texi b/qemu-deprecated.texi
index 5d2d7a3..cb4291f 100644
--- a/qemu-deprecated.texi
+++ b/qemu-deprecated.texi
@@ -128,6 +128,13 @@ The @option{[hub_id name]} parameter tuple of the 'hostfwd_add' and
 The ``ivshmem'' device type is replaced by either the ``ivshmem-plain''
 or ``ivshmem-doorbell`` device types.
 
+@subsection bluetooth (since 3.1)
+
+The bluetooth subsystem is unmaintained since many years and likely bitrotten
+quite a bit. It will be removed without replacement unless some users speaks
+up at the @email{qemu-devel@@nongnu.org} mailing list with information about
+their usecases.
+
 @section System emulator machines
 
 @subsection pc-0.10 and pc-0.11 (since 3.0)
diff --git a/qemu-options.hx b/qemu-options.hx
index 38c7a97..ee379b3 100644
--- a/qemu-options.hx
+++ b/qemu-options.hx
@@ -2772,6 +2772,10 @@ logic. The Transport Layer is decided by the machine type. Currently the
 machines @code{n800} and @code{n810} have one HCI and all other machines
 have none.
 
+Note: This option and the whole bluetooth subsystem is considered as deprecated.
+If you still use it, please send a mail to @email{qemu-devel@@nongnu.org} where
+you describe your usecase.
+
 @anchor{bt-hcis}
 The following three types are recognized:
diff --git a/vl.c b/vl.c
index 55bab00..fa25d1a 100644
--- a/vl.c
+++ b/vl.c
@@ -3269,6 +3269,10 @@ int main(int argc, char **argv, char **envp)
                 break;
 #endif
             case QEMU_OPTION_bt:
+                warn_report("The bluetooth subsystem is deprecated and will "
+                            "be removed soon. If the bluetooth subsystem is "
+                            "still useful for you, please send a mail to "
+                            "qemu-devel@nongnu.org with your usecase.");
                 add_device_config(DEV_BT, optarg);
                 break;
             case QEMU_OPTION_audio_help:
-- 
1.8.3.1
https://www.redhat.com/archives/libvir-list/2018-November/msg00387.html
    #include <avr/io.h>

    //5 on = 0x5F
    //4 on = 0x6F
    //3 on = 0x77
    //2 on = 0x7B
    //1 on = 0x7D
    //0 on = 0x7E

    int del = 2500; // create a variable to be used as the delay
    int rate = 100; // create a variable to be used as the rate of change

    //=================================================================main
    int main(void)
    {
        DDRC = 0xFF; // set all C ports as outputs
        //PORT ID 0b76543210 - port labels in the binary set

        while(1) // Basically saying "while true" and since 1 is always true, this will loop forever. Using "while(1)" is terrible programming etiquette. Don't do it.
        {
            while(del>=0) // This while loop will increase the speed at which the lights run by decreasing the delay time
            {
                int x;
                for(x=0;x<=5;x++)
                {
                    PORT_HiLo(x);
                    delay_cycles(del);
                }
                for(x=5;x>=0;x--)
                {
                    PORT_HiLo(x);
                    delay_cycles(del);
                }
                del -= rate;
            }
            while(del<=2500) // This while loop will decrease the speed at which the lights run by increasing the delay time
            {
                int x;
                for(x=0;x<=5;x++)
                {
                    PORT_HiLo(x);
                    delay_cycles(del);
                }
                for(x=5;x>=0;x--)
                {
                    PORT_HiLo(x);
                    delay_cycles(del);
                }
                del += rate;
            }
        }
    }

    //=================================================================delay_cycles
    int delay_cycles(int x)
    {
        while (x>0)
        {
            x--;
        }
    }

    //=================================================================PORT_HiLo
    int PORT_HiLo(int x)
    {
        switch (x)
        {
            case 6:
                PORTC = 0x00; // Put all C ports low - Turns all LEDs on
                break;
            case 5:
                PORTC = 0x5F; // Put port C5 low - Turn top LED on
                break;
            case 4:
                PORTC = 0x6F; // Put port C4 low - Turn 2nd from top LED on
                break;
            case 3:
                PORTC = 0x77; // Put port C3 low - Turn 3rd from top LED on
                break;
            case 2:
                PORTC = 0x7B; // Put port C2 low - Turn 4th from top LED on
                break;
            case 1:
                PORTC = 0x7D; // Put port C1 low - Turn 5th from top LED on
                break;
            case 0:
                PORTC = 0x7E; // Put port C0 low - Turn bottom LED on
                break;
            default:
                PORTC = 0xFF; // Put all C ports high - Turns all LEDs off
                break;
        }
    }

Only 1 LED is ever on at one time, so I figured if I power them all in parallel from the rail I could just use a single resistor to supply that rail to limit the current and drop the voltage to what I need for all the LEDs. I'm going to be staying at Jaimie Mantzel's place in Vermont for like a week so I won't have internet access for the duration. I plan on leaving in like 8 hours or so....
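As a side note (mine, not the original poster's): since every case in PORT_HiLo just pulls one active-low pin down while holding the others high, the whole switch collapses into a bit mask. A hypothetical table-free version that reproduces the constants above:

```c
#include <stdint.h>

/* Reproduces PORT_HiLo's switch: LED x (0-5) is active-low on pin Cx,
   case 6 turns every LED on, anything else turns them all off.
   Bit 7 is kept low for x = 0..5 to match the original 0x5F..0x7E values. */
uint8_t led_mask(int x) {
    if (x == 6)
        return 0x00;                      /* all LEDs on  */
    if (x < 0 || x > 5)
        return 0xFF;                      /* all LEDs off */
    return (uint8_t)(0x7F & ~(1u << x));  /* only LED x on */
}
```

Each case body then becomes just `PORTC = led_mask(x);`.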
http://www.societyofrobots.com/robotforum/index.php?topic=6286.0
#include <messagefilter.h>

Virtual base class for message filters.

Definition at line 38 of file messagefilter.h.

Constructor.

Definition at line 20 of file messagefilter.cpp.

Virtual Destructor.

Definition at line 27 of file messagefilter.cpp.

Attaches this MessageFilter to the given MessageSession and hooks it into the session's filter chain. If this filter was attached to a different MessageSession before, it is unregistered there prior to registering it with the new session.

Definition at line 31 of file messagefilter.cpp.

This function receives a message right before it is sent out (there may be other filters which get to see the message after this filter, though).

Implemented in InBandBytestream, MessageEventFilter, and ChatStateFilter.

This function receives a message stanza right after it was received (there may be other filters which got to see the stanza before this filter, though).

Implemented in InBandBytestream, MessageEventFilter, and ChatStateFilter.
https://camaya.net/api/gloox-0.9.9.12/classgloox_1_1MessageFilter.html
I was in the computer lab at my school today...and being the nerd that I am, I noticed Dev C++, and opened it up. Clicked through to create a new source file. :eek: I was shocked. :eek: Warning, some may have difficulty with the next line of code. Viewer Discretion advised. I present to you the first line in this nearly blank source file:

Code:
#include <iostream.h>

I kid you not, this is what could-be programmers are being taught at my high school.

I was horrified!!! (Thankfully I'm not doing any programming at the school! That was a close call!)
http://cboard.cprogramming.com/brief-history-cprogramming-com/60531-horror-printable-thread.html
So today I experimented with a new way of learning – I wanted to understand what happens when I run a "hello world" program, but I wasn't at Hacker School. So I wrote down my current understanding and a bunch of questions and asked Twitter!

if anyone has too much time and operating system knowledge, I'd love comments and "well, actually"s on — Julia Evans (@b0rk) November 28, 2013

People left me tons of helpful comments in the gist, which made me really happy. I'm not going to reprise all of the discussion here, but here's an incomplete summary of what needs to happen when a kernel runs an executable. If you're interested, definitely check out the gist.

The question: If I were an OS, what would I need to do to run "Hello, world?"

The original program was

    #include <stdio.h>

    int main() {
        printf("Hello!\n");
    }

and I statically compiled it by running gcc -static -o hello hello.c. So we don't have to worry about dynamic linking or anything. (I very much enjoyed this guide to linkers, tangentially)

Step 0: Simplify the program a bit

The first suggestion I got was to make it a bit easier by using write() instead of printf(). Running strace ./hello tells me all the system calls that happen, including the write() system call:

    write(1, "Hello world!\n", 13)

So we can simplify this program down to

    int main() {
        write(1, "Hello world!\n", 13);
    }

which removes the #include and some of the system calls. printf() is a pretty complicated function, so it's better to not use it. Now we can get down to the actual business of describing what happens when the program executes! These are not in any particular order.

Load the code ("text") into memory

In the binary there are a bunch of assembler instructions. These need to be loaded into memory.

Load the data segment into memory

A program might also have initialized and uninitialized global variables. These need a place in memory.

Set up the heap and stack

Programs need a heap and a stack.
Once these three things are done, we have the program's "address space" in memory. This looks something like this (thanks to @danellis for the diagram!):

    +---------------+
    |     Stack     |
    |       |       |
    |       v       |
    +---------------+
    :               :
    +---------------+
    |       ^       |
    |       |       |
    |     Heap      |
    +---------------+
    |     Data      |
    +---------------+
    |     Code      |
    +---------------+

I'm still really not sure about the details of what this setup looks like – people talk a lot about virtual memory, and I don't know how I would implement that at all, or whether I would have to implement it.

Handle system calls

User space programs interact with the kernel through "system calls". If I run strace -o hello.out ./hello, I get this list of all the system calls that happen when running ./hello:

    execve("./hello2", ["./hello2"], [/* 59 vars */]) = 0
    uname({sys="Linux", node="kiwi", ...})  = 0
    brk(0)                                  = 0xca9000
    brk(0xcaa1c0)                           = 0xcaa1c0
    arch_prctl(ARCH_SET_FS, 0xca9880)       = 0
    brk(0xccb1c0)                           = 0xccb1c0
    brk(0xccc000)                           = 0xccc000
    write(1, "Hello world!\n", 13)          = 13
    exit_group(13)                          = ?

I don't think I have to worry about the first two system calls, since the first one is definitely called by my shell. The brk system call is about moving the "program break" to allocate memory. I'm not totally sure why it needs to allocate memory, but it does.

The write system call I definitely feel like I could handle – I found an example on the OSDev wiki of how to write to a VGA buffer, so that could work.

I'm guessing exit_group is about quitting the program, so I'd have to do some cleanup or something. I have no idea what arch_prctl is.

I'm hoping to actually do some of this in the coming week at Hacker School. I've been pointed to the OSDev wiki, which has all kinds of fantastic explanations and tutorials.
The Apache WebServices Commons Project has released AXIOM 1.0. Near as I can tell this is yet another tree model like DOM, JDOM, or XOM. However, it's built from StAX rather than SAX. Most importantly, Axiom can build the object tree on demand, so you don't spend memory on nodes you don't want. That sounds good, but it's been tried before (notably in Xerces's deferred DOM) and the results have not been impressive. Maybe these folks have figured out a more practical way to do this, though. The underlying push-pull parser distinction may be important for this.

Also of note is the support for XML Optimized Packaging (XOP) and MTOM. The Axiom announcement gets this exactly backwards, though. XOP and MTOM do not allow "XML to carry binary data efficiently and in a transparent manner." Instead they allow both XML and binary data to be bundled together in the same non-XML file. Understanding the distinction is critical for proper use of these technologies.

The Axiom API itself is too complex. For example, here's a chunk of code from the tutorial:

    OMFactory factory = OMAbstractFactory.getOMFactory();
    OMNamespace ns1 = factory.createOMNamespace("bar", "x");
    OMElement root = factory.createOMElement("root", ns1);
    OMNamespace ns2 = root.declareNamespace("bar1", "y");
    OMElement elt1 = factory.createOMElement("foo", ns1);
    OMElement elt2 = factory.createOMElement("yuck", ns2);
    OMText txt1 = factory.createOMText(elt2, "blah");
    elt2.addChild(txt1);
    elt1.addChild(elt2);
    root.addChild(elt1);

And here's the equivalent in XOM for comparison:

    Element root = new Element("x:root", "bar");
    Element elt1 = new Element("x:foo", "bar");
    Element elt2 = new Element("y:yuck", "bar1");
    Text txt1 = new Text("blah");
    elt2.appendChild(txt1);
    elt1.appendChild(elt2);
    root.appendChild(elt1);

Of course, XOM would notice that the requested elements use relative namespace URIs, and thus that the document containing them does not have a valid Infoset.
For all the talk about Infosets on the Axiom pages, you'd hope somebody would have noticed this. Their examples also demonstrate a lack of correct white space handling, and some serious mistakes with encoding detection. I haven't tried to write code with this API yet, so I can't tell if the problems are in the library itself or just the tutorial. Either way, it's disturbing.

Folks: if you're going to write yet another XML API, please, please ask for early review from people who have been through this before. The reason the mistakes in Axiom jump out at me is that I've seen them all dozens of times before. XML is not as simple a spec as it seems at first glance. There are a lot of tricky areas that trip up the unwary.

There are some interesting new ideas here that should be explored further. However, as a library it's clearly unsuitable for production use.