Read entire file You are encouraged to solve this task according to the task description, using any language you may know. - Task Load the entire contents of some text file as a single string variable. If applicable, discuss: encoding selection, the possibility of memory-mapping. Of course, in practice one should avoid reading an entire file at once if the file is large and the task can be accomplished incrementally instead (in which case check File IO); this is for those cases where having the entire file is actually what is wanted. 8th[edit] The "slurp" word will read the entire contents 
of the file into memory, as-is, and give a "buffer". The ">s" converts that to a string, again "as-is" "somefile.txt" f:slurp >s Ada[edit] Ada.Direct_IO[edit] Using Ada.Directories to first ask for the file size and then Ada.Direct_IO to read the whole file in one chunk: with Ada.Directories, Ada.Direct_IO, Ada.Text_IO; procedure Whole_File is File_Name : String := "whole_file.adb"; File_Size : Natural := Natural (Ada.Directories.Size (File_Name)); subtype File_String is String (1 .. File_Size); package File_String_IO is new Ada.Direct_IO (File_String); File : File_String_IO.File_Type; Contents : File_String; begin File_String_IO.Open (File, Mode => File_String_IO.In_File, Name => File_Name); File_String_IO.Read (File, Item => Contents); File_String_IO.Close (File); Ada.Text_IO.Put (Contents); end Whole_File; This kind of solution is limited a bit by the fact that the GNAT implementation of Ada.Direct_IO first allocates a copy of the read object on the stack inside Ada.Direct_IO.Read. On Linux you can use the command " limit stacksize 1024M" to increase the available stack for your processes to 1Gb, which gives your program more freedom to use the stack for allocating objects. POSIX.Memory_Mapping[edit] Mapping the whole file into the address space of your process and then overlaying the file with a String object. with Ada.Text_IO, POSIX.IO, POSIX.Memory_Mapping, System.Storage_Elements; procedure Read_Entire_File is use POSIX, POSIX.IO, POSIX.Memory_Mapping; use System.Storage_Elements; Text_File : File_Descriptor; Text_Size : System.Storage_Elements.Storage_Offset; Text_Address : System.Address; begin Text_File := Open (Name => "read_entire_file.adb", Mode => Read_Only); Text_Size := Storage_Offset (File_Size (Text_File)); Text_Address := Map_Memory (Length => Text_Size, Protection => Allow_Read, Mapping => Map_Shared, File => Text_File, Offset => 0); declare Text : String (1 .. 
Natural (Text_Size)); for Text'Address use Text_Address; begin Ada.Text_IO.Put (Text); end; Unmap_Memory (First => Text_Address, Length => Text_Size); Close (File => Text_File); end Read_Entire_File; Character encodings and their handling are not really specified in Ada. What Ada does specify is three different character types (and corresponding string types): - Character - containing the set of ISO-8859-1 characters. - Wide_Character - containing the set of ISO-10646 BMP characters. - Wide_Wide_Character - containing the full set of ISO-10646 characters. The GNU Ada compiler (GNAT) seems to read in text files as bytes, completely ignoring any operating system information on character encoding. You can use -gnatW8 in Ada 2005 mode to use UTF-8 characters in identifier names. AutoHotkey[edit] fileread, varname, C:\filename.txt ; adding "MsgBox %varname%" (no quotes) to the next line will display the file contents. This script works fine as-is provided C:\filename.txt exists. AutoIt[edit] $fileOpen = FileOpen("file.txt") $fileRead = FileRead($fileOpen) FileClose($fileOpen) ALGOL 68[edit] In official ALGOL 68 a file is composed of pages, lines and characters; however, ALGOL 68 Genie and ELLA ALGOL 68RS do not support this concept, as they adopt the Unix view of files as "flat", containing only characters. The book can contain new pages and new lines, which are not of any particular character set and hence are system independent. The character set is set by a call to make conv, e.g. make conv(tape, ebcdic conv); - c.f. Character_codes for more details. In official/standard ALGOL 68 only: MODE BOOK = FLEX[0]FLEX[0]FLEX[0]CHAR; ¢ pages of lines of characters ¢ BOOK book; FILE book file; INT errno = open(book file, "book.txt", stand in channel); get(book file, book) Once a "book" has been read into a book array it can still be associated with a virtual file and again be accessed with standard file routines (such as readf, printf, putf, getf, new line etc). 
This means data can be directly manipulated from an array cached in "core" using transput (stdio) routines. In official/standard ALGOL 68 only: FILE cached book file; associate(cached book file, book) AppleScript[edit] set pathToTextFile to ((path to desktop folder as string) & "testfile.txt") -- short way: open, read and close in one step set fileContent to read file pathToTextFile -- long way: open a file reference, read content and close access set fileRef to open for access pathToTextFile set fileContent to read fileRef close access fileRef AWK[edit] #!/usr/bin/awk -f BEGIN { ## empty record separator RS=""; ## read line (i.e. whole file) into $0 getline; ## print line number and content of line print "=== line "NR,":",$0; } { ## any further records are read and printed here print "=== line "NR,":",$0; } #!/usr/bin/awk -f @include "readfile" BEGIN { str = readfile("file.txt") print str } BASIC[edit] Whether or not various encodings are supported is implementation-specific. DIM f AS STRING OPEN "file.txt" FOR BINARY AS 1 f = SPACE$(LOF(1)) GET #1, 1, f CLOSE 1 BBC BASIC[edit] In BBC BASIC for Windows and Brandy BASIC the maximum string length is 65535 characters. file% = OPENIN("input.txt") strvar$ = "" WHILE NOT EOF#file% strvar$ += CHR$(BGET#file%) ENDWHILE CLOSE #file% API version: file% = OPENIN("input.txt") strvar$ = STRING$(EXT#file%, " ") SYS "ReadFile", @hfile%(file%), !^strvar$, EXT#file%, ^temp%, 0 CLOSE #file% Bracmat[edit] get'(filename,STR):?myString Brainf***[edit] While the language certainly doesn't support strings in the traditional sense, relaxing the definition to mean any contiguous sequence of null-terminated bytes permits a reasonable facsimile. This cat program eschews the simpler byte-by-byte approach (,[.,]) to demonstrate the technique. 
> Keep cell 0 at 0 as a sentinel value ,[>,] Read into successive cells until EOF <[<] Go all the way back to the beginning >[.>] Print successive cells while nonzero - Output: $ curl -Ls rosettacode.org | bf ">,[>,]<[<]>[.>]" <!DOCTYPE html> ... </html> Tape: [0, 60, 33, 68, 79, 67, 84, 89, 80, 69, 32, 104, 116, 109, 108, 62, 10 ... 60, 47, 104, 116, 109, 108, 62, 10, 0] Brat[edit] include :file file.read file_name C[edit] It is not possible to specify encodings: the file is read as binary data (on some systems, the b flag is ignored and there's no difference between "r" and "rb"; on others, it changes the way "new lines" are treated, but this should not affect fread). #include <stdio.h> #include <stdlib.h> int main() { char *buffer; FILE *fh = fopen("readentirefile.c", "rb"); if ( fh != NULL ) { fseek(fh, 0L, SEEK_END); long s = ftell(fh); rewind(fh); buffer = malloc(s); if ( buffer != NULL ) { fread(buffer, s, 1, fh); // we can now close the file fclose(fh); fh = NULL; // do something, e.g. fwrite(buffer, s, 1, stdout); free(buffer); } if (fh != NULL) fclose(fh); } return EXIT_SUCCESS; } Memory map[edit] We can memory-map the file. #include <stdio.h> #include <stdlib.h> #include <sys/mman.h> #include <sys/types.h> #include <sys/stat.h> #include <unistd.h> #include <fcntl.h> int main() { char *buffer; struct stat s; int fd = open("readentirefile_mm.c", O_RDONLY); if (fd < 0 ) return EXIT_FAILURE; fstat(fd, &s); /* PROT_READ disallows writing to buffer: will segv */ buffer = mmap(0, s.st_size, PROT_READ, MAP_PRIVATE, fd, 0); if ( buffer != (void*)-1 ) { /* do something */ fwrite(buffer, s.st_size, 1, stdout); munmap(buffer, s.st_size); } close(fd); return EXIT_SUCCESS; } Memory map on Windows. See MSDN, starting with File Mapping. In practice, it would be necessary to check for errors, and to take care of large files. Also, this example is using a view on the whole file, but it's possible to create a smaller view. 
#include <windows.h> #include <stdio.h> int main() { HANDLE hFile, hMap; DWORD filesize; char *p; hFile = CreateFile("mmap_win.c", GENERIC_READ, 0, NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL); filesize = GetFileSize(hFile, NULL); hMap = CreateFileMapping(hFile, NULL, PAGE_READONLY, 0, 0, NULL); p = MapViewOfFile(hMap, FILE_MAP_READ, 0, 0, 0); fwrite(p, filesize, 1, stdout); CloseHandle(hMap); CloseHandle(hFile); return 0; } C++[edit] #include <iostream> #include <fstream> #include <string> #include <iterator> int main( ) { if (std::ifstream infile{"sample.txt"}) { // construct string from iterator range; the extra parentheses avoid the "most vexing parse" std::string fileData((std::istreambuf_iterator<char>(infile)), std::istreambuf_iterator<char>()); std::cout << "File has " << fileData.size() << " chars\n"; // don't need to manually close the ifstream; it will release the file when it goes out of scope return 0; } else { std::cout << "file not found!\n"; return 1; } } C#[edit] using System.IO; class Program { static void Main(string[] args) { var fileContents = File.ReadAllText("c:\\autoexec.bat"); } } Clojure[edit] The core function slurp does the trick; you can specify an encoding as an optional second argument: (slurp "myfile.txt") (slurp "my-utf8-file.txt" "UTF-8") CMake[edit] Sets a variable named string. file(READ /etc/passwd string) This works with text files, but fails with binary files that contain NUL characters. CMake truncates the string at the first NUL character, and there is no way to detect this truncation. The only way to read binary files is to use the HEX keyword to convert the entire file to a hexadecimal string. file(READ /etc/pwd.db string HEX) Common Lisp[edit] The following will read and store the file as a sequence of bytes. 
(defun file-string (path) (with-open-file (stream path) (let ((data (make-string (file-length stream)))) (read-sequence data stream) data))) The macro with-open-file could be passed :external-format :utf-8 on some implementations (which it would pass on to open) so that reading would occur by unicode character but (file-length stream) would continue to return the number of bytes, not characters, necessary for storing it. D[edit] import std.file: read, readText; void main() { // To read a whole file into a dynamic array of unsigned bytes: auto data = cast(ubyte[])read("unixdict.txt"); // To read a whole file into a validated UTF-8 string: string txt = readText("unixdict.txt"); } Delphi[edit] Using TStringList program ReadAll; {$APPTYPE CONSOLE} uses Classes; var i: Integer; lList: TStringList; begin lList := TStringList.Create; try lList.LoadFromFile('c:\input.txt'); // Write everything at once Writeln(lList.Text); // Write one line at a time for i := 0 to lList.Count - 1 do Writeln(lList[i]); finally lList.Free; end; end. Works with: Delphi 2010 and above program ReadAll; {$APPTYPE CONSOLE} uses SysUtils, IOUtils; begin // with default encoding: Writeln(TFile.ReadAllText('C:\autoexec.bat')); // with encoding specified: Writeln(TFile.ReadAllText('C:\autoexec.bat', TEncoding.ASCII)); Readln; end. Déjà Vu[edit] To get a string from a file, you need to explicitly decode the binary blob that is read. Currently only UTF-8 is supported by vu. local :filecontents !decode!utf-8 !read "file.txt" E[edit] <file:foo.txt>.getText() The file is assumed to be in the default encoding. Elixir[edit] Two solutions in the FileReader namespace. File returns a tuple: {:ok, file} is successful or {:error, reason} if unsuccessful. Errors can be caught and turned into error strings via Erlang's :file.format_error function. 
defmodule FileReader do # Read in the file def read(path) do case File.read(path) do {:ok, body} -> IO.inspect body {:error,reason} -> :file.format_error(reason) end end # Open the file path, then read in the file def bit_read(path) do case File.open(path) do {:ok, file} -> # :all can be replaced with :line, or with a positive integer to specify the number of characters to read. IO.read(file,:all) |> IO.inspect {:error,reason} -> :file.format_error(reason) end end end Emacs Lisp[edit] insert-file-contents does all Emacs' usual character coding, magic file names, decompression, format decoding, etc. ( insert-file-contents-literally can avoid that if unwanted.) (setq my-variable (with-temp-buffer (insert-file-contents "foo.txt") (buffer-string))) (If an existing buffer is visiting the file, perhaps yet unsaved, it may be helpful to take its contents instead of re-reading the file. find-buffer-visiting can locate such a buffer.) Erlang[edit] {ok, B} = file:read_file("myfile.txt"). This reads the entire file into a binary object. Euphoria[edit] Euphoria cannot natively handle multibyte character encodings. The openEuphoria team is/was working on supporting it. It may have been implemented by now. function load_file(sequence filename) integer fn,c sequence data fn = open(filename,"r") -- "r" for text files, "rb" for binary files if (fn = -1) then return {} end if -- failed to open the file data = {} -- init to empty sequence c = getc(fn) -- prime the char buffer while (c != -1) do -- while not EOF data &= c -- append each character c = getc(fn) -- next char end while close(fn) return data end function F#[edit] // read entire file into variable using default system encoding or with specified encoding open System.IO let data = File.ReadAllText(filename) let utf8 = File.ReadAllText(filename, System.Text.Encoding.UTF8) Factor[edit] USING: io.encodings.ascii io.encodings.binary io.files ; ! to read entire file as binary "foo.txt" binary file-contents ! 
to read entire file as lines of text "foo.txt" ascii file-lines Fantom[edit] Provide the filename to read from as a command-line parameter. class ReadString { public static Void main (Str[] args) { Str contents := File(args[0].toUri).readAllStr echo ("contents: $contents") } } Forth[edit] s" foo.txt" slurp-file ( str len ) Fortran[edit] Suppose F is an integer with a value such as 10 - it is the I/O unit number, and STUFF is a CHARACTER variable. The basic idea is simple: OPEN (F,FILE="SomeFileName.txt",STATUS="OLD",FORM="UNFORMATTED") READ (F) STUFF By opening the file as UNFORMATTED, the line separators (in ASCII, one of CR, LF, CRLF or LFCR) will not be acted upon and all is grist, all the way to the end of the file. But alas, there is no protocol for arranging that STUFF be the right size for the file, nor is there a standard means to ascertain just how long the file is, as by some aspect of an INQUIRE statement, and anyway there will likely be error reports from the I/O subsystem. In short, it just won't work. The only way is to define some large data structure corresponding to the stuff in the file, and read the file line-by-line to the end. If simple text is expected, no line exceeds ENUFF in length, and there are no more than MANY lines: INTEGER MANY,ENUFF !Some sizes. PARAMETER (MANY = 12345,ENUFF = 666) !Sufficient? CHARACTER*(ENUFF) STUFF(MANY) !Lots of memory these days. INTEGER LS(MANY) !Length of STUFF. INTEGER F,N,L !Assistants. F = 10 !Choose a unit number. OPEN (F,FILE="FileSlurp.for",STATUS="OLD",ACTION="READ") N = 0 Chew through the file. 10 READ (F,11,END = 20) L,STUFF(N + 1)(1:MIN(L,ENUFF)) !Cautious read. 11 FORMAT (Q,A) !The length of the record, then its text. N = N + 1 !Count it in. IF (L.GT.ENUFF) STOP "Record too long!" !Not a very helpful message. IF (N.GT.MANY) STOP "Too many lines!" !But it's better than crashing. LS(N) = MIN(L,ENUFF) !A protected length. GO TO 10 !Try again. Completed. 20 CLOSE (F) !Finished. 
DO I = 1,N !Proof of life. WRITE (6,21) STUFF(I)(1:LS(I)) !One line at a time. 21 FORMAT (A) !No character count associate. END DO !On to the next line. END !That was easy. This will read lines of text (omitting whichever of CR, CRLF, etc. is in use) until the end of the file. Although "text" is spoken of, actually any bit pattern is grist for the input, except for the bit pattern corresponding to the record separator (the CR, etc.) which is a troublesome context violation if one is actually dealing with arbitrary bit patterns as with binary data from integer and floating-point variables. Should a component byte contain a CR (or whichever) pattern, there will be a line break! Modern computers offer large memories, but also large files. If the STUFF can be processed without the necessity for all of the file's content to be on hand, this problem is eased. Frink[edit] The read[URL] function reads the entire contents of a URL. The encoding can be specified if necessary. a = read["file:yourfile.txt"] b = read["file:yourfile.txt", "UTF-8"] FutureBasic[edit] Note: This code goes beyond simply specifying the file to open. It includes a dialog window that allows the user to select a text file to read. Depending on system memory, as many as 4.2 billion characters can be read. The file contents are placed in a convenient console window with automatic save as, copy and paste, select all and undo commands. (Did I mention that FutureBasic -- or FB as developers prefer to call it -- is handy for Macintosh development!) Of course, the programmer is free to code his own window and menu options. 
include "ConsoleWindow" local fn ReadTextFile dim as CFURLRef fileRef dim as Handle h dim as CFStringRef cfStr : cfStr = NULL dim as long fileLen if ( files$( _CFURLRefOpen, "TEXT", "Select text file...", @fileRef ) ) open "i", 2, fileRef fileLen = lof( 2, 1 ) h = fn NewHandleClear( fileLen ) if ( h ) read file 2, [h], fileLen close #2 cfStr = fn CFStringCreateWithBytes( _kCFAllocatorDefault, #[h], fn GetHandleSize(h), _kCFStringEncodingMacRoman, _false ) fn DisposeH( h ) end if else // User canceled end if fn HIViewSetText( sConsoleHITextView, cfStr ) CFRelease( cfStr ) end fn fn ReadTextFile This can be shortened considerably by wrapping Objective-C code: include "ConsoleWindow" local fn ReadTextFile dim as CFURLRef fileRef dim as CFStringRef cfStr : cfStr = NULL if ( files$( _CFURLRefOpen, "TEXT", "Select text file...", @fileRef ) ) BeginCCode cfStr = (CFStringRef)[[NSString alloc] initWithContentsOfURL:(NSURL *)fileRef encoding:NSUTF8StringEncoding error:nil]; EndC fn HIViewSetText( sConsoleHITextView, cfStr ) CFRelease( cfStr ) else // User canceled end if end fn fn ReadTextFile GAP[edit] f := InputTextFile("input.txt"); s := ReadAll(f);; # two semicolons to hide the result, which may be long CloseStream(f); Go[edit] Go has good support for working with strings as UTF-8, but there is no requirement that strings be UTF-8 and in fact they can hold arbitrary data. ioutil.ReadFile returns the contents of the file unaltered as a byte slice. The conversion in the next line from byte slice to string also makes no changes to the data. In the example below sv will have an exact copy of the data in the file, without regard to encoding. import "io/ioutil" data, err := ioutil.ReadFile(filename) sv := string(data) Go also supports memory mapped files on OSes with a mmap syscall (e.g. Unix-like). The following prints the contents of "file". 
(The included "build constraint" prevents this from being compiled on architectures known to lack syscall.Mmap, another source file with the opposite build constraint could use ioutil.ReadFile as above). // +build !windows,!plan9,!nacl // These lack syscall.Mmap package main import ( "fmt" "log" "os" "syscall" ) func main() { f, err := os.Open("file") if err != nil { log.Fatal(err) } fi, err := f.Stat() if err != nil { log.Fatal(err) } data, err := syscall.Mmap(int(f.Fd()), 0, int(fi.Size()), syscall.PROT_READ, syscall.MAP_PRIVATE) if err != nil { log.Fatal(err) } fmt.Println(string(data)) } Groovy[edit] def fileContent = new File("c:\\file.txt").text GUISS[edit] Start,Programs,Accessories,Notepad,Menu:File,Open,Doubleclick:Icon:Notes.TXT,Button:OK Haskell[edit] In the IO monad: do text <- readFile filepath -- do stuff with text Note that readFile is lazy. If you want to ensure the entire file is read in at once, before any other IO actions are run, try: eagerReadFile :: FilePath -> IO String eagerReadFile filepath = do text <- readFile filepath last text `seq` return text Icon and Unicon[edit] The first code snippet below reads from stdin directly into the string fs, preserving line separators (if any) and reading in large chunks. every (fs := "") ||:= |reads(1000000) The second code snippet below performs the same operation using an intermediate list fL and applying a function (e.g. FUNC) to each line. Use this form when you need to perform additional string functions such as 'trim' or 'map' on each line. This avoids unnecessary garbage collections which will occur with larger files. The list can be discarded when done. Line separators are mapped into newlines. every put(fL := [],|FUNC(read())) every (fs := "") ||:= !fL || "\n" fL := &null Inform 7[edit] File access is sandboxed by the interpreter, so this solution essentially requires that the file have been previously written by an Inform program running from the same location under the same interpreter. 
Home is a room. The File of Testing is called "test". When play begins: say "[text of the File of Testing]"; end the story. J[edit] require 'files' NB. not needed for J7 & later var=: freads 'foo.txt' To memory map the file: require 'jmf' JCHAR map_jmf_ 'var';'foo.txt' Caution: updating the value of the memory mapped variable will update the file, and this characteristic remains when the variable's value is passed, unmodified, to a verb which modifies its own local variables. Java[edit] There is no single method to do this in Java 6 and below (probably because reading an entire file at once could fill up your memory quickly), so to do this you could simply append the contents as you read them into a buffer. import java.io.BufferedReader; import java.io.FileReader; import java.io.IOException; public class ReadFile { public static void main(String[] args) throws IOException{ String fileContents = readEntireFile("./foo.txt"); } private static String readEntireFile(String filename) throws IOException { FileReader in = new FileReader(filename); StringBuilder contents = new StringBuilder(); char[] buffer = new char[4096]; int read = 0; do { contents.append(buffer, 0, read); read = in.read(buffer); } while (read >= 0); in.close(); return contents.toString(); } } One can memory-map the file in Java, but there's little to gain if one is to create a String out of the file: import java.nio.channels.FileChannel.MapMode; import java.nio.MappedByteBuffer; import java.io.RandomAccessFile; import java.io.IOException; import java.io.File; public class MMapReadFile { public static void main(String[] args) throws IOException { MappedByteBuffer buff = getBufferFor(new File(args[0])); String results = new String(buff.asCharBuffer()); } public static MappedByteBuffer getBufferFor(File f) throws IOException { RandomAccessFile file = new RandomAccessFile(f, "r"); MappedByteBuffer buffer = file.getChannel().map(MapMode.READ_ONLY, 0, f.length()); file.close(); return buffer; } } or one can 
take a shortcut: String content = new Scanner(new File("foo"), "UTF-8").useDelimiter("\\A").next(); This works because Scanner will search the file for a delimiter and return everything before that. \A matches only the beginning of the input, which the scanner will never find again, so it returns everything up to the end of the file. Java 7 added java.nio.file.Files which has two methods for accomplishing this task: Files.readAllLines and Files.readAllBytes: import java.util.List; import java.io.IOException; import java.nio.charset.Charset; import java.nio.file.*; public class ReadAll { public static List<String> readAllLines(String filename) throws IOException { Path file = Paths.get(filename); return Files.readAllLines(file, Charset.defaultCharset()); } public static byte[] readAllBytes(String filename) throws IOException { Path file = Paths.get(filename); return Files.readAllBytes(file); } } JavaScript[edit] This works in IExplorer or a standalone js file. Note the similarity to the VBScript code. var fso=new ActiveXObject("Scripting.FileSystemObject"); var f=fso.OpenTextFile("c:\\myfile.txt",1); var s=f.ReadAll(); f.Close(); try{alert(s)}catch(e){WScript.Echo(s)} The following works in all browsers, including IE10. var file = document.getElementById("fileInput").files.item(0); //a file input element if (file) { var reader = new FileReader(); reader.readAsText(file, "UTF-8"); reader.onload = loadedFile; reader.onerror = errorHandler; } function loadedFile(event) { var fileString = event.target.result; alert(fileString); } function errorHandler(event) { alert(event); } jq[edit] The . filter will read in a file of raw text, e.g. if the file is named input.txt and we wanted to emit it as a single JSON string: jq -R -s . input.txt In practice, this is probably not very useful. It would be more typical to collect the raw lines into an array of JSON strings. If it is known that the lines are delimited by a single "newline" character, then one could simply pipe from one jq command to another: jq -R . 
input.txt | jq -s . Equivalently: jq -R -s 'split("\n")' input.txt Other cases can be similarly handled. Julia[edit] The built-in function readall reads into a string (assuming UTF8 encoding), or you can also read into an array of bytes: readall("/devel/myfile.txt") # read file into a string open(readbytes, "/devel/myfile.txt") # read file into an array of bytes Alternatively, there are a variety of ways to memory-map the file, here as an array of bytes: f = open("/devel/myfile.txt", "r") A = mmap_array(Uint8, (filesize("/devel/myfile.txt"),), f) Kotlin[edit] import java.io.File fun readText() { val string = File("unixdict.txt").readText(charset = Charsets.UTF_8) } LabVIEW[edit] This image is a VI Snippet, an executable image of LabVIEW code. The LabVIEW version is shown on the top-right hand corner. You can download it, then drag-and-drop it onto the LabVIEW block diagram from a file browser, and it will appear as runnable, editable code. Lang5[edit] 'foo.txt slurp Lasso[edit] By default, string objects, which are always Unicode, are created with the assumption that the file contains UTF-8 encoded data. This assumption can be changed by setting the file object's character encoding value. When reading the data as a bytes object, the unaltered file data is returned. 
local(f) = file('foo.txt') #f->readString LFE[edit] (set `#(ok ,data) (file:read_file "myfile.txt")) Liberty BASIC[edit] filedialog "Open a Text File","*.txt",file$ if file$<>"" then open file$ for input as #1 entire$ = input$(#1, lof(#1)) close #1 print entire$ end if Lingo[edit] ---------------------------------------- -- Reads whole file, returns string -- @param {string} tFile -- @return {string|false} ---------------------------------------- on readFile (tFile) fp = xtra("fileIO").new() fp.openFile(tFile, 1) if fp.status() then return false res = fp.readFile() fp.closeFile() return res end LiveCode[edit] Livecode offers 2 ways: Using URL put URL "" into tVar put the number of lines of tVar Using file open + read + close local tFile,tLinecount put "/usr/share/dict/words" into tFile open file tFile for text read read from file tFile until EOF put the number of lines of it -- file contents held in "it" variable close file tFile Lua[edit] --If the file opens with no problems, io.open will return a --handle to the file with methods attached. --If the file does not exist, io.open will return nil and --an error message. --assert will return the handle to the file if present, or --it will throw an error with the message returned second --by io.open. local file = assert(io.open(filename)) --Without wrapping io.open in an assert, local file would be nil, --which would cause an 'attempt to index a nil value' error when --calling file:read. --file:read takes the number of bytes to read, or a string for --special cases, such as "*a" to read the entire file. local contents = file:read'*a' --If the file handle was local to the expression --(ie. "assert(io.open(filename)):read'a'"), --the file would remain open until its handle was --garbage collected. file:close() M4[edit] An approximation to file reading can be had by include() which reads a file as M4 input. If it's inside a define() then the input is captured as a definition. 
But this is extremely limited since any macro names, parens, commas, quote characters etc in the file will expand and upset the capture. define(`foo',include(`file.txt')) defn(`foo') defn(`foo') Make[edit] contents := $(shell cat foo.txt) This is from the GNU Make manual. As noted there, newlines are converted to spaces in the $(contents) variable. This might be acceptable for files which are a list of words anyway. Maple[edit] First solution: s1 := readbytes( "file1.txt", infinity, TEXT ): Second solution: s2 := FileTools:-Text:-ReadFile( "file2.txt" ): Mathematica[edit] Import["filename","String"] MATLAB / Octave[edit] fid = fopen('filename','r'); [str,count] = fread(fid, [1,inf], 'uint8=>char'); % s will be a character array, count has the number of bytes fclose(fid); Mercury[edit] :- module read_entire_file. :- interface. :- import_module io. :- pred main(io::di, io::uo) is det. :- implementation. :- import_module string. main(!IO) :- io.open_input("file.txt", OpenResult, !IO), ( OpenResult = ok(File), io.read_file_as_string(File, ReadResult, !IO), ( ReadResult = ok(FileContents), io.write_string(FileContents, !IO) ; ReadResult = error(_, IO_Error), io.stderr_stream(Stderr, !IO), io.write_string(Stderr, io.error_message(IO_Error) ++ "\n", !IO) ) ; OpenResult = error(IO_Error), io.stderr_stream(Stderr, !IO), io.write_string(Stderr, io.error_message(IO_Error) ++ "\n", !IO) ). NetRexx[edit] /* NetRexx */ options replace format comments java crossref symbols nobinary parse arg inFileName . if inFileName = '' | inFileName = '.' 
then inFileName = './data/dwarfs.json' fileContents = slurp(inFileName) say fileContents return -- Slurp a file and return contents as a Rexx string method slurp(inFileName) public static returns Rexx slurped = Rexx null slurpStr = StringBuilder() ioBuffer = byte[1024] inBytes = int 0 do inFile = File(inFileName) inFileIS = BufferedInputStream(FileInputStream(inFile)) loop label ioLoop until inBytes = -1 slurpStr.append(String(ioBuffer, 0, inBytes)) inBytes = inFileIS.read(ioBuffer) end ioLoop catch exFNF = FileNotFoundException exFNF.printStackTrace catch exIO = IOException exIO.printStackTrace finally do inFileIS.close() catch ex = IOException ex.printStackTrace end end slurped = Rexx(slurpStr.toString) return slurped NewLISP[edit] (read-file "filename") Nim[edit] readFile(filename) Objeck[edit] string := FileReader->ReadFile("in.txt"); Objective-C[edit] /*** 0. PREPARATION */ // We need a text file to read; let's redirect a C string to a new file // using the shell by way of the stdlib system() function. system ("echo \"Hello, World!\" > ~/HelloRosetta"); /*** 1. THE TASK */ // Instantiate an NSString which describes the filesystem location of // the file we will be reading. NSString *filePath = [NSHomeDirectory() stringByAppendingPathComponent:@"HelloRosetta"]; // The selector we're going to use to complete this task, // stringWithContentsOfFile:encoding:error, has an optional `error' // parameter which can be used to return information about any // errors it might run into. It's optional, but we'll create an NSError // anyways to demonstrate best practice. NSError *anError; // And finally, the task: read and store the contents of a file as an // NSString. NSString *aString = [NSString stringWithContentsOfFile:filePath encoding:NSUTF8StringEncoding error:&anError]; // If the file read was unsuccessful, display the error description. // Otherwise, display the NSString. 
if (!aString) { NSLog(@"%@", [anError localizedDescription]); } else { NSLog(@"%@", aString); } OCaml[edit] For most uses we can use this function: let load_file f = let ic = open_in f in let n = in_channel_length ic in let s = String.create n in really_input ic s 0 n; close_in ic; (s) There is no problem reading an entire file with really_input, because it is implemented with an internal loop, but it can only load files whose size does not exceed the maximum length of an OCaml string. This maximum is available as Sys.max_string_length; on 32-bit machines it is about 16 MB. To load bigger files, several solutions exist: for example, split the contents of the file across a structure containing several strings. Another commonly used solution is a bigarray of chars instead of a string: type big_string = (char, Bigarray.int8_unsigned_elt, Bigarray.c_layout) Bigarray.Array1.t The function below returns the contents of a file with this type big_string, and it does so with "memory-mapping": let load_big_file filename = let fd = Unix.openfile filename [Unix.O_RDONLY] 0o640 in let len = Unix.lseek fd 0 Unix.SEEK_END in let _ = Unix.lseek fd 0 Unix.SEEK_SET in let shared = false in (* modifications are done in memory only *) let bstr = Bigarray.Array1.map_file fd Bigarray.char Bigarray.c_layout shared len in Unix.close fd; (bstr) Then the length of the data can be obtained with Bigarray.Array1.dim instead of String.length, and a given char can be accessed with the syntactic sugar bstr.{i} (instead of str.[i]), as shown in the small piece of code below (similar to the cat command): let () = let bstr = load_big_file Sys.argv.(1) in let len = Bigarray.Array1.dim bstr in for i = 0 to pred len do let c = bstr.{i} in print_char c done ooRexx[edit] version 1[edit] file = 'c:\test.txt' myStream = .stream~new(file) myString = myStream~charIn(,myStream~chars) Streams are opened on demand and
closed when the script finishes. If you wish, you can open and close the streams explicitly: file = 'c:\test.txt' myStream = .stream~new(file) if mystream~open('read') = 'READY:' then do myString = myStream~charIn(,myStream~chars) myStream~close end version 2 EXECIO[edit] One can also use EXECIO as it is known from VM/CMS and MVS/TSO: address hostemu 'execio * diskr "./st.in" (finis stem in.' Say in.0 'lines in file st.in' v='' Do i=1 To in.0 Say i '>'in.i'<' v=v||in.i End say 'v='v ::requires "hostemu" LIBRARY - Output: E:\>rexx ref 6 lines in file st.in 1 >address hostemu 'execio * diskr "./st.in" (finis stem in.'< 2 >Say in.0< 3 >Do i=1 To in.0< 4 > Say i '>'in.i'<'< 5 > End< 6 >::requires "hostemu" LIBRARY< v=address hostemu 'execio * diskr "./st.in" (finis stem in.'Say in.0Do i=1 To in .0 Say i '>'in.i'<' End::requires "hostemu" LIBRARY OxygenBasic[edit] Two Formats: string s 'AS FUNCTION s=GetFile "t.txt" 'AS PROCEDURE Getfile "t.txt",s Oz[edit] The interface for file operations is object-oriented. declare FileHandle = {New Open.file init(name:"test.txt")} FileContents = {FileHandle read(size:all list:$)} in {FileHandle close} {System.printInfo FileContents} FileContents is a list of bytes. The operation does not assume any particular encoding. PARI/GP[edit] The GP interpreter's ability to read files is extremely limited; reading an entire file is almost all that it can do. The underlying PARI C library is not similarly limited. readstr() returns a vector of strings which are the file lines, without newlines. They can be concatenated to make a single string. str = concat(apply(s->concat(s,"\n"), readstr("file.txt"))) Since readstr() returns strings without newlines, there's no way to tell whether the last line had a newline or not. This is fine for its intended use on text files, but not good for reading binary files. Panda[edit] It returns a unicode string of type 'text'.
file:readme.txt .text Pascal[edit] See TStringList example of Delphi Perl[edit] The modern recommended way is to use one of these CPAN modules: use File::Slurper 'read_text'; my $text = read_text($filename, $data); use Path::Tiny; my $text = path($filename)->slurp_utf8; use IO::All; $text = io($filename)->utf8->all; Traditional ways, without CPAN modules: open my $fh, '<:encoding(UTF-8)', $filename or die "Could not open '$filename': $!"; my $text; read $fh, $text, -s $filename; close $fh; my $text; { local $/ = undef; open my $fh, '<:encoding(UTF-8)', $filename or die "Could not open '$filename': $!"; $text = <$fh>; close $fh; } my $text = do { local( @ARGV, $/ ) = ( $filename ); <> }; For a one-liner from shell, use -0. It normally specifies the octal char code of the record separator ($/), so for example perl -n -040 would read chunks of text ending at each space ( $/ = ' '). However, -0777 has special meaning: $/ = undef, so the whole file is read in at once ( chr 0777 happens to be "ǿ", but Larry doesn't think one should use that as a record separator). perl -n -0777 -e 'print "file len: ".length' stuff.txt Memory-mapping[edit] use File::Map 'map_file'; map_file(my $str, "foo.txt"); print $str; use Sys::Mmap; Sys::Mmap->new(my $str, 0, 'foo.txt') or die "Cannot Sys::Mmap->new: $!"; print $str; File::Map has the advantage of not requiring an explicit munmap(). Its tie is faster than the tie form of Sys::Mmap too. Perl 6[edit] my $string = slurp 'sample.txt'; The default encoding is UTF-8.
The :enc adverb can be used to specify a different one: my $string = slurp 'sample.txt', :enc<UTF-16>; IO::Path objects also provide slurp as a method: my $string = 'sample.txt'.IO.slurp; Phix[edit] constant fn = open(command_line()[2],"rb") ?get_text(fn) close(fn) {} = wait_key() - Output: "constant fn = open(command_line()[2],\"rb\")\r\n?get_text(fn)\r\nclose(fn)\r\n{} = wait_key()\r\n" The value returned by get_text is actually a string containing raw binary data (no \r\n -> \n substitution, even if the file is opened in text mode) and is not limited to text files. There is no builtin method for handling different encodings, but demo\edita handles all such files with ease, including the nifty little encoding drop-down on the open/close dialog. PHP[edit] file_get_contents($filename) PicoLisp[edit] Using 'till' is the shortest way: (in "file" (till NIL T)) To read the file into a list of characters: (in "file" (till NIL)) or, more explicit: (in "file" (make (while (char) (link @)))) Encoding is always assumed to be UTF-8. Pike[edit] string content=Stdio.File("foo.txt")->read(); PL/I[edit] get file (in) edit ((substr(s, i, 1) do i = 1 to 32767)) (a); PowerShell[edit] Get-Content foo.txt This will only detect Unicode correctly with a BOM in place (even for UTF-8). 
With explicit selection of encoding: Get-Content foo.txt -Encoding UTF8 However, both return an array of strings, which is fine for pipeline use, but if a single string is desired the array needs to be joined: (Get-Content foo.txt) -join "`n" PureBasic[edit] A file can be read with any of the built-in commands: Number.b = ReadByte(#File) Length.i = ReadData(#File, *MemoryBuffer, LengthToRead) Number.c = ReadCharacter(#File) Number.d = ReadDouble(#File) Number.f = ReadFloat(#File) Number.i = ReadInteger(#File) Number.l = ReadLong(#File) Number.q = ReadQuad(#File) Text$ = ReadString(#File [, Flags]) Number.w = ReadWord(#File) If the file is a pure text file (no CR/LF etc.), this will work and will read each line until EOL is found. If ReadFile(0, "RC.txt") Variable$=ReadString(0) CloseFile(0) EndIf Since PureBasic terminates strings with a #NULL and ReadString() also splits on any new-line characters it encounters, any file containing these must be treated as a data stream. Title$="Select a file" Pattern$="Text (.txt)|*.txt|All files (*.*)|*.*" fileName$ = OpenFileRequester(Title$,"",Pattern$,0) If fileName$ If ReadFile(0, fileName$) length = Lof(0) *MemoryID = AllocateMemory(length) If *MemoryID bytes = ReadData(0, *MemoryID, length) MessageRequester("Info",Str(bytes)+" was read") EndIf CloseFile(0) EndIf EndIf Python[edit] open(filename).read() This returns a byte string and does not assume any particular encoding.
In Python 3 strings are in unicode, you can specify encoding when reading: open(filename, encoding='utf-8').read() Python docs recommend dealing with files using the with statement: with open(filename) as f: data = f.read() Q[edit] q)file:read0`:file.txt "First line of file" "Second line of file" "" R[edit] fname <- "notes.txt" contents <- readChar(fname, file.info(fname)$size) Racket[edit] (file->string "foo.txt") Raven[edit] 'myfile.txt' read as $content_as_string or '' open as $handle $handle read as $content_as_string $handle close REALbasic[edit] This function accepts a file (FolderItem object) and an optional TextEncoding class. If the TextEncoding is not defined, then REALbasic defaults to UTF-8. Since it is intended for cross-platform development, REALbasic has a number of built-in tools for working with different text encodings, line terminators, etc. [1] Function readFile(theFile As FolderItem, txtEncode As TextEncoding = Nil) As String Dim fileContents As String Dim tis As TextInputStream tis = tis.Open(theFile) fileContents = tis.ReadAll(txtEncode) tis.Close Return fileContents Exception err As NilObjectException MsgBox("File Not Found.") End Function REBOL[edit] read %my-file ; read as text read/binary %my-file ; preserve contents exactly Retro[edit] with files' here "input.txt" slurp REXX[edit] using LINEIN[edit] /*REXX program reads an entire file line-by-line and stores it as a continuous string.*/ parse arg iFID . /*obtain optional argument from the CL.*/ if iFID=='' then iFID= 'a_file' /*Not specified? Then use the default.*/ $= /*a string of file's contents (so far).*/ do while lines(iFID)\==0 /*read the file's lines until finished.*/ $=$ || linein(iFID) /*append a (file's) line to the string,*/ end /*while*/ /*stick a fork in it, we're all done. */ using CHARIN[edit] Note that CRLF are in the resulting string. /*REXX program reads a file and stores it as a continuous character str.*/ Parse Version v iFID = 'st.in' /*name of the input file. 
*/ If left(v,11)='REXX-Regina' |, left(v,11)='REXX-ooRexx' Then Do len=chars(iFid) /*size of the file */ v = charin(iFid,,len) /*read entire file */ End Else Do /* for other Rexx Interpreters */ v='' Do while chars(iFid)>0 /* read the file chunk by chunk */ v=v||charin(iFid,,500) End End say 'v='v say 'length(v)='length(v) - Output: E:\>rexx refc v=line 1 of 3 line 2 of 3 line 3 of 3 length(v)=39 Ring[edit] # Read the file cStr = read("myfile.txt") # print the file content See cStr Also in one line we can read and print the file content. cStr = read("myfile.txt") See cStr We can avoid the string, but it's required in the task. See read("myfile.txt") Ruby[edit] IO.read is for text files. It uses the default text encodings, and on Microsoft Windows, it also converts "\r\n" to "\n". # Read entire text file. str = IO.read "foobar.txt" # It can also read a subprocess. str = IO.read "| grep ftp /etc/services" Caution! IO.read and File.read take a portname. To open an arbitrary path (which might start with "|"), you must use File.open, then IO#read. path = "|strange-name.txt" str = File.open(path) {|f| f.read} To read a binary file, open it in binary mode. # Read entire binary file. str = File.open(path, "rb") {|f| f.read} Ruby 1.9 can read text files in different encodings. # Read EUC-JP text from file. str = File.open(path, "r:euc-jp") {|f| f.read} # Read EUC-JP text from file; transcode text from EUC-JP to UTF-8. 
str = File.open(path, "r:euc-jp:utf-8") {|f| f.read} Run BASIC[edit] open DefaultDir$ + "/public/test.txt" for binary as #f fileLen = LOF(#f) a$ = input$(#f, fileLen) print a$ close #f Rust[edit] use std::fs::File; use std::io::Read; fn main() { let mut file = File::open("somefile.txt").unwrap(); let mut contents: Vec<u8> = Vec::new(); // Returns the number of bytes read and appends the result to the buffer let result = file.read_to_end(&mut contents).unwrap(); println!("Read {} bytes", result); // To print the contents of the file let filestr = String::from_utf8(contents).unwrap(); println!("{}", filestr); } Scala[edit] object TextFileSlurper extends App { val fileLines = try scala.io.Source.fromFile("my_file.txt", "UTF-8").mkString catch { case e: java.io.FileNotFoundException => e.getLocalizedMessage() } } Scheme[edit] Uses SRFI-13: (with-input-from-file "foo.txt" (lambda () (reverse-list->string (let loop ((char (read-char)) (result '())) (if (eof-object? char) result (loop (read-char) (cons char result))))))) Works with Chicken Scheme: (with-input-from-file "foo.txt" read-string) Seed7[edit] The library getf.s7i defines the function getf, which reads a whole file into a string: $ include "seed7_05.s7i"; include "getf.s7i"; const proc: main is func local var string: fileContent is ""; begin fileContent := getf("text.txt"); end func; Sidef[edit] Reading an entire file as a string can be achieved with the FileHandle.slurp() method, as illustrated below: var file = File.new(__FILE__); var content = file.open_r.slurp; print content; Starting with version 2.30, File.read() can do the same: var file = File(__FILE__) var content = file.read(:utf8) print content Smalltalk[edit] (StandardFileStream oldFileNamed: 'foo.txt') contents 'foo.txt' asFilename contentsAsString SNOBOL4[edit] In SNOBOL4, file I/O is done by associating a variable with the desired file, via the input() built-in function.
After the association, each reference to the named variable provides as the variable's value the next block or line of data from the corresponding file. The exact format of the input() function parameters tends to vary based on the implementation in use. In this example, the code reads the file in blocks of 512k bytes (or less) until the entire file has been read into one long string in memory. input(.inbin,21,"filename.txt [-r524288]") :f(end) rdlp buf = inbin :s(rdlp) * * now process the 'buf' containing the file * end Sparkling[edit] let contents = readfile("foo.txt"); Swift[edit] import Foundation let path = "~/input.txt".stringByExpandingTildeInPath if let string = String(contentsOfFile: path, encoding: NSUTF8StringEncoding) { println(string) // print contents of file } Tcl[edit] This reads the data in as text, applying the default encoding translations. set f [open $filename] set data [read $f] close $f To read the data in as uninterpreted bytes, either use fconfigure to put the handle into binary mode before reading, or (from Tcl 8.5 onwards) do this: set f [open $filename "rb"] set data [read $f] close $f TUSCRIPT[edit] $$ MODE TUSCRIPT ERROR/STOP OPEN ("rosetta.txt",READ,-std-) var=FILE ("rosetta.txt") TXR[edit] @(next "foo.txt") @(freeform) @LINE The. UNIX Shell[edit] We start a 'cat' process to read the entire file, and use '$(...)' to grab the output of 'cat'. We use 'printf' which might be more portable than 'echo'. Because '$(...)' can chop off a newline at the end of the file, we tell 'printf' to add an extra newline. f=`cat foo.txt` # f will contain the entire contents of the file printf '%s\n' "$f" f=$(cat foo.txt) printf '%s\n' "$f" Some shells provide a shortcut to read a file without starting a 'cat' process. 
f=$(<foo.txt) echo -E "$f" file=$(<foo.txt) print $file alternatively zmodload zsh/mapfile print $mapfile[foo.txt] Ursa[edit] decl string contents decl file f f.open "filename.txt" set contents (f.readall) Vala[edit] string file_contents; FileUtils.get_contents("foo.txt", out file_contents); VBScript[edit] Read text file with default encoding into variable and display dim s s = createobject("scripting.filesystemobject").opentextfile("slurp.vbs",1).readall wscript.echo s Read text file with UTF-16 encoding into memory and display wscript.echo createobject("scripting.filesystemobject").opentextfile("utf16encoded.txt",1,-1).readall Vedit macro language[edit] In Vedit Macro Language, a "string variable" can be either an edit buffer or a text register. Text registers can hold only a limited amount of data (about 120 KB each in the current version). Edit buffers can handle files of unlimited size (even larger than the size of virtual memory). For large files, only a part of the file is kept in memory, but from the user's point of view there is no practical difference from having the whole file in memory. Read file into edit buffer. The buffer is allocated automatically: File_Open("example.txt") Read file into text register 10: Reg_Load(10, "example.txt") Visual Basic .NET[edit] Imports System.IO Public Class Form1 ' Read all of the lines of a file. ' Function assumes that the file exists. Private Sub ReadLines(ByVal FileName As String) Dim oReader As New StreamReader(FileName) Dim sLine As String = oReader.ReadToEnd() oReader.Close() End Sub End Class Wart[edit] with infile "x" with outstring whilet line (read_line) prn line XPL0[edit] This example reads its own source code file and displays it as a string. The command line is: readfile <readfile.xpl ] - Output: ] Yorick[edit] This loads foo.txt into lines as an array of strings. Each array element is one line. Each line's trailing newline is removed.
lines = rdfile("foo.txt"); This loads foo.txt into content as a single scalar string, without losing newlines. f = open("foo.txt", "rb"); raw = array(char, sizeof(f)); _read, f, 0, raw; close, f; content = strchar(raw); zkl[edit] data := File("foo.txt","r").read() The file parameters are the same as C's.
http://rosettacode.org/wiki/Read_entire_file
I wanted the day to think about this. :-) I almost like it. My only concern, is the matching. If I understand what you are saying, each handler would register a function, without specifying the char *, then the core would call every handler, and pass them the (Handler, Mime-Type, etc). The handler would then either return quickly with a DECLINED, or it would try to process the request. Is that correct? I dislike calling every handler, even if we could tell before-hand that we aren't going to use it. Could the handlers register a hook with the following call: ap_register_handler(char *handler, func *handler_func, pred, succ, order); The core could then keep track of everything internally. Allowing us to use a simple table to determine which Handler should be called, but making things look like hooks to Apache. Ryan On Mon, 1 Jan 2001, Ben Laurie wrote: > Stunned you into silence, eh? I need some feedback one way or the other. > Or should I just do it and await the screams? > > Cheers, > > Ben. > > Ben Laurie wrote: > > > > OK, so the obvious thing is that handlers should be sorted like hooks. I > > initially thought that they should also be sorted according to their > > wildness but the more I think about it, the more I think that the whole > > handler structure should disappear and be replaced by YAH (Yet Another > > Hook). The down side of this is a slight loss of efficiency. The upside > > is that handlers can be more subtle about how they match the content > > type/handler string (and other things they may want to match), when they > > match it, and how they interact with other handlers. > > > > You could argue that they could still do this with an extension of the > > current system (i.e. string matching as a prefilter before calling the > > handler, which then can be more subtle if desired). Yes, this is true, > > but the same must surely be true of all sorts of other hooks, so why > > single this one out? 
Also, it reduces the clarity of the process and > > potentially makes handlers do silly things (like match "*" because they > > really want to match something completely different) in order to squeeze > > into the structure. > > > > So, what I'm inclined to think is that if we want to use "hints" to > > prefilter hook calls, we should do it in general, not just for handlers. > > > > With that out of the way, I'd propose to axe the handler structure, and > > introduce a new handler_hook, which would look like this: > > > > int handler_hook(const char *handler, request_rec *r); > > > > where handler is the thing currently matched against the string (i.e. > > the handler set by SetHandler and friends, or the MIME type). It can > > return DECLINED, OK or an error. Anything except DECLINED will stop > > processing. Obviously, the first thing most handlers will do is a string > > comparison against handler... > > > > OK, next is the question of configured control of ordering. > > > > This one is potentially really easy. We just leverage the existing > > {prede,suc)cessor arguments in the hooking calls. What we do is allow > > the config file to specify ordering between pairs of modules for a > > particular hook, something akin to: > > > > HookOrder handler mod_dir mod_autoindex > > > > which would cause mod_dir's handler hook to run before mod_autoindex'. > > There are two issues, one new, one not new. The first is naming - we > > _really_ should have a global namespace for modules. Its time to stop > > avoiding this question! I propose we use something akin to Java - > > derived from DNS. I see no need to use backwards DNS, though. My initial > > thought is that global module names would look like this: > > "<FQDN>/<localname>". So, mod_autoindex would be > > "httpd.apache.org/autoindex", for example. Mod_backhand would be > > "cnds.jhu.edu/backhand" (say). 
Then the HookOrder directive would look > > like: > > > > HookOrder handler httpd.apache.org/dir httpd.apache.org/autoindex > > > > (if people object the URL-ish look, we could use "-" instead of "/", > > which makes it clear we're doing something totally different). Oh, I'd > > include the module name somewhere in the module structure, so we can > > enumerate them, BTW. > > > > OK, the other issue is this - should we introduce the concept of > > mandatory and overridable {prede,suc}cessors? The idea being that > > overridable ones can be overridden by config (but specify the "usually > > appropriate" order), but mandatory ones can't? Or should we say that if > > they are what we'd consider overridable, then they MUST be configured. I > > don't hugely like the second option, because I firmly believe that you > > should be able to run Apache with an almost empty configuration. And, > > that it should be as obvious as possible what needs configuring when it > > does. > > > > Oh, BTW, if we do this, we get I/O filtering ordering for free. Cool, or > > what? Almost, but not really. We still need to figure out how to specify a filter for use in a request (I have the logic figured out, but I haven't written the code yet). We also don't really use the standard hooks mechanism for the filters at all, so this probably won't help much in that respect. Ryan _______________________________________________________________________________ Ryan Bloom rbb@apache.org 406 29th St. San Francisco, CA 94131 -------------------------------------------------------------------------------
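To make the proposed shape concrete, here is a minimal stand-alone sketch of such a handler hook. The DECLINED/OK constants and request_rec are stubs invented for this sketch, not taken from the real httpd headers; only the hook signature and the string-comparison idea come from the email above.

```c
#include <stdio.h>
#include <string.h>

/* Stand-ins for Apache's status codes and request record. */
#define DECLINED -1
#define OK 0

typedef struct request_rec {
    const char *filename;
} request_rec;

/* The proposed hook shape: int handler_hook(const char *handler, request_rec *r).
 * The first thing most handlers would do is compare the handler string
 * (set by SetHandler or derived from the MIME type) and return DECLINED
 * on a mismatch, so the next hook in the sorted chain gets a chance. */
int autoindex_handler_hook(const char *handler, request_rec *r)
{
    if (strcmp(handler, "httpd.apache.org/autoindex") != 0)
        return DECLINED;            /* not ours: keep processing */
    printf("indexing %s\n", r->filename);
    return OK;                      /* handled: stop the chain */
}
```

A HookOrder directive would then only influence the order in which such functions are tried; the matching itself stays inside each hook, which is the flexibility argued for above.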
http://mail-archives.apache.org/mod_mbox/httpd-dev/200101.mbox/%3CPine.LNX.4.21.0101010521000.31077-100000@koj%3E
Recent versions of dpdk advertise support for ARM and POWER platforms. (aarch64) (ppc64le) Unfortunately, the latter fails when linking the test app: test_pmd_ring.o: In function `test_pmd_ring': /builddir/build/BUILD/dpdk-16.11/app/test/test_pmd_ring.c:451: undefined reference to `rte_eth_from_rings' test_pmd_ring.o: In function `test_pmd_ring': test_pmd_ring.c:(.text+0x1ef8): undefined reference to `rte_eth_from_rings' test_pmd_ring.c:(.text+0x1f24): undefined reference to `rte_eth_from_rings' test_pmd_ring.c:(.text+0x1f50): undefined reference to `rte_eth_from_rings' test_pmd_ring.c:(.text+0x1f84): undefined reference to `rte_eth_from_rings' test_pmd_ring_perf.o: In function `test_ring_pmd_perf': /builddir/build/BUILD/dpdk-16.11/app/test/test_pmd_ring_perf.c:170: undefined reference to `rte_eth_from_ring' FWIW, armv7hl is also supposedly included in the sources, but numactl currently ExcludeArch's it. Created attachment 1248197 [details] Preliminary patch for rawhide, builds on aarch64, fails on ppc64le Found a solution for ppc64le in OpenSUSE[1]: %ifarch ppc64le setconf CONFIG_RTE_LIBRTE_PMD_RING n %endif succeeded with that. FWIW, they also setconf CONFIG_RTE_LIBRTE_DISTRIBUTOR n on both aarch64 and ppc64le. I'm not sure why though, as the build succeeds without it. [1] Created attachment 1248225 [details] Patch for rawhide I'm a bit lost as to why the ring pmd would fail on arm, it's a software-only construct. I'll fix that up and get this submitted, thanks! This bug appears to have been reported against 'rawhide' during the Fedora 26 development cycle. Changing version to .
https://bugzilla.redhat.com/show_bug.cgi?id=1419731
Some time ago I read something about how classes are sometimes thinly veiled globals. I can't remember if this was argued in general or specifically about Python. Either way, when you look around the web for examples you'll find a lot of small programs that are basically just a single class, instantiating that class and then running it. Now I notice myself doing pretty much the same. My current project so far involves a calculation and a GUI for that calculation. They are being developed in separate files; they're not connected yet. I imagine keeping them separate and later on importing the calculation into the GUI file. Since separate files have separate namespaces anyway, I'm wondering what the point is of single-class programs (or program files) like that. You might as well make everything global to the file and use functional programming instead of OOP. self.bla would just become a global variable. Obviously you shouldn't because everyone says so, so I must be missing something. Thoughts?
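For what it's worth, here's a tiny sketch of the two styles being compared (all names made up). With a single instance they really are equivalent, which is the "thinly veiled global" point; the class only earns its keep once you need a second, independent instance:

```python
# Module-level ("functional") style: state is a global of the file.
RATE = 1.2

def total(amount):
    return amount * RATE

# Single-class style: the same state lives on the instance instead.
class Calculator:
    def __init__(self, rate):
        self.rate = rate    # the would-be global, now per instance

    def total(self, amount):
        return amount * self.rate

calc = Calculator(1.2)
print(total(10), calc.total(10))    # same result either way
print(Calculator(3.0).total(10))    # but only the class supports this
```

So a one-class program with one instance is indeed a global in disguise; the design pays off only if the "calculation" might ever be instantiated twice with different state (two documents, two configurations, tests).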
http://www.python-forum.org/viewtopic.php?p=3947
Timeout decorator Project description Installation From source code: python setup.py install From pypi: pip install timeout-decorator Usage import time import timeout_decorator @timeout_decorator.timeout(5) def mytest(): print "Start" for i in range(1,10): time.sleep(1) print "%d seconds have passed" % i if __name__ == '__main__': mytest() Multithreading By default, timeout-decorator uses signals to limit the execution time of the given function. This approach does not work if your function is not executed in the main thread (for example, if it is a worker thread of a web application). There is an alternative timeout strategy for this case - using multiprocessing. To use it, just pass use_signals=False to the timeout decorator function: import time import timeout_decorator @timeout_decorator.timeout(5, use_signals=False) def mytest(): print "Start" for i in range(1,10): time.sleep(1) print "%d seconds have passed" % i if __name__ == '__main__': mytest() Warning Make sure that when using the multiprocessing strategy for timeout, your function does not return objects which cannot be pickled, otherwise it will fail at marshalling them between the master and child processes. Acknowledgement Derived from and Contribute I would love for you to fork and send me a pull request for this project. Please contribute. License This software is licensed under the MIT license. See the License file. Changelog 0.3.1 - Fixed issue with PicklingError causing the timeout to never be reached. 0.3.0 - Added optional threading support via python multiprocessing (bubenkoff) - Switched to pytest test runner (bubenkoff) 0.2.1 - Initial public release
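The warning above comes down to what the standard pickle module accepts. A stand-alone illustration (independent of timeout-decorator itself; the function names are invented):

```python
import pickle

def good_result():
    return [1, 2, 3]           # plain containers pickle fine

def bad_result():
    return lambda x: x         # local functions/lambdas cannot be pickled

# Safe to return from a function decorated with use_signals=False,
# because the child process can marshal it back to the parent:
pickle.dumps(good_result())

# This is the kind of return value the warning is about:
try:
    pickle.dumps(bad_result())
except Exception as exc:       # PicklingError or AttributeError, by version
    print("not picklable:", type(exc).__name__)
```

If pickle.dumps() rejects your return value, the multiprocessing strategy will fail when sending the result back, so either return picklable data or fall back to the signal-based strategy.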
https://pypi.org/project/timeout-decorator/0.3.2/
Several processors in the Freescale ColdFire family come with an on-chip SPI device, also known as QSPI. This package provides an eCos bus driver for that device. It implements the functionality defined by the generic SPI package CYGPKG_IO_SPI. The driver supports both polled and interrupt-driven transfers. Typical supported transfer rates range from 128KHz to 33MHz, although the exact details depend on the specific ColdFire processor being used and on the processor's clock speed. The hardware does not support DMA so large transfers at high transfer rates will consume much of the available cpu time. This bus driver package does not instantiate any cyg_spi_bus structures. It is possible for a processor to have more than one SPI bus, so it is better to leave it to the processor HAL to define the bus or buses. Instead the bus driver package just provides functions and utility macros for use by the processor HAL. Similarly the bus driver package does not provide any cyg_spi_device structures. Exactly which devices are attached to the SPI bus is a characteristic of the platform so usually it is the platform HAL which provides the device instances. cyg_spi_bus cyg_spi_device This SPI bus driver package should be loaded automatically when selecting a target containing a ColdFire processor with QSPI hardware, and it should never be necessary to load the package explicitly. If the application does not use any of the SPI functionality then all the SPI support code should be removed at link-time and the application does not suffer any overheads. The package contains a single configuration option CYGHWR_DEVS_SPI_MCFxxxx_QSPI_MULTIPLE_BUSES. Usually this option should not be manipulated by application developers, instead it is set by the processor HAL. When the option is disabled the driver will optimize for the common case of a single bus. The only other configuration options provided by this package relate to compiler flags. 
The header file cyg/io/mcfxxxx_qspi.h provides a utility macro CYG_MCFxxxx_QSPI_BUS to allow processor HALs to instantiate a bus. Existing HALs such as the MCF521x's will show how to use this macro. For most boards the platform HAL will create cyg_spi_device instances for all attached SPI devices, and will initialize the system so that the SPI-related processor pins are connected appropriately. Some development boards may not have any SPI devices but instead export the relevant signals to expansion connectors. In those cases it will be the responsibility of application code to create the device instances and manipulate the GPIO pins appropriately. Device instances should take the form of a cyg_mcfxxxx_qspi_device structure, which contains a cyg_spi_device as its first field. cyg_mcfxxxx_qspi_device #include <cyg/io/mcfxxxx_qspi.h> … cyg_mcfxxxx_qspi_device hal_spi_atod CYG_SPI_DEVICE_ON_BUS(mcfxxxx_qspi) = { .qspi_common.spi_bus = &cyg_mcfxxxx_qspi_bus, … }; This defines a variable hal_spi_atod which can be used by other packages or by application code as an argument to the I/O functions provided by the generic SPI package CYGPKG_IO_SPI. A gcc extension, designated initializers, is used to fill in the qspi_common.spi_bus structure field. The structure contains a further seven fields which define exactly how to interact with the SPI device. Most of these fields are simply hardware register values, and the appropriate ColdFire User Manual should be consulted for full details of these registers. The header file cyg/hal/hal_io.h will provide #define's for the various bits, for example HAL_MCFxxxx_QSPIx_QMR_MSTR for the master mode bit of the QMR register. qspi_common.spi_bus qspi_qmr When performing a transfer to this SPI device the bus driver will use the qspi_qmr field for the QSPI hardware's QMR register. The main fields in this register are: This bit specifies that the QSPI hardware should operate in master mode. It must always be set. 
BITS: The data items transferred can range from 8 to 16 bits. For example, to specify 12-bit data items the qspi_qmr field should include HAL_MCFxxxx_QSPIx_QMR_BITS_12.

CPOL: Clock polarity. The default is inactive-low, active-high. If the device requires the opposite polarity then HAL_MCFxxxx_QSPIx_QMR_CPOL should be specified.

CPHA: Clock phase. The default is to capture data on the leading clock edge. If the device captures data on the trailing edge instead then HAL_MCFxxxx_QSPIx_QMR_CPHA should be specified.

BAUD: Baud rate divider. This should be a small number, usually between 1 and 255, which controls the clock rate. The value to be used depends on the device's maximum clock rate, the specific processor used, and the processor's clock speed.

qspi_qdlyr
This field is used to set the QSPI delay register QDLYR when performing transfers to this device. It contains two delay fields, QCD and DTL, which can be used in conjunction with qspi_qcr for fine control over bus timing. Most devices do not have any special requirements here so a value of 0 can be used. The register also contains an SPE bit to start a transfer, but that bit is used by the bus driver and should not be set in the device structure.

qspi_qwr
This field is used to set the QWR register. Only one bit in this register, CSIV, may be defined. The other fields in the register are manipulated by the bus driver. Usually if the device has an active-low chip select then the CSIV bit should be set, otherwise the structure field should be 0. If a custom chip select control function is used then that may require different CSIV behaviour.

qspi_qcr
This is used to fill in the command RAM registers during a data transfer. It contains five fields. The CONT bit is not normally required but can provide additional control over the chip select. Note that some versions of the various ColdFire User Manuals give an incomplete description of this bit and the errata sheets should be consulted as well.
The BITSE bit should be set if transfers involve data items which are not 8 bits. The DT and DSCK bits can be used to enable one or both delays in the QDLYR register. The QSPI_CS field consists of four bits for the four QSPI chip select pins. If all the devices connected to the SPI bus are active-high and each is connected directly to a chip select, then only one of these bits should be set. If all the devices are active-low then only one of the bits should be clear.

With some hardware the QSPI_CS bits can be more complicated. For example consider an SPI bus with active-high devices attached to QSPI chip selects 0 and 1, and active-low devices attached to the other two chip selects. The device definition for the CS0 device should have the QWR CSIV bit clear. The QCR QSPI_CS bits should have bits 0, 2 and 3 set. Between transfers all chip select pins will be low. This will activate the devices on CS2 and CS3, but since there is no clock signal this is harmless. When a transfer happens CS0, CS2 and CS3 will all be high, and CS1 will remain low. This will activate the device on CS0, but leave the other three devices inactive. Hence only the specified device is active during a transfer. If the hardware requires further control over the chip selects then the device definition can include a custom qspi_cs_control function.

There is no support for using different QCR values for different parts of a transfer, for example the first data item versus the rest of the transfer. Such functionality is rarely useful and would require extra complexity in the bus driver, including performance-critical parts.

qspi_qcr_tick
This is used to fill in the command RAM registers during a tick operation, when none of the devices should be active. Some devices need to see a certain number of clock signals even when their chip select is not active, or they will not operate correctly. The hardware fields are the same as for qspi_qcr.
Usually the QSPI_CS bits will be all 0 or all 1, but some hardware may require a more complicated value.

qspi_tick_data
When performing a tick operation this field will be used as the data to be transferred. Usually the value will not matter because, by the definition of an SPI tick, none of the SPI devices will be selected.

qspi_cs_control
Some hardware may have chip select requirements which cannot be satisfied simply by setting the QWR CSIV and the QCR QSPI_CS bits. For example if there are more than four SPI devices then the surplus may have their chip selects connected to GPIO pins. Also some devices may require that the chip select remain asserted for the duration of a multi-transfer transaction, and that is not supported directly by the QSPI hardware. To cope with such cases it is possible to define a custom chip select control function.

Consider a simple SPI device on a board with a 64MHz MCF5282 processor. The device uses 8-bit data, default clock polarity and phase, can be driven at up to 10MHz, does not require any special delays, has an active-high chip select, and is connected to the processor's QSPI CS0 pin. There are no other devices on the bus.
    #include <cyg/io/mcfxxxx_qspi.h>
    …
    cyg_mcfxxxx_qspi_device hal_spi_dev_8 CYG_SPI_DEVICE_ON_BUS(mcfxxxx_qspi) = {
        .qspi_common.spi_bus = &cyg_mcfxxxx_qspi_bus,
        .qspi_qmr            = HAL_MCFxxxx_QSPIx_QMR_MSTR | 0x04,
        .qspi_qdlyr          = 0,
        .qspi_qwr            = 0,
        .qspi_qcr            = HAL_MCFxxxx_QSPIx_QCRn_QSPI_CS_CS0,
        .qspi_qcr_tick       = 0,
        .qspi_tick_data      = 0xFF,
        .qspi_cs_control     = (void (*)(cyg_mcfxxxx_qspi_device*, int)) 0
    };

For a more complicated example, consider a board with an MCF5272 processor and an SPI device that involves 12-bit data items, uses inverted clock polarity and phase, can only be driven at the slowest clock rate, does not require any special delays or chip select logic, has an active-low chip select, and is connected to the processor's QSPI CS2 pin:

    #include <cyg/io/mcfxxxx_qspi.h>
    …
    cyg_mcfxxxx_qspi_device hal_spi_dev_12 CYG_SPI_DEVICE_ON_BUS(mcfxxxx_qspi) = {
        .qspi_common.spi_bus = &cyg_mcfxxxx_qspi_bus,
        .qspi_qmr            = HAL_MCFxxxx_QSPIx_QMR_MSTR
                             | HAL_MCFxxxx_QSPIx_QMR_BITS_12
                             | HAL_MCFxxxx_QSPIx_QMR_CPOL
                             | HAL_MCFxxxx_QSPIx_QMR_CPHA
                             | 0xFF,
        .qspi_qdlyr          = 0,
        .qspi_qwr            = HAL_MCFxxxx_QSPIx_QWR_CSIV,
        .qspi_qcr            = HAL_MCFxxxx_QSPIx_QCRn_BITSE
                             | HAL_MCFxxxx_QSPIx_QCRn_QSPI_CS_CS0
                             | HAL_MCFxxxx_QSPIx_QCRn_QSPI_CS_CS1
                             | HAL_MCFxxxx_QSPIx_QCRn_QSPI_CS_CS3,
        .qspi_qcr_tick       = HAL_MCFxxxx_QSPIx_QCRn_QSPI_CS_CS0
                             | HAL_MCFxxxx_QSPIx_QCRn_QSPI_CS_CS1
                             | HAL_MCFxxxx_QSPIx_QCRn_QSPI_CS_CS2
                             | HAL_MCFxxxx_QSPIx_QCRn_QSPI_CS_CS3,
        .qspi_tick_data      = 0xFF,
        .qspi_cs_control     = (void (*)(cyg_mcfxxxx_qspi_device*, int)) 0
    };

This definition assumes that there are no attached SPI devices with an active-high chip select. If there are such devices then the qspi_qcr and qspi_qcr_tick fields should be modified so that these devices are not activated at the wrong time.

The header file cyg/io/mcfxxxx_qspi.h provides a utility macro CYG_MCFxxxx_QSPI_DEVICE which can be used to instantiate a device. Essentially the macro just expands to a structure definition as above.

The ColdFire QSPI hardware provides support for controlling the chip select signals of up to four SPI devices. In many situations this support is adequate, but there are exceptions:

The QSPI chip select outputs may share processor pins with other on-chip ColdFire devices. For example on the mcf5272 the QSPI CS2 signal uses the same pin as the uart1 CTS signal, so if the application needs uart1 and hardware flow control then that QSPI CS2 pin is no longer available.
If the hardware has more than four SPI devices then additional chip selects are needed. With most SPI devices the chip select signal only needs to be asserted while I/O is taking place, but there are exceptions. For example interactions with an MMC card involve a sequence of transfers, and the chip select must remain asserted in between these transfers. Depending on thread priorities and other factors there may be a considerable delay between these transfers, and the QSPI hardware does not provide any way of keeping a chip select asserted indefinitely.

The issue of insufficient chip selects can usually be handled by adding extra hardware, for example an external decoder chip, possibly complemented by inverters if there is a mixture of active-high and active-low devices. This approach can be supported simply by programming the right values for qspi_qcr and qspi_qcr_tick, but the cost of the extra hardware may be unacceptable. An alternative approach is to use one or more of the processor's GPIO pins to control the extra devices.

The issue of persistent chip selects can be handled in one of two main ways. A GPIO pin can be used to control the chip select, bypassing the QSPI support. Alternatively the QWR CSIV bit can be used in an inverted sense, to activate an SPI device rather than to define the inactive state.

To support these variations an arbitrary chip select control function can be specified for a device. Such a function takes two arguments. The first is a pointer to the SPI device, possibly allowing the function to be shared between multiple devices. The second is one of the following:

INIT: During system initialization the QSPI bus driver will iterate over all the attached SPI devices. If a device has a qspi_cs_control function then this will be invoked. A typical action would be to configure a GPIO pin. Note that these calls happen quite early during system initialization, so other subsystems like standard I/O may not be set up yet.
ASSERT: This is used to assert the chip select, in other words to set the chip select to low for an active-low device or high for an active-high device. It will be called at the start of any transfer, unless the previous transfer has left the chip select asserted.

DROP: This is used to deassert the chip select. It will be called at the end of any transfer that specifies drop_cs. It will also be called at the start of a tick operation.

To support persistent chip selects via the CSIV signal the bus driver package provides two chip select control functions, cyg_mcfxxxx_qspi_csiv_cs_control_active_high and cyg_mcfxxxx_qspi_csiv_cs_control_active_low. To use these with, say, an active-low device:

The qspi_qwr field should be set to HAL_MCFxxxx_QSPIx_QWR_CSIV, so the chip select is high when there is no I/O taking place. The qspi_cs_control field should be set to &cyg_mcfxxxx_qspi_csiv_cs_control_active_low. This function will be invoked by the bus driver to assert or drop the signal (initialization is a no-op). The QSPI_CS bits in the qspi_qcr field still have the usual meaning.

At the start of a transfer cyg_mcfxxxx_qspi_csiv_cs_control_active_low will clear the QWR CSIV bit. There is no I/O taking place yet so all chip select outputs will switch to low, activating all active-low devices. This is generally harmless since there is no clock signal. When the I/O actually starts the qspi_qcr field will be used, deactivating all devices except the current one. At the end of each individual transfer the chip selects will revert to their inactive state, which because of the CSIV setting means low. Again this will activate all active-low devices, but there is no clock signal so no I/O takes place. For the last transfer of a transaction or for a tick operation cyg_mcfxxxx_qspi_csiv_cs_control_active_low will be invoked again with a DROP argument.
It will reset the QWR CSIV bit to 1, deactivating all devices. The overall effect is a persistent chip select with the desired polarity, using just the QSPI hardware facilities rather than a GPIO pin.
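Where a GPIO pin stands in for a QSPI chip select, the custom control function described above typically just drives one port bit. The following is a hypothetical sketch only: the GPIO data register is simulated with a plain variable (a real eCos port would write a memory-mapped register via the HAL's access macros), the device argument is simplified to void*, and the INIT/ASSERT/DROP constants and pin assignment are illustrative, not the driver's actual values.

```c
#include <stdint.h>

/* Hypothetical stand-in for a memory-mapped GPIO data register.
 * A real eCos port would use the HAL's register access macros instead. */
uint8_t fake_port_data;

#define MY_CS_PIN (1u << 5)   /* assumed wiring: port bit 5, active-low */

/* Illustrative stand-ins for the driver's INIT/ASSERT/DROP argument values. */
enum { CS_INIT, CS_ASSERT, CS_DROP };

/* A custom chip-select control function in the shape the driver expects:
 * first argument identifies the device, second the requested operation. */
void my_qspi_cs_control(void *device, int mode)
{
    (void)device;   /* could be used to share one function between devices */
    switch (mode) {
    case CS_INIT:   /* configure the pin and leave it deasserted (high)   */
    case CS_DROP:   /* deassert: drive the active-low line high           */
        fake_port_data |= MY_CS_PIN;
        break;
    case CS_ASSERT: /* assert: drive the active-low line low              */
        fake_port_data &= (uint8_t)~MY_CS_PIN;
        break;
    }
}
```

Such a function would be plugged into the device's qspi_cs_control field; because the first argument is the device pointer, one function can serve several GPIO-controlled devices by inspecting which device was passed in.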
http://www.ecoscentric.com/ecospro/doc/html/ref/devs-spi-m68k-mcfxxxx.html
In this article, written by Sanjeev Jaiswal and Ratan Kumar, authors of the book Learning Django Web Development, we will cover the basic topics that you will need to follow, such as coding practices for better Django web development, which IDE to use, version control, and so on. We will learn the following topics in this article:

- Django coding style
- Using an IDE for Django web development
- Django project structure

This article is based on the important fact that code is read much more often than it is written. Thus, before you actually start building your projects, we suggest that you familiarize yourself with all the standard practices adopted by the Django community for web development.

Django coding style

Most of Django's important practices are based on Python. Though chances are you already know them, we will still take a break and write down all the documented practices so that you know these concepts even before you begin. To mainstream standard practices, Python enhancement proposals are made, and one such widely adopted standard practice for development is PEP8, the style guide for Python code, authored by Guido van Rossum. The documentation says, "PEP8 deals with semantics and conventions associated with Python docstrings." For further reading, please visit.

Understanding indentation in Python

When you are writing Python code, indentation plays a very important role. It acts as a block delimiter, like the braces in other languages such as C or Perl. But it's always a matter of discussion amongst programmers whether we should use tabs or spaces and, if spaces, how many – two, four, or eight. Using four spaces for indentation is better than eight, and if there are a few more nested blocks, using eight spaces for each indentation level may take up more characters than can be shown in a single line. But, again, this is the programmer's choice. The following is what incorrect indentation practices lead to:

>>> def a():
...     print "foo"
...         print "bar"
IndentationError: unexpected indent

So, which one should we use: tabs or spaces? Choose either one, but never mix tabs and spaces in the same project, or it will be a nightmare for maintenance. The most popular way of indenting in Python is with spaces; tabs come in second. If any code you have encountered has a mixture of tabs and spaces, you should convert it to using spaces exclusively.

Doing indentation right – do we need four spaces per indentation level?

There has been a lot of confusion about it, as, of course, Python's syntax is all about indentation. Let's be honest: in most cases, it is. So, it is highly recommended to use four spaces per indentation level, and if you have been following the two-space method, stop using it. There is nothing wrong with it, but when you deal with multiple third party libraries, you might end up with a spaghetti of different styles, which will ultimately become hard to debug.

Now for indentation. When your code is in a continuation line, you should wrap it vertically aligned, or you can go in for a hanging indent. When you are using a hanging indent, the first line should not contain any arguments, and further indentation should be used to clearly distinguish it as a continuation line.

A hanging indent (also known as a negative indent) is a style of indentation in which all lines are indented except for the first line of the paragraph. The preceding paragraph is an example of a hanging indent.

The following example illustrates how you should use a proper indentation method while writing the code:

bar = some_function_name(var_first, var_second,
                         var_third, var_fourth)
# Here the indentation of the arguments keeps them grouped,
# and clearly separate from the rest of the code.

def some_function_name(
        var_first, var_second, var_third, var_fourth):
    print(var_first)
# This example shows the hanging indent.
We do not encourage the following coding style, and it will not work in Python anyway:

# When vertical alignment is not used,
# arguments on the first line are forbidden.
foo = some_function_name(var_first, var_second,
    var_third, var_fourth)

# Further indentation is required, as this indentation
# does not distinguish between arguments and source code.
def some_function_name(
    var_first, var_second, var_third, var_fourth):
    print(var_first)

Although extra indentation is not required, if you want to use extra indentation to ensure that the code will work, you can use the following coding style:

# Extra indentation is not necessary.
if (this and that):
    do_something()

Ideally, you should limit each line to a maximum of 79 characters. This leaves room for a + or - character when viewing differences using version control. It is even better to limit lines to 79 characters for uniformity across editors. You can use the rest of the space for other purposes.

The importance of blank lines

The importance of two blank lines and single blank lines is as follows:

- Two blank lines: Two blank lines can be used to separate top-level functions and class definitions, which enhances code readability.
- Single blank lines: A single blank line can be used in several cases – for example, each function inside a class can be separated by a single line, and related functions can be grouped together with a single line. You can also separate logical sections of source code with a single line.

Importing a package

Importing a package is a direct implication of code reusability. Therefore, always place imports at the top of your source file, just after any module comments and docstrings, and before the module's globals and constants. Each import should usually be on a separate line.
The best way to import packages is as follows:

import os
import sys

It is not advisable to import more than one package on the same line, for example:

import sys, os

You may import packages in the following fashion, although it is optional:

from django.http import Http404, HttpResponse

If your import gets longer, you can use the following method to declare them:

from django.http import (
    Http404,
    HttpResponse,
    HttpResponsePermanentRedirect
)

Grouping imported packages

Package imports can be grouped in the following ways:

- Standard library imports: such as sys, os, subprocess, and so on:

import os
import re

- Related third party imports: these are usually downloaded from the Python package index, PyPI (using pip install). Here is an example:

import simplejson

- Local application/library-specific imports: this includes the local modules of your project, such as models, views, and so on:

from models import ModelFoo
from models import ModelBar

Naming conventions in Python/Django

Every programming language and framework has its own naming convention. The naming convention in Python/Django is more or less the same, but it is worth mentioning here. You will need to follow it when creating a variable name or global variable name and when naming a class, package, module, and so on.

This is the common naming convention that we should follow:

- Name the variables properly: Never use single characters, for example, 'x' or 'X', as variable names. It might be okay for your normal Python scripts, but when you are building a web application, you must name the variables properly as they determine the readability of the whole project.

- Naming of packages and modules: Lowercase and short names are recommended for modules. Underscores can be used if they would improve readability. Python packages should also have short, all-lowercase names, although the use of underscores is discouraged.
- Since module names are mapped to file names (models.py, urls.py, and so on), it is important that module names be chosen to be fairly short, as some file systems are case insensitive and truncate long names.

- Naming a class: Class names should follow the CamelCase naming convention, and classes for internal use can have a leading underscore in their name.

- Global variable names: First of all, you should avoid using global variables, but if you need them, global variables can be prevented from being exported via __all__, or by defining them with a prefixed underscore (the old, conventional way).

- Function names and method arguments: Names of functions should be in lowercase, with words separated by underscores. Use self as the first argument to instance methods and cls as the first argument to class methods.

- Method names and instance variables: Use the function naming rules – lowercase with words separated by underscores as necessary to improve readability. Use one leading underscore only for non-public methods and instance variables.

Using an IDE for faster development

There are many IDEs and editors available for Python and Django development; the following two are widely used:

- SublimeText: This editor is lightweight and very powerful. It is available for all major platforms, supports syntax highlighting and code completion, and works well with Python. The editor is open source and you can find it at

- PyCharm: This, I would say, is the most intelligent code editor of all, and it has advanced features, such as code refactoring and code analysis, which make development cleaner. Features for Django include template debugging (which is a winner) and also quick documentation, so this look-up is a must for beginners. The community edition is free, and you can sample a 30-day trial version before buying the professional edition.

Setting up your project with the Sublime text editor

Most of the examples that we will show you in this book will be written using the Sublime text editor.
In this section, we will show how to install the editor and set up the Django project.

- Download and installation: You can download Sublime from the download tab of the site. Click on the downloaded file option to install.

- Setting up for Django: Sublime has a very extensive plug-in ecosystem, which means that once you have downloaded the editor, you can install plug-ins for adding more features to it. After successful installation, it will look like this:

Most important of all is Package Control, which is the manager for installing additional plugins directly from within Sublime. This will be your only manual package installation. It will take care of the rest of the package installations ahead.

Some of the recommendations for Python development using Sublime are as follows:

- Sublime Linter: This gives instant feedback about the Python code as you write it. It also has PEP8 support; this plugin will highlight, in real time, the things we discussed about better coding in the previous section so that you can fix them.

- Sublime CodeIntel: This is maintained by the developer of SublimeLint. Sublime CodeIntel has some advanced functionalities, such as direct go-to-definition, intelligent code completion, and import suggestions.

You can also explore other plugins for Sublime to increase your productivity.

Setting up the PyCharm IDE

You can use any of your favorite IDEs for Django project development. We will use the PyCharm IDE for this book. This IDE is recommended as it will help you at the time of debugging, using breakpoints that will save you a lot of time in figuring out what actually went wrong.

Here is how to install and set up the PyCharm IDE for Django:

- Download and installation: You can check the features and download the PyCharm IDE from the following link:

- Setting up for Django: Setting up PyCharm for Django is very easy.
You just have to import the project folder and give the manage.py path, as shown in the following figure:

The Django project structure

The Django project structure changed in the 1.6 release. Django (django-admin.py) also has a startapp command to create an application, so it is high time to tell you the difference between an application and a project in Django.

A project is a complete website or application, whereas an application is a small, self-contained Django application. An application is based on the principle that it should do one thing and do it right. To ease the pain of building a Django project right from scratch, Django gives you an advantage by auto-generating the basic project structure files, from which any project can be taken forward for its development and feature addition.

Thus, to conclude, we can say that a project is a collection of applications, and an application can be written as a separate entity and can be easily exported to other applications for reusability.

To create your first Django project, open a terminal (or Command Prompt for Windows users), type the following command, and hit Enter:

$ django-admin.py startproject django_mytweets

This command will make a folder named django_mytweets in the current directory and create the initial directory structure inside it. Let's see what kind of files are created. The new structure is as follows:

django_mytweets/
    django_mytweets/
    manage.py

This is the content of the inner django_mytweets/ folder:

django_mytweets/
    __init__.py
    settings.py
    urls.py
    wsgi.py

Here is a quick explanation of what these files are:

- django_mytweets (the outer folder): This folder is the project folder. Contrary to the earlier project structure, in which the whole project was kept in a single folder, the new Django project structure somehow hints that every project is an application inside Django. This means that you can import other third party applications on the same level as the Django project.
This folder also contains the manage.py file, which includes all the project management settings.

- manage.py: This utility script is used to manage our project. You can think of it as your project's version of django-admin.py. Actually, both django-admin.py and manage.py share the same backend code. Further clarification about the settings will be provided when we are going to tweak the settings.

Let's have a look at the manage.py file:

#!/usr/bin/env python
import os
import sys

if __name__ == "__main__":
    os.environ.setdefault("DJANGO_SETTINGS_MODULE",
                          "django_mytweets.settings")
    from django.core.management import execute_from_command_line
    execute_from_command_line(sys.argv)

The source code of the manage.py file will be self-explanatory once you read the following code explanation.

#!/usr/bin/env python

The first line is just the declaration that the following file is a Python file, followed by the import section in which the os and sys modules are imported. These modules mainly contain system-related operations.

import os
import sys

The next piece of code checks whether the file is executed by the main function, which is the first function to be executed, and then loads the Django settings module onto the current path. As you are already running a virtual environment, this will set the path for all the modules to the path of the currently running virtual environment.

if __name__ == "__main__":
    os.environ.setdefault("DJANGO_SETTINGS_MODULE",
                          "django_mytweets.settings")

django_mytweets/ (the inner folder)

- __init__.py: Django projects are Python packages, and this file is required to tell Python that this folder is to be treated as a package. A package, in Python's terminology, is a collection of modules, and packages are used to group similar files together and prevent naming conflicts.

- settings.py: This is the main configuration file for your Django project.
In it, you can specify a variety of options, including database settings, site language(s), which Django features need to be enabled, and so on. By default, the database is configured to use SQLite, which is advisable for testing purposes. Here, we will only see how to enter the database settings in this file; it also contains the basic setting configuration, and with a slight modification in the manage.py file, it can be moved to another folder, such as config or conf.

To make every other third-party application a part of the project, we need to register it in the settings.py file. INSTALLED_APPS is a variable that contains all the entries about the installed applications. As the project grows, it becomes difficult to manage; therefore, there are three logical partitions for the INSTALLED_APPS variable, as follows:

- DEFAULT_APPS: This parameter contains the default Django installed applications (such as the admin)
- THIRD_PARTY_APPS: This parameter contains other applications, like SocialAuth, used for social authentication
- LOCAL_APPS: This parameter contains the applications that are created by you

- urls.py: This is another configuration file. You can think of it as a mapping between URLs and the Django view functions that handle them. This file is one of Django's more powerful features.

When we start writing code for our application, we will create new files inside the project's folder. So, the folder also serves as a container for our code.

Now that you have a general idea of the structure of a Django project, let's configure our database system.

Summary

We prepared our development environment in this article, created our first project, set up the database, and learned how to launch the Django development server. We learned the best way to write code for our Django project and saw the default Django project structure.
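The three-way INSTALLED_APPS partition described in the settings.py discussion is usually expressed as three plain lists that are concatenated at the end of the file. The app names below are illustrative placeholders only, not part of the article's project:

```python
# Hypothetical sketch of partitioning INSTALLED_APPS in settings.py.
# The app names are placeholders, not a prescribed configuration.

DEFAULT_APPS = [
    "django.contrib.admin",
    "django.contrib.auth",
    "django.contrib.contenttypes",
]

THIRD_PARTY_APPS = [
    "social_auth",              # e.g. an app used for social authentication
]

LOCAL_APPS = [
    "django_mytweets.tweets",   # an app created by you
]

# Django only reads INSTALLED_APPS; the partition exists purely
# to keep the settings file manageable as the project grows.
INSTALLED_APPS = DEFAULT_APPS + THIRD_PARTY_APPS + LOCAL_APPS
```

Registering a new local app is then just a one-line append to LOCAL_APPS.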
Resources for Article:

Further resources on this subject:

- Tinkering Around in Django JavaScript Integration [article]
- Adding a developer with Django forms [article]
- So, what is Django? [article]
https://www.packtpub.com/books/content/code-style-django
fdetach - detach a name from a STREAMS-based file descriptor

#include <stropts.h>

int fdetach(const char *path);

The fdetach() function detaches a STREAMS-based file from the file to which it was attached by a previous call to fattach(). If there are no open file descriptors or other references to the STREAMS file, a successful call to fdetach() has the same effect as performing the last close() on the attached file.

Upon successful completion, fdetach() returns 0. Otherwise, it returns -1 and sets errno to indicate the error.

The fdetach() function will fail if:

- [EACCES] - Search permission is denied on a component of the path prefix.
- [EPERM] - The effective user ID is not the owner of path and the process does not have appropriate privileges.
- [ENOTDIR] - A component of the path prefix is not a directory.
- [ENOENT] - A component of path does not name an existing file or path is an empty string.
- [EINVAL] - The path argument names a file that is not currently attached.
- [ENAMETOOLONG] - The size of a pathname exceeds {PATH_MAX}, or a pathname component is longer than {NAME_MAX}.
- [ELOOP] - Too many symbolic links were encountered in resolving path.

The fdetach() function may fail if:

- [ENAMETOOLONG] - Pathname resolution of a symbolic link produced an intermediate result whose length exceeds {PATH_MAX}.

SEE ALSO: fattach(), <stropts.h>.
http://pubs.opengroup.org/onlinepubs/7990989775/xsh/fdetach.html
David Broman's CLR Profiling API BlogInfo about the Common Language Runtime's Profiling API Evolution Platform Developer Build (Build: 5.6.50428.7875)2009-05-26T18:23:54ZGoodbye from Dave<p>Hello to my millions of readers across the world. Or, well, maybe 10 readers across the world. Something like that.</p> <p>Anyway, I wanted to let you all know that the CLR Profiling API is shifting ownership to some other fantastic folks on the CLR team. So after many years of working on this facet of the CLR, it's time for me to say goodbye.</p> <p>Please continue to monitor the .NET Team blog (<a href=""></a>) for exciting new happenings in the CLR, including the profiling API.</p> <p>So long, and thanks for the memories!</p> <p> </p> <p>Dave</p><div style="clear:both;"></div><img src="" width="1" height="1">David Broman a Profiler of Windows Store apps<p>If you’ve written a profiler that consumes the CLR Profiling API, and are looking to update it to analyze Windows Store apps, then you’ll be interested in this new whitepaper: </p> <p><a href=""></a></p> <p>The whitepaper provides recommendations for all the changes you’ll need to make so your profiler can profile Windows Store apps.  As you read through the whitepaper, keep in mind that the source code to <a href="">CLRProfiler 4.5</a> can also be used to illustrate many of these changes you’ll need to make.</p> <p>Happy coding!</p><div style="clear:both;"></div><img src="" width="1" height="1">David Broman sample code for rewriting IL: ILRewrite Profiler<p>In my <a href="">previous post</a>, I mentioned the new home for CLRProfiler, where you can find its latest version, 4.5: <a title="" href=""></a>.  On the same CodePlex site you can also find a new sample, <strong>ILRewrite</strong>.  
ILRewrite contains sample code demonstrating the following:</p> <ul> <li>Parsing a stream of IL bytes into a linked list of editable structures </li> <li>Writing that linked list back out into a new stream of IL bytes </li> <li>Using the metadata API to add AssemblyRefs, TypeRefs, and MemberRefs to modules you instrument </li> <li>Using the metadata API to add brand new methods into mscorlib.dll </li> <li>Using the new RequestReJIT and RequestRevert APIs. </li> </ul> <p>Click on the <strong>ILRewrite10Source </strong>download from the <strong>Downloads</strong> <strong>tab</strong>.  You can find some basic documentation on the <strong>Documentation tab</strong>, and more detailed documentation in the <strong>readme.txt</strong> distributed as part of the source.</p><div style="clear:both;"></div><img src="" width="1" height="1">David Broman 4.5 released: includes Windows Store app support<p>You can find CLRProfiler 4.5 at its new home on CodePlex: <a title="" href=""></a></p> <p>If you’re interested in <em>using</em> CLRProfiler to diagnose memory issues with your managed app, including managed Windows Store apps, all you need is the <strong>CLRProfiler45Binaries</strong> download from the <strong>Downloads</strong> <strong>tab</strong>.  <font color="#ff0000">Please be sure to read the installation instructions first</font>.  You will find a link to the installation instructions from the Downloads tab.</p> <p>If you’re interested in <em>writing </em>your own profiler to diagnose Windows Store apps, you may find CLRProfiler 4.5 useful as an example.  You’ll want the <strong>CLRProfiler45Source</strong> download from the <strong>Downloads</strong> <strong>tab</strong>.  
(Note, in early December we expect also to release a white paper documenting what you’ll need to know about writing a profiler that analyzes Windows Store apps.)</p> <p>In the coming weeks we expect to publish more complete information on CLRProfiler 4.5, but I wanted to get this small post out there now so you’re aware that CLRProfiler 4.5 is available.</p><div style="clear:both;"></div>

David Broman Heap and Alignment Padding<p>The docs for <a href="">GetObjectSize</a> have recently been updated with this info, but I wanted to mention it here, too, to ensure you were aware of this information.</p> <p>Alignment on the GC heap works as follows:</p> <ul> <li><strong><u>On x86</u></strong>: All objects are 4-byte aligned, except for objects on the large-object-heap, which are always 8-byte aligned.</li> <li><strong><u>On x64</u></strong>: All objects are always 8-byte aligned, in all generations.</li> </ul><div style="clear:both;"></div>

David Broman is it safe to use ObjectIDs?<p>As mentioned in <a href="">this post</a>, ObjectIDs are really pointers to managed objects on the GC heap. And as you know, objects get collected or move around on the heap during GCs.
So how do you safely work with ObjectIDs?</p> <p>The overall guidance is that if you plan to dereference an ObjectID or pass it to an ICorProfilerInfo(2,3,4) method, then you must do so either:</p> <ol> <li>From inside a GC, from a thread doing the GC (e.g., in response to one of the GC callbacks, in which case you're guaranteed that the GC is blocked by this call), OR</li> <li>From a callback that gave you the ObjectID (in which case you're guaranteed that the GC is blocked by the callback that gave you the ObjectID)</li> </ol><div style="clear:both;"></div>

David Broman Tokens, Run-Time IDs, and Type Loading<h1>Overview</h1> <p>In this post, I write about the two primary kinds of IDs your profiler deals with, when each kind is appropriate to use, how to convert between those two types of IDs, and some gotchas with those conversions—particularly in how they may invoke the type loader.</p> <h1>The two kinds of IDs</h1> <p>Profilers have to deal with two kinds of IDs.  The first kind consists of IDs from metadata, a.k.a. <strong>metadata tokens</strong>.  These are the mdToken values, like mdMethodDef or mdTypeDef, which are read straight out of the metadata of managed modules.  These values do not change for a given module from process to process.  They are placed in the module by the language compiler that generates the IL (e.g., csc.exe).  Profilers typically use metadata tokens in order to look up symbolic information from the metadata (e.g., for pretty-printing names of methods or classes), and for performing IL rewriting.  Metadata tokens are also fantastic for deferring symbolic lookup to a post-processing phase.  For example, a sampling profiler could log metadata tokens for classes and functions encountered on a sample at run-time and defer looking up the names of those classes and functions to a post-processing phase that occurs after the profiled process has exited.
This keeps the profiler’s data collection lightweight, and is only possible because metadata tokens don’t change so long as the managed modules defining those tokens don’t change.</p> <p>The second kind of ID is the <strong>run-time ID</strong>, such as FunctionID or ClassID, which are defined in corprof.idl.  These values do change from process to process, and they represent internal data structures that the CLR builds up at run-time as it loads modules, loads types, JIT compiles functions, etc.  Profilers use these values as their main currency between ICorProfilerInfo* and ICorProfilerCallback* methods.  The CLR uses these values when it notifies profilers of various events (ICorProfilerCallback* methods), and the profiler passes these values back into the CLR (ICorProfilerInfo* methods) in order to get further information about them.  These IDs are handy because they are your profiler’s key to unlocking class layout, generated code, object addresses, and everything else that the CLR maintains about the actively executing managed code at run-time.  See <a href="">this post</a> for more info about what these IDs really are.</p> <h1>Converting between metadata tokens and run-time IDs</h1> <p>Since metadata tokens are good for some things and run-time IDs are good for others, you will inevitably find yourself in situations where you have one kind of ID handy, but you really need the other kind of ID.  Can you convert from one kind of ID to another?  Yes, but there are some caveats!</p> <p>It’s always safe to go this direction: run-time ID –> metadata token.  Just use methods such as GetFunctionInfo2 and GetClassIDInfo2, which take run-time IDs as input, and provide their module + metadata token as (part of) the output.</p> <p>However, it is problematic going the opposite direction: metadata token –> run-time ID.  Why?  Because a given type may not be loaded yet, and thus the run-time ID may not exist.
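As a concrete aside, a metadata token is just a 4-byte value: the top byte tags the metadata table (TypeDef, MethodDef, and so on), and the low three bytes are a row ID (RID) into that table. The following sketch re-declares the token helpers that ship as macros in corhdr.h, purely for illustration; the constant values shown match the real header.

```cpp
#include <cassert>
#include <cstdint>

// Standalone re-creations of the corhdr.h token helpers, for illustration
// only. In a real profiler you would include corhdr.h instead.
typedef uint32_t mdToken;

const mdToken mdtTypeDef   = 0x02000000; // top byte 0x02 = TypeDef table
const mdToken mdtMethodDef = 0x06000000; // top byte 0x06 = MethodDef table

inline uint32_t TypeFromToken(mdToken tk) { return tk & 0xFF000000; } // table tag
inline uint32_t RidFromToken(mdToken tk)  { return tk & 0x00FFFFFF; } // row index
inline mdToken  TokenFromRid(uint32_t rid, mdToken type) { return rid | type; }
```

Because the RID is just a row index into a table baked into the module on disk, it is stable across processes, which is exactly what makes the deferred symbol resolution described above possible.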
There exist methods on the ICorProfilerInfo* interfaces that go this direction, namely GetFunctionFromToken(AndTypeArgs) and GetClassFromToken(AndTypeArgs).  However, they are dangerous to use (see below), and should be avoided.  Instead, it’s preferable that your profiler build up its own run-time ID –> metadata token map as it encounters run-time IDs, and then perform reverse lookups in that map as necessary.  For example, as your profiler encounters ClassIDs via callbacks like ClassLoadFinished, it goes the “safe” direction (run-time ID –> metadata token), to build up its map.  When it later encounters an mdTypeDef for a class, it checks to see if that mdTypeDef exists yet in its map—if so, your profiler uses that map to find the corresponding ClassID.  Safe and easy.</p> <p>“Dave, stop telling us to do impossible things.  You know full well that profilers which attach to a process after it has started up don’t have the benefit of seeing all the ClassLoad* notifications.  Also, if regular NGEN’d images are used, ClassLoad* notifications are not reliably sent.”</p> <p>True.  Though you will come across ClassIDs in other ways.  Memory profilers will encounter ObjectIDs on the heap, and can call GetClassFromObject to start filling up their maps of ClassIDs and thus mdTypeDefs.  Similarly, sampling profilers encounter FunctionIDs during stack walks, and can then get the ClassIDs containing those FunctionIDs and thus build up their maps that way.</p> <p>“You’re a dreamer, man.  There will still be cases where I have a metadata token, but have not yet encountered the ClassID.  Think about deep inspection of embedded structs!”</p> <p>Yes, that is a good example.  You are an astute reader.  Memory profilers that wish to deeply inspect values of classes and structures on the heap need to know the ClassIDs in order to call GetClassLayout.
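The profiler-maintained map described above might be sketched as follows. This is a hypothetical, simplified cache (the type names are placeholders); note that a real profiler must key on ModuleID + mdTypeDef, since tokens are only unique per module, and must also evict entries on unload, as discussed later.

```cpp
#include <cassert>
#include <cstdint>
#include <unordered_map>

typedef uintptr_t ClassID;   // run-time ID (process-specific)
typedef uint32_t  mdTypeDef; // metadata token (stable per module)

// Hypothetical two-way cache, filled only in the "safe" direction
// (run-time ID -> metadata token) as ClassIDs are encountered via
// ClassLoadFinished, GetClassFromObject, stack walks, etc.
class ClassIdCache {
    std::unordered_map<ClassID, mdTypeDef> idToToken_;
    std::unordered_map<mdTypeDef, ClassID> tokenToId_; // reverse index
public:
    // Record a ClassID the moment it is encountered.
    void Record(ClassID id, mdTypeDef token) {
        idToToken_[id] = token;
        tokenToId_[token] = id;
    }
    // Reverse lookup: succeeds only if the ClassID was already encountered,
    // which is precisely the "safe" property this scheme relies on.
    bool TryGetClassID(mdTypeDef token, ClassID* out) const {
        auto it = tokenToId_.find(token);
        if (it == tokenToId_.end()) return false;
        *out = it->second;
        return true;
    }
};
```

A failed lookup here is meaningful: it tells you the type has not (yet) been observed, so forcing a token-to-ID conversion at that point would risk triggering the type loader.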
This works great when you’re dealing with reference types whose fields point to other reference types: as you bounce from object to object, you can take the ObjectID (i.e., the location in memory where the object starts), pass it to GetClassFromObject, and there’s your ClassID.  But what happens when a struct is embedded inside an object?  Sure, you can get the layout of the object, and determine the offset into the object where the embedded struct lives.  But then what?  How to inspect and report on the values of fields <em>inside the embedded struct</em>?  At this point, all you can get is the mdTypeDef for the struct (from the metadata of the containing class), but you may never have seen the ClassID for that struct.</p> <p>“Told you so.”</p> <h1>Going from metadata token to run-time ID</h1> <p>As I mentioned above, the safest way to do this is to build up your own map and do reverse-lookups as necessary.  If that scheme meets your needs, then by all means do that, and stop reading!  But in the cases where this is insufficient, you may need to resort to using GetFunctionFromToken(AndTypeArgs) and GetClassFromToken(AndTypeArgs).  There is no simple, foolproof way to use these APIs safely, but here is your guideline:</p> <p><strong>Never call GetFunctionFromToken(AndTypeArgs) and GetClassFromToken(AndTypeArgs) unless you’re certain the relevant types have been loaded.</strong>  (“Relevant types” include the ClassID containing the FunctionID whose mdMethodDef you pass to GetFunctionFromToken(AndTypeArgs), and the ClassID whose mdTypeDef you pass to GetClassFromToken(AndTypeArgs).)  If these types have not been loaded, <em>you may cause them to be loaded now</em>!  This is bad because:</p> <ul> <li>This is an easy way to crash the app.  Trying to load a type at the wrong time could cause cycles, causing infinite loops (depending on what your profiler does in response to class load notifications) or outright crashes.
For example, trying to load a type while its containing assembly is still in an early phase of loading is a great and fun way to crash the CLR. </li> <li>You will impact the behavior of the app.  If you’re lucky enough not to crash the app, you’ve still impacted its behavior, by causing types to get loaded in a different order than they normally would.  Any impact to app behavior like this makes it difficult for your users to reproduce problems that they are trying to use your tool to diagnose, or may hide problems that they don’t discover until they run their application outside of your tool. </li> </ul> <h2>Determining whether a class was loaded</h2> <p>So how do you know a class has been fully loaded?</p> <p>Unfortunately, receiving the <strong>ClassLoadFinished</strong> callback does not necessarily mean that ClassID has been fully loaded yet, as the MSDN <a href="">documentation</a> warns us.</p> <p>Basically, the CLR type loader is one of the laziest things on this planet.  It doesn’t want to do anything unless it really, really has to.  The best guideline I can give you is this:  If the app is currently executing managed code that uses a type, then the type is loaded.  For example, if you do a stackwalk, and determine that the app is executing inside of</p> <p>MyRetType MyClass::MyFunction(MyArgumentType myArgumentType)</p> <p>then you can be reasonably assured that the following are loaded:</p> <ul> <li>MyClass </li> <li>MyArgumentType (if it’s a value-type) </li> <li>MyRetType (if it’s a value-type) </li> <li>For any class you know is loaded, so should be: <ul> <li>its base class </li> <li>its value-type fields (not necessarily reference-type fields!) </li> <li>implemented interfaces </li> <li>value-type generic type arguments (and even reference-type generic type arguments in the case of MyClass) </li> </ul> </li> </ul> <p>So much for stacks.  What if you encounter an instance of a class on the heap?  Surely the class is loaded then, right?  
Well, probably.  If you encounter an object on the heap just after a GC (inside <strong>GarbageCollectionFinished</strong>, before you return), it should be safe to inspect the class’s layout, and then peek through ObjectIDs to see the values of their fields.</p> <p>But what if you encounter an object earlier than that?  For example, if you receive an <strong>ObjectAllocated</strong> callback, and call <strong>GetClassFromObject</strong> on the allocated ObjectID, can you be certain the ClassID has been fully loaded?  Well, usually.  But I have seen cases in the past, with types stored in NGENd images, where the CLR may issue an ObjectAllocated callback <em>just before</em> the type has been fully loaded from the NGENd image.  I’ve recently tried to get this to happen again but couldn’t, which probably means this is rather unlikely, but not necessarily impossible.  Ugh.</p> <p>In general, a lot of the uncertainty above comes from types stored in NGENd modules.  If we actually JIT-compile a function at run-time and load the types it uses from non-NGENd modules, then you can have much greater certainty about the above types being loaded.  You can even make further assumptions about locals and types from signatures of direct callees being loaded. </p> <h2>Interlude: Remember the Unloads!</h2> <p>Now is a good time to remind you that, not only is it dangerous to inspect run-time IDs too early (i.e., before they load); it’s also dangerous to inspect run-time IDs too late (i.e., after they <strong>unload</strong>).  For example, if you store ClassIDs and FunctionIDs for later use, and use them “too late”, you can easily crash the CLR.  The profiling API does pretty much no validation of anything (in many cases, it’s incapable of doing so without using up significant amounts of memory to maintain lookup tables for everything).
So we generally take any run-time ID that you pass to ICorProfilerInfo* methods, cast it to an internal CLR structure ptr, and go boom if the ID is bad.</p> <p>There is no way to just ask the CLR if a FunctionID or ClassID is valid.  Indeed, classes could get unloaded, and new classes loaded, and your ClassID may now refer to a totally different (valid) class.  </p> <p>You need to keep track of the unloads yourself.  You are notified when run-time IDs go out of scope (today, this happens at the level of an AppDomain unloading or a collectible assembly unloading—in both cases all IDs “contained” in the unloading thing are now invalid).  Once a run-time ID is out of scope, you are not allowed to pass that run-time ID back to the CLR.  In fact, you should consider whether thread synchronization will be necessary in your profiler to maintain this invariant.  For example, if a run-time ID gets unloaded on thread A, you’re still not allowed to pass that run-time ID back to the CLR on thread B.  So you may need to block on a critical section in thread A during the *UnloadStarted / AppDomainShutdown* callbacks, to prevent them from returning to the CLR until any uses of the contained IDs in thread B are finished.</p> <p>Take a look at the <a href="">docs</a> for more info.</p> <h1>TypeRefs</h1> <p>So far I’ve been talking about how to go from a typeDef to its run-time ID, and by now that should seem hard enough that we don’t need to throw a monkey wrench into the works.  But the sad fact is we’re rarely lucky enough even to have a typeDef.  A class’s fields, or even its base type, might have their types defined in <em>other modules</em>, in which case the metadata tells us the fields or base type might actually be typeRefs, and not typeDefs.  Ugh.  Whaddya do with that?!</p> <p>I’ll tell you what you <em>don’t</em> do.  You don’t call the enticingly-named IMetaDataImport::ResolveTypeRef.
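Before digging into why, here is one way the unload bookkeeping just described might look in practice. This is a simplified sketch under stated assumptions: `std::mutex` stands in for whatever locking discipline your profiler actually uses, and the registry type is hypothetical.

```cpp
#include <cassert>
#include <cstdint>
#include <mutex>
#include <unordered_set>

typedef uintptr_t ClassID;

// Hypothetical registry of run-time IDs the profiler believes are in scope.
// The unload callback must not return to the CLR until no other thread can
// still hand one of the dying IDs back to ICorProfilerInfo*.
class LiveIdRegistry {
    std::mutex lock_;
    std::unordered_set<ClassID> live_;
public:
    void OnLoaded(ClassID id) {
        std::lock_guard<std::mutex> g(lock_);
        live_.insert(id);
    }
    // Called (e.g. from an *UnloadStarted callback) for each contained ID.
    // Taking the lock here blocks until any in-flight use below finishes.
    void OnUnloaded(ClassID id) {
        std::lock_guard<std::mutex> g(lock_);
        live_.erase(id);
    }
    // Wrap every use of an ID that will be passed back to the CLR.
    bool UseIfLive(ClassID id) {
        std::lock_guard<std::mutex> g(lock_);
        if (live_.count(id) == 0) return false; // too late: ID out of scope
        // ... here it would be safe to pass `id` to ICorProfilerInfo*
        //     while the lock is held ...
        return true;
    }
};
```

Holding one lock across every Info call is deliberately coarse; a real profiler might prefer per-AppDomain locks or reference counting so that unrelated lookups do not serialize.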
On the surface, it seems like ResolveTypeRef would do exactly what you want: starting from a typeRef, please find the referenced module and return an IMetaDataImport on that module, along with the typeDef in that target module to which the typeRef refers.  But the problem lies with how ResolveTypeRef determines the module to which a typeRef refers.</p> <p>I think ResolveTypeRef was originally designed for use at build-time (by language compilers), though I don’t know if it’s even used in that scenario anymore.  It is certainly not good for use at run-time, where the loader’s decision on how to locate a referenced assembly can be arbitrarily complex.  Different AppDomains in the same process may have different rules on how to locate the referenced assembly due to varying permission sets, host settings, or assembly versions.  In the limit, the CLR may even <em>call into the user’s managed code</em> to dynamically influence the decision of where the referenced assembly exists (see <a href="">AppDomain.AssemblyResolve Event</a>).</p> <p>ResolveTypeRef doesn’t know about any of this—it was never designed to be used in a running application with all these environmental factors.  It has an extremely simple (and inaccurate) algorithm to iterate through a set of “known modules”, in an arbitrary order, looking for the first one that matches the reference.  What does “known modules” mean?  It’s a set of modules that have been opened into the metadata system, which is NOT the same as the list of modules already loaded by the assembly loader (and thus notified to your profiler).  And it’s certainly not the same as the set of modules installed onto the disk.</p> <p>If you absolutely need to resolve refs to defs, your best bet may be to use your own algorithm which will be as accurate as you can make it, under the circumstances, and which will never try to locate a module that hasn’t been loaded yet.  
That means that you shouldn’t try to resolve a ref to a def if that def hasn’t actually been loaded into a type by the CLR.  Consider using an algorithm similar to the following:</p> <ol> <li>Get the AssemblyRef from the TypeRef to get to the name, public key token and version of the assembly where the type should reside. </li> <li>Enumerate all loaded modules that the Profiling API has notified you of (or via <a href="">EnumModules</a>) (you can filter out a specific AppDomain at this point if you want). </li> <li>In each enumerated module, search for a TypeDef with the same name and namespace as the TypeRef (IMetaDataImport::FindTypeDefByName) </li> <li>Pay attention to <strong>type forwarding</strong>!  Once you find the TypeDef, it may actually be an “exported” type, in which case you will need to follow the trail to the next module.  Read toward the bottom of <a title="" href=""></a> for more info. </li> </ol> <p>The above can be a little bit smarter by paying attention to what order you choose to search through the modules:</p> <ul> <li>First search for the TypeDef in assemblies which exactly match the name, public key token and version for the AssemblyRef. </li> <li>If that fails, then search through assemblies matching name and public key token (where the version is higher than the one supplied – this can happen for Framework assemblies). </li> <li>If that fails, then search through all the other assemblies </li> </ul> <p>I must warn you that the above scheme is <strong>not tested and not supported.  Use at your own risk!</strong></p> <h1>Future</h1> <p>Although I cannot comment on what will or will not be in any particular future version of the CLR, I can tell you that it is clear to us on the CLR team that we have work to do, to make dealing with metadata tokens and their corresponding run-time type information easier from the profiling API.  
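Before moving on: the prioritized search in the last list can be expressed as a simple ranking over the loaded assemblies. The types below are hypothetical placeholders, and version comparison is simplified to a single integer (real assembly identities carry four-part versions).

```cpp
#include <algorithm>
#include <cassert>
#include <string>
#include <vector>

// Hypothetical description of an AssemblyRef target or a loaded assembly.
struct AsmIdentity {
    std::string name;
    std::string publicKeyToken;
    int version; // simplified stand-in for a 4-part assembly version
};

// Lower rank = search earlier. Mirrors the suggested order: exact match
// first, then same name + PKT with a higher version, then everything else.
static int Rank(const AsmIdentity& ref, const AsmIdentity& candidate) {
    if (candidate.name == ref.name &&
        candidate.publicKeyToken == ref.publicKeyToken) {
        if (candidate.version == ref.version) return 0;
        if (candidate.version > ref.version)  return 1;
    }
    return 2;
}

// Orders the already-loaded assemblies into the sequence in which a
// profiler might probe them with IMetaDataImport::FindTypeDefByName.
static std::vector<AsmIdentity> SearchOrder(const AsmIdentity& ref,
                                            std::vector<AsmIdentity> loaded) {
    std::stable_sort(loaded.begin(), loaded.end(),
        [&](const AsmIdentity& a, const AsmIdentity& b) {
            return Rank(ref, a) < Rank(ref, b);
        });
    return loaded;
}
```

As the post warns, this whole scheme is a best-effort heuristic over modules the loader has actually loaded, not a supported resolution algorithm.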
After all, it doesn’t take a rocket scientist to read the above and conclude that it does take a rocket scientist to actually follow all this advice.  So for now, enjoy the fact that what you do is really hard, making you difficult to replace, and thus your job all the more secure.  You’re welcome.</p> <p>Special thanks to David Wrighton and Karel Zikmund, who have helped considerably with all content in this entry around the type system and metadata.</p><div style="clear:both;"></div>

David Broman: A How-To Guide<p>By now, you’ve surely downloaded your copy of the .NET 4.5 Developer Preview, and you’ve opened up the brand-spanking new corprof.idl, and searched that file for all the new APIs available in 4.5.  There’s a bunch with “ReJIT” in the name, and all you need to know is how and when to call what.</p> <p>If none of the above makes any sense to you, you’ve probably stumbled onto the wrong blog.  Indeed, both “rejit” and “ejit” have other definitions (some rather unfriendly) that are completely unrelated to the CLR.  I’m talking about that ReJIT thing you do when you want to instrument and then JIT-compile code that has already been JIT-compiled before.</p> <p>This post is organized in chronological order, telling what your profiler should be doing at the following times in the process:</p> <ul> <li>Startup Time </li> <li>ModuleLoadFinished Time </li> <li>RequestReJIT Time </li> <li>Actual ReJIT Time </li> <li>RequestRevert Time </li> </ul> <h2>Startup Time</h2> <p>The first thing your profiler will do is get itself loaded on startup of a managed application—the old environment variable way, not the new attach way.  I’m sure you’ve already read up on the <a href="">limitations</a>!</p> <p>Inside your profiler’s Initialize() method, it will of course call SetEventMask().  In that call, your profiler must include (<strong>COR_PRF_ENABLE_REJIT | COR_PRF_DISABLE_ALL_NGEN_IMAGES</strong>) in the bitmask.
COR_PRF_ENABLE_REJIT is required to use any of the ReJIT APIs later on (they’ll fail immediately otherwise).  COR_PRF_DISABLE_ALL_NGEN_IMAGES causes the CLR’s assembly loader to ignore all NGENd images (even NGEN /Profile images), and thus all code will be JITted from scratch, and all classes loaded from scratch.  If you try to be tricky and specify only COR_PRF_ENABLE_REJIT (without COR_PRF_DISABLE_ALL_NGEN_IMAGES), then SetEventMask will fail.  Conversely, though, you’re perfectly welcome to specify COR_PRF_DISABLE_ALL_NGEN_IMAGES without COR_PRF_ENABLE_REJIT if you want.</p> <p>At this time you will likely want to set other flags that control optimizations, particularly <strong>inlining</strong> (COR_PRF_DISABLE_OPTIMIZATIONS, COR_PRF_DISABLE_INLINING), or at least subscribe to the inlining callbacks (COR_PRF_MONITOR_JIT_COMPILATION).</p> <p>Typically, your profiler will also create a new thread at this point, call it your “<strong>ReJIT Thread</strong>”.  The expected use-case of ReJIT is to perform instrumentation “on demand”, triggered by some user action (like fiddling with dials in your profiler’s out-of-process GUI).  As such, you’ll need an unmanaged thread of your own creation to receive and act on these requests from out-of-process.  Perhaps you already have such a thread to service other kinds of requests.  It’s perfectly acceptable for such a thread to now also act as your ReJIT Thread.</p> <h2>ModuleLoadFinished Time</h2> <h3>Metadata Changes</h3> <p>As each module loads, you will likely need to add metadata so that your future ReJITs will have the tokens they need.  What you do here heavily depends on the kind of instrumentation you want to do.  I’m assuming you’re doing instrumentation that adds some calls from the user code into brand new profiler helper methods you will add somewhere.
If you plan to instrument mscorlib, you will likely want to add those profiler helper methods into mscorlib (remember, mscorlib is not allowed to contain an AssemblyRef that points to any other assembly!).  Otherwise, perhaps you plan to ship a managed helper assembly that will sit on your user’s disk, and all your profiler helper methods will reside in this on-disk managed helper assembly.</p> <p>So…</p> <p>IF the module loading is mscorlib AND you plan to <strong>add your profiler helper methods</strong> into mscorlib, THEN use the metadata APIs now to add those methods.</p> <p>IF the module loading contains methods that you might possibly ever want to instrument, THEN use the metadata APIs to <strong>add any AssemblyRefs, TypeRefs, MemberRefs, etc.</strong>, which point to your profiler helper methods, that you might possibly need later when you potentially instrument methods from this loading module.  The guiding principle here is that metadata changes may be done at ModuleLoadFinished time, and not later.  So you need to assume you might possibly want to ReJIT methods in the loading module <em>eventually</em>, and proactively add to the loading module whatever metadata you will eventually need (should you actually perform the ReJIT later), and add that metadata <em>now</em>, just in case.</p> <h3>Re-Request Prior ReJITs</h3> <p>This won’t make much sense until you’ve read the next section, but I’m placing it here to keep it in chronological order.  If you’ve made a prior call to RequestReJIT for an unshared (non-domain-neutral) ModuleID, AND if you want that request to apply to the mdMethodDef that appears in all other unshared copies of the module, AND if you’re inside ModuleLoadFinished for the load of a new ModuleID that is just such a new unshared copy of the module, THEN you’ll want to explicitly call RequestReJIT on this newly-loaded ModuleID with that mdMethodDef.  
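One way to implement that re-request bookkeeping is sketched below. This is a hypothetical helper (module identity is reduced to a name string for illustration; a real profiler would compare full module paths or assembly identities).

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <set>
#include <string>
#include <vector>

typedef uint32_t mdMethodDef;

// Hypothetical tracker: remembers which methods (module name + token) have
// outstanding ReJIT requests, so that when a *new unshared copy* of the
// same module loads, the profiler can call RequestReJIT again for the
// freshly loaded ModuleID.
class ReJitRequestTracker {
    std::map<std::string, std::set<mdMethodDef>> pending_;
public:
    // Record the intent when the user first asks for instrumentation.
    void Requested(const std::string& moduleName, mdMethodDef md) {
        pending_[moduleName].insert(md);
    }
    // From ModuleLoadFinished: which tokens should be re-requested for
    // this module's newly created ModuleID?
    std::vector<mdMethodDef> TokensToReRequest(const std::string& moduleName) const {
        std::vector<mdMethodDef> out;
        auto it = pending_.find(moduleName);
        if (it != pending_.end())
            out.assign(it->second.begin(), it->second.end());
        return out;
    }
};
```

Whether you consult this tracker for every newly loaded unshared copy, or only for selected AppDomains, is exactly the policy choice the surrounding text describes.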
Note that this is optional—if you want to treat AppDomains differently and want, say, only one unshared copy of the function to be ReJITted, then you’re perfectly welcome to cause that behavior and not to call RequestReJIT on any new ModuleIDs relating to the module.  Come back and re-read those last two sentences after you’ve read the next section.</p> <h2>RequestReJIT Time</h2> <p>Now imagine your user has turned some dial on your out-of-process GUI, to request that some functions get instrumented (or re-instrumented (or re-re-instrumented (or …))).  This results in a signal sent to your in-process profiler component.  Your ReJIT Thread now knows it must call <strong>RequestReJIT</strong>.  You can call this API once in bulk for a list of functions to ReJIT.  Note that functions are expressed in terms of ModuleID + mdMethodDef metadata tokens.  A few things to note about this:</p> <ul> <li>You request that all instantiations of a generic function (or function on a generic class) get ReJITted with a single ModuleID + mdMethodDef pair.  You cannot request a specific instantiation be ReJITted, or provide instantiation-specific IL.  This is nothing new, as classic first-JIT-instrumentation should never be customized per instantiation either.  But the ReJIT API is designed with this restriction in mind, as you’ll see later on. </li> <li>ModuleID is specific to one AppDomain for unshared modules, or the SharedDomain for shared modules.  Thus: <ul> <li>If ModuleID is shared, then your request will simultaneously apply to all domains using the shared copy of this module (and thus function) </li> <li>If ModuleID is unshared, then your request will apply only to the single AppDomain using this module (and function) </li> <li>Therefore, if you want this ReJIT request to apply to <em>all unshared copies</em> of this function: <ul> <li>You’ll need to include all such ModuleIDs in this request. 
</li> <li>And… any <em>future</em> unshared loads of this module will result in new ModuleIDs.  So as those loads happen, you’ll need to make further calls to RequestReJIT with the new ModuleIDs to ensure those copies get ReJITted as well. </li> <li>This is optional, and need only be done if you truly want this ReJIT request to apply to all unshared copies of the function.  You’re perfectly welcome to ReJIT only those unshared copies you want (and / or the shared copy). </li> <li>Now you can re-read the “Re-Request Prior ReJITs” section above.  :-) </li> </ul> </li> </ul> </li> </ul> <h3>More on AppDomains</h3> <p>This whole shared / multiple unshared business can get confusing.  So to bring it home, consider your user.  If your user expresses instrumentation intent at the level of a class/method name, then you pretty much want to ReJIT every copy of that function (all unshared copies plus the shared copy).  But if your user expresses instrumentation intent at the level of a class/method name <em>plus AppDomain </em>(think one single AppPool inside ASP.NET), then you’d only want to ReJIT the copy of the function that resides in the single ModuleID associated with that AppDomain.</p> <p>The SharedDomain can make that last alternative tricky, though.  Because if the ModuleID ends up belonging to the SharedDomain, and you ReJIT a method in that ModuleID, then all AppDomains that share that module will see your instrumentation (whether you want them to or not).  This is due to the very nature of SharedDomain / domain-neutrality.  There’s only one shared copy of this function to instrument, so if two domains share the function, they both see it, either with or without instrumentation.  It doesn’t make sense to instrument the function from the point of view of only one of those two domains.</p> <h3>Pre-ReJIT</h3> <p>Obviously, the main coolness of RequestReJIT is that you can call it with a function that has already been JITted.
But one of the niceties of RequestReJIT is that you don’t actually have to wait until a function is first JITted to use it.  You can request a ReJIT on a function that has never been JITted before (I call this “Pre-ReJIT”).  Indeed, with generics, there’s no way to know if all the instantiations that will ever be used in an AppDomain have been JITted or not.  There may always be some important instantiation that has not been JITted yet.  RequestReJIT takes all this into account as follows:</p> <p>If a function (or generic instantiation) has already been JITted, it is marked for ReJIT next time it is called.</p> <p>If a function (or generic instantiation) has not yet been JITted, then it is marked internally for “Pre-ReJIT”.  This means that once it is called, its original (non-instrumented) IL gets JIT-compiled as usual.  Immediately after, it is then ReJITted.  In this way, a Pre-ReJIT request works exactly like a ReJIT request.  Original IL is compiled first, and then instrumented IL is compiled later.  This ensures we can easily “revert” back to the original code at a later time using the same revert mechanism.  (See below.)</p> <h2>Actual ReJIT Time</h2> <p>You may have noticed that you have read a whole lot of words so far, but we haven’t yet provided the instrumented IL to the CLR.  This is because the function hasn’t ReJITted yet.  You’ve only <em>requested </em>that it be ReJITted.  But the actual ReJITting happens the next time the function is called.   Until then, any threads already executing inside functions you requested to be ReJITted <em>stay</em> in those functions, and don’t see the instrumented code until they return and call the functions again.  
Once a function is finally called for the first time after its RequestReJIT, you get some callbacks.</p> <p>IF this is the first generic instantiation to ReJIT, for a given RequestReJIT call (or this is not a generic at all), THEN:</p> <ul> <li>CLR calls <strong>GetReJITParameters</strong> <ul> <li>This callback passes an ICorProfilerFunctionControl to your profiler.  Inside your implementation of GetReJITParameters (and no later!) you may call into ICorProfilerFunctionControl to provide the instrumented IL and codegen flags that the CLR should use during the ReJIT </li> <li>Therefore it is here where you may: <ul> <li>Call GetILFunctionBody </li> <li>Add any new LocalVarSigTokens to the function’s module’s metadata.  (You may not do any other metadata modifications here, though!) </li> <li>Rewrite the IL to your specifications, passing it to ICorProfilerFunctionControl::SetILFunctionBody. </li> </ul> </li> <li>You may NOT call ICorProfilerInfo::SetILFunctionBody for a ReJIT!  This API still exists if you want to do classic first-JIT IL rewriting only. </li> <li>Note that GetReJITParameters expresses the function getting compiled in terms of the ModuleID + mdMethodDef pair you previously specified to RequestReJIT, and <em>not </em>in terms of a FunctionID.  As mentioned before, you may not provide instantiation-specific IL! </li> </ul> </li> </ul> <p>And then, for all ReJITs (regardless of whether they are for the first generic instantiation or not):</p> <ul> <li>CLR calls <strong>ReJITCompilationStarted</strong> </li> <li>CLR calls <strong>ReJITCompilationFinished</strong> </li> </ul> <p>These callbacks express the function getting compiled in terms of FunctionID + ReJITID.  (ReJITID is simply a disambiguating value so that each ReJITted version of a function instantiation can be uniquely identified via FunctionID + ReJITID.)  Your profiler doesn’t need to do anything in the above callbacks if it doesn’t want to.  
They just notify you that the ReJIT is occurring, and get called for each generic instantiation (or non-generic) that gets ReJITted.</p> <p>And of course, for any calls to these functions after they have been ReJITted, there are no further ReJIT compilations or callbacks to your profiler.  This ReJITted version is now the current and only version for all new calls to the function.</p> <h3>Versions</h3> <p>Your profiler is welcome to call RequestReJIT again on these functions, and the cycle starts again.  The next time a call comes in, they’ll get ReJITted again, and you’ll provide instrumented IL at that time, as usual.  At any given time, only the most recently ReJITted version of a function is active and in use for new calls.  But any prior calls still inside previously ReJITted (or original) versions of the function stay in that version until they return.</p> <h2>RequestRevert Time</h2> <p>Eventually your user may turn the dial back down, and request that the original, un-instrumented, version of the function be reinstated.  When this happens, your profiler receives this signal from out-of-proc using your nifty cross-proc communication channel, and your ReJIT Thread calls <strong>RequestRevert</strong>.</p> <p>At this time, the CLR sets the original version of the function that it JITted the first time as being the <em>current</em> version for all future calls.  Any prior calls still executing in various ReJITted versions of the function remain where they’re at until they return.  All new calls go into the version originally JITted (from the original IL).</p> <p>Note that RequestRevert allows you to revert back to the original JITted IL, and not back to some previous ReJITted version of the IL.  
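The versioning rules from the last few sections can be modeled roughly like this. This is a toy model of one function's IL versions, not CLR code; it exists only to make the "revert always means original" rule concrete.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Toy model of the CLR's per-function version selection:
// - RequestReJIT makes a newly supplied body current for *new* calls;
// - RequestRevert always reinstates the original JITted body,
//   never an intermediate ReJITted one.
class FunctionVersions {
    std::string original_;
    std::vector<std::string> rejits_; // history of instrumented bodies
    bool reverted_ = true;            // initially, the original is current
public:
    explicit FunctionVersions(std::string originalIL)
        : original_(std::move(originalIL)) {}

    void RequestReJIT(std::string instrumentedIL) {
        rejits_.push_back(std::move(instrumentedIL));
        reverted_ = false;
    }
    void RequestRevert() { reverted_ = true; }

    // The body used by the next call into the function. Threads already
    // executing an older version keep running it until they return.
    const std::string& Current() const {
        return (reverted_ || rejits_.empty()) ? original_ : rejits_.back();
    }
};
```

Getting back to an intermediate version is then naturally expressed as another `RequestReJIT` with that version's IL, which matches the guidance in the text.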
If you want to revert back to a previous ReJITted version of the IL, you’ll need to do so manually, by using RequestReJIT instead, and providing that IL explicitly to the CLR.</p> <h2>Errors</h2> <p>If there are any errors with performing the ReJIT, you will be notified by the dedicated callback ICorProfilerCallback4::ReJITError().  Errors can happen at a couple of times:</p> <ul> <li><u>RequestReJIT Time</u>: These are fundamental errors with the request itself.  This can include bad parameter values, requesting to ReJIT dynamic (Ref.Emit) code, out of memory, etc.  If errors occur here, you’ll get a callback to your implementation of ReJITError(), sandwiched inside your call to RequestReJIT on your ReJIT Thread. </li> <li><u>Actual ReJIT Time</u>: These are errors we don’t encounter until actually trying to ReJIT the function itself.  When these later errors occur, your implementation of ReJITError() is called on whatever CLR thread encountered the error. </li> </ul> <p>You’ll note that ReJITError can provide you not only the ModuleID + mdMethodDef pair that caused the error, but optionally a FunctionID as well.  Depending on the nature of the error that occurred, the FunctionID may be available, so that your profiler may know the exact generic instantiation involved with the error.  If FunctionID is null, then the error was fundamental to the generic function itself (and thus occurred for all instantiations).</p> <p>Ok, that about covers it on how your profiler is expected to use ReJIT.  As you can see, there are several different tasks your profiler needs to do at different times to get everything right.  But I trust you, you’re smart.</p><div style="clear:both;"></div> <h1>David Broman: Limitations in .NET 4.5</h1> <p>Not everything you might expect from ReJIT will be in 4.5.  Many folks who have asked for this feature have a pre-set list of sub-features in mind that <em>of course </em>will be supported by ReJIT, they think.
But many of those obvious sub-features will not be available in .NET 4.5.  That’s what this post is about.</p> <h2>Is ReJIT For You?</h2> <p>If you’re writing a monitoring tool, typically run in production, and…</p> <p>If your tool is always on, always monitoring, but needs a way to fine-tune the amount of instrumentation it does without forcing the monitored application to restart, and…</p> <p>If your tool instruments potentially everything, including framework assemblies like mscorlib, and you therefore disable the use of NGENd images and are willing to put up with longer startup times as a result, then…</p> <p>ReJIT may be for you.</p> <p>The ReJIT we plan to release in .NET 4.5 was designed with this scenario in mind.  As such, there are many potential sub-features of ReJIT that will not be available, because they are not essential for this scenario.</p> <h2>List those Limitations!</h2> <p><u>ReJIT + Attach? No!</u></p> <p>In order to enable ReJIT, your profiler must load at startup, and set an immutable flag in your Initialize method that enables the ReJIT functionality.</p> <p><u>ReJIT + NGEN? No!</u></p> <p><u>Metadata changes in ModuleLoadFinished only</u></p> <p><u>Memory reclaimed at AppDomain unload, <em>not</em> revert</u></p> <p><u>ReJIT inlined functions?  No!</u></p> <p><u>ReJIT + managed debugging? No!</u></p> <p>While not technically disabled, it is not advised or supported to run a managed debugger against a process that also has a ReJITting profiler enabled.  (Native-only debugging is just fine, though.)</p> <p>Whatever debugging support there is, is only there for you, the profiler writer, and <em>not</em> for end users.</p> <p><u>ReJIT dynamic code? 
No!</u></p> <p>Not a new limitation, but just to be explicit, profilers are not allowed to instrument dynamic code generated via the Reflection.Emit namespace, and that includes ReJIT.</p> <h2>Why so strict?</h2> <p>ReJIT, as originally conceived by the CLR team, involved allowing profilers to attach to running processes and then instrument arbitrary code at any time.  Just that one sentence would eliminate almost all the restrictions mentioned above.  So what happened?</p> <p>Reality, that’s what.</p> <p>Stuff takes time.  And in this case a <em>lot</em>.</p> <div style="clear:both;"></div> <h1>David Broman: ReJIT for Realz?!</h1> <p>Yes!  Check out this new video on channel 9 for the scoop:</p> <p><a title="" href=""></a></p> <p><strong>ReJIT it!</strong></p> <p>If you want to play with this feature, it is available in the .NET 4.5 Developer Preview.  You can search for it on the Microsoft Download Center, and you’ll find the .NET 4.5-only link and the full Visual Studio 11 link:</p> <p><a title="" href=""></a> <br /><a title="" href=""></a></p> <div style="clear:both;"></div> <h1>David Broman: CLRProfiler V4 Released</h1> <p>CLRProfiler V4 is now publicly available.  You may download it from here: </p> <p><a title="" href=""></a></p> <p>This is of interest both to folks who want a free profiler to diagnose memory issues with their managed apps, and to folks who author profilers of their own and would like to look at the source code of a real-world example of a profiler.</p> <h2>If you just want to run it…</h2> <p>Then the following new features will be of interest to you:</p> <ul> <li>CLRProfiler V4 allows you to profile managed code that uses .NET 2.0, 3.0, 3.5, or 4.0.  However, you must always have .NET 4.0 installed on your box in order to use CLRProfiler V4, as CLRProfiler itself contains managed code that depends on .NET 4.0. </li> <li>CLRProfiler V4 can target <strong>Silverlight 4</strong> apps. 
</li> <li>CLRProfiler V4 may be used to <strong>attach to and detach from</strong> live processes, to generate heap graphs.  (Note:  This feature requires the process to be running .NET 4.0, does not work against Silverlight, and does not allow gathering allocation call stacks.) </li> <li>CLRProfiler V4 understands <strong>in-process side-by-side CLR instances</strong>, and can allow you to pick which CLR instance from a given process to profile. </li> </ul> <h2>If you want to write a profiler…</h2> <p>Then you may look through the source code for examples of all the above features, including how to target Silverlight, use the attach / detach API, and how to implement the “pick-one” approach for in-process side-by-side CLR instances.</p> <p>Also, CLRProfiler V4 consumes the new Enter3/Leave3/Tailcall3 signatures along with FunctionIDMapper2, so you can consult the source for examples of the naked assembly language wrappers.</p> <h2>Problems?</h2> <p>If you encounter problems with CLRProfiler V4, the best place to go is our forum:</p> <p><a title="" href=""></a></p> <div style="clear:both;"></div> <h1>David Broman: SigParse uploaded to MSDN Code Gallery</h1> <p>A while back I <a href="" title="posted">posted</a> some sample code written by Rico Mariani to parse CLR metadata signatures. This code is now also available on the MSDN Code Gallery <a href="" title="SigParse">SigParse</a> page. 
If any of you were nervous about incorporating that source into your product without an official license agreement, please take a look at the MSDN Code Gallery page, where you'll find the source code is now governed by the MICROSOFT PUBLIC LICENSE (Ms-PL).</p> <div style="clear:both;"></div> <h1>David Broman: In-process side-by-side CLR instances, and a free test harness</h1> <p>My previous post on <a href="">New stuff in Profiling API for upcoming CLR 4.0</a> mentioned that any profiler that implements ICorProfilerCallback3 must be “side-by-side aware”.  This post goes into more detail on how to do this, and how to test it.</p> <h1>What are in-process side-by-side CLR instances?</h1> <p>To understand this fully, take a look at this CLR Inside Out <a href="">article</a>.  It will help you understand the “what” and the “why” around this feature.  The simple summary is that, in order to aid with compatibility in certain scenarios, a single process can now have multiple instances of the CLR loaded simultaneously.  What that means today is that, in one process, you can have a V4-based CLR and either a V1.1-based or a V2.0-based CLR (though not both).  In the future, the possibilities will likely grow as more major versions of the CLR are released.</p> <p>The CLR instances are unaware of each other.  If, say, a V2 and V4 CLR are loaded, then any managed code running against the V2 CLR will look just like native code to the V4 CLR.  And vice-versa: any managed code running against the V4 CLR will look just like native code to the V2 CLR.  There is no direct communication between these two instances of the CLR.  What is possible is for, say, the V2 managed code to P/Invoke out to native code, which then calls a COM object implemented in V4 managed code. 
In that way, one CLR can invoke another, but only in this indirect kind of way with native code in the middle.</p> <p>To support in-process side-by-side CLR instances, the CLR team has extended the hosting interfaces via the new “metahost” interface.  The metahost interface provides a way to operate over multiple CLRs that may be loaded into a single process, with each CLR represented by an ICLRRuntimeInfo interface.  If you have implemented the “attach” feature for your profiler, then you are already familiar with the metahost interface.  You can find some profiler-specific information about metahost, along with sample code for implementing attach, in <a href="">this blog post</a>. </p> <p>Again, I’d encourage you to read through the CLR Inside Out article linked above, as I don’t plan to repeat its content here.  That will give you context on why the feature of in-process side-by-side CLR instances even exists, and what problems it helps to solve.  What I will talk about is how this situation will appear to your profiler, and how your profiler can deal with it.</p> <h1>Profiler’s Point of View</h1> <p>When multiple CLR instances are loaded into a single process, and those CLRs each load your profiler, then your profiler DLL will be loaded multiple times, once per CLR instance.  This means your DLL gets LoadLibrary’d multiple times and you’ll receive multiple “CreateInstance” calls to your class factory object.  Depending on how you code your “CreateInstance”, that could mean multiple instances of your ICorProfilerCallback implementation would be generated.</p> <p>As you know, when the same DLL is LoadLibrary’d multiple times, it isn’t really “loaded” multiple times.  Windows just increments a reference count on that DLL (to be released via each FreeLibrary call).  Any global or static state in that DLL is shared across all code that executes in that DLL, regardless of how many LoadLibrary calls are made.  
This means that, since your class factory’s CreateInstance() call could theoretically be called on two threads at the same time, any access CreateInstance() makes to globals in your DLL should be protected with synchronization primitives like a critical section.  Furthermore, if you allow multiple instances of your ICorProfilerCallback implementation to be created, then if they access any global or static class data, that access will need to be protected as well, if it isn’t already.</p> <h1>Pick First, Pick One</h1> <p>The easiest way for your profiler to become side-by-side aware is to choose to profile only one CLR at a time, and to add code to enforce that.  This is fairly easy to do, and is a quite reasonable solution to the in-process side-by-side problem.  In fact, the Visual Studio 2010 profiler and the upcoming CLRProfiler V4 update currently choose this approach.</p> <p>With <strong>Pick First</strong>, your class factory CreateInstance simply keeps track of whether it was already called.  First time through, it creates your ICorProfilerCallback implementation and succeeds.  Thereafter, it fails.  The advantage of this approach is that it’s the easiest to implement, and will always do what the user wants in scenarios where only one CLR is loaded.  The disadvantage is that, when multiple CLRs are loaded, although your profiler will operate just fine, the user may be upset if she was trying to profile the second CLR that got loaded, as your profiler provides no way to do that.</p> <p>With <strong>Pick One</strong>, you provide some kind of UI to your user to specify which CLR to profile.  This could be fancy GUI, less-fancy command-line parameters, whatever you like.  The user would specify the CLR in terms of its version, and your profiler would refuse to profile any CLR that didn’t match that version.  While being only slightly more difficult to implement than Pick First, this ensures your user remains in control of what gets profiled.  
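<p>The thread-safety concern and the Pick First gate described above can be sketched together as follows.  This is a mocked illustration, not real COM: the HRESULT values and class name are invented stand-ins, and the real class factory would of course also hand back an interface pointer.</p>

```cpp
#include <atomic>

// Mocked sketch of a "Pick First" class factory gate: the first
// CreateInstance call wins; any later call, including a racing call from
// a second CLR loading the same profiler DLL on another thread, is refused.
using HRESULT = long;
const HRESULT S_OK = 0;
const HRESULT PROFILER_CANCEL = 0x10041111;  // stand-in for CORPROF_E_PROFILER_CANCEL_ACTIVATION

class PickFirstFactory {
    std::atomic<bool> m_alreadyCreated{false};
public:
    // Called once per CLR instance that loads the profiler DLL.
    HRESULT CreateInstance(/* REFIID and void** out-param elided */) {
        // exchange() is atomic, so even two simultaneous calls from two
        // CLRs agree on a single winner without a critical section.
        if (m_alreadyCreated.exchange(true)) {
            return PROFILER_CANCEL;  // politely refuse every later CLR
        }
        // ...create the single ICorProfilerCallback implementation here...
        return S_OK;
    }
};
```

<p>A Pick One variant would succeed CreateInstance unconditionally and move the decision into Initialize, as the text goes on to describe.</p>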
With this approach, you’d always succeed your CreateInstance method, and then do the version checking inside your ICorProfilerCallback::Initialize() method, which you would succeed or fail, depending on whether your version check passes.  To do your version check, you would first QueryInterface for ICorProfilerInfo3.  If that fails, you know you’re dealing with a CLR based on 2.0 or earlier.  If that succeeds, the CLR is 4.0 or later.  You would then use ICorProfilerInfo3::GetRuntimeInformation to get the specific version information of the CLR to check against what the user selected.  Finally, you now know if you should succeed or fail ICorProfilerCallback::Initialize().</p> <p>When you fail either your CreateInstance, ICorProfilerCallback::Initialize, or ICorProfilerCallback3::InitializeForAttach method due to an intentional choice not to profile that CLR (as opposed to encountering some kind of user-serviceable problem), I’d recommend you return the new “CORPROF_E_PROFILER_CANCEL_ACTIVATION” HRESULT. CORPROF_E_PROFILER_CANCEL_ACTIVATION is special in that the new CLR V4 will not log an error to the event log when it receives this HRESULT from the profiler’s CreateInstance or ICorProfilerCallback::Initialize method.  Instead, CLR V4 logs a less-alarming informational message to the event log stating that the profiler has intentionally chosen not to profile that CLR in that process.  Of course, in cases where you fail for an exceptional reason, you should continue to dutifully surface whatever HRESULT describes the problem, and let the CLR treat that as an error so that the user is properly informed of the problem.</p> <h1>Pick Many, Pick All</h1> <p>You can provide more value to your users—particularly those who may be dealing with in-process side-by-side CLR scenarios more often—if you allow the user to profile multiple CLRs that may be loaded in a given process.  
This approach would allow your profiler to provide the most information to the user, including capturing timings of all managed methods (regardless of the governing CLR), present interleaved call stacks including code from all runtimes, analyze all managed heaps from all runtimes, monitor the behavior of all managed code in the process, etc.  Doing this properly will require that you take care to synchronize, or in some cases remove entirely, global state from your DLL.</p> <p>Just as a simple example, many profilers use global pointers that point to their ICorProfilerCallback implementation and / or the CLR’s ICorProfilerInfo implementation (e.g., say g_pMyCallback, g_pInfo).  This is no longer acceptable when there could be arbitrarily many of your ICorProfilerCallback implementations instantiated, and CLR’s ICorProfilerInfos lying around.  The key problem is this: if you pass an ID from one CLR to the ICorProfilerInfo of another CLR, you will crash.  Example: CLR #1 informs you about a FunctionID which you pass to CLR #2’s GetFunctionInfo().  Boom.  As many of you know, the CLR is intolerant about bogus IDs.  And from any given CLR’s point of view, an ID from a different CLR is garbage.</p> <p>This means you must take care always to communicate with the appropriate CLR.  Thus, you’ll want to reevaluate any reliance your DLL has on global state, and protect or remove it appropriately.  In this section I’ll list some of the things to look out for, and recommended ways to address them.</p> <h2>Global Profiler Manager</h2> <p>For the most part, communicating with the right CLR is straightforward.  Suppose multiple CLRs get loaded, and that forces multiple instances of your ICorProfilerCallback implementation to be created.  
If:</p> <ul> <li>Each ICorProfilerCallback implementation keeps a pointer to the ICorProfilerInfo it was given at initialization time, and </li> <li>Each ICorProfilerCallback implementation always uses this pointer to call into ICorProfilerInfo in response to callbacks </li> </ul> <p>then you’re mostly there.  Any Info calls you make in response to a callback will always be routed to the appropriate CLR.  Simple, right?  Yeah, but what about the ways your profiler gets control other than ICorProfilerCallback methods?  For example:</p> <ul> <li>Enter/Leave/Tailcall probes </li> <li>Callouts you add via instrumentation </li> <li>Separate threads you create for sampling, forcing GCs or other reasons </li> </ul> <p>An approach I’d recommend is that you create a single-instance, global profiler manager, which gets invoked in the above cases and “figures out” which ICorProfilerCallback implementation (and thus which ICorProfilerInfo pointer) to route the request to.</p> <p><em>[Diagram: the global profiler manager routing Enter/Leave/Tailcall probes, instrumentation callouts, and profiler-created threads to the appropriate per-CLR ICorProfilerCallback instance]</em></p> <p>Diagrams are spiffy.  But the interesting part is how the single global profiler manager figures out which ICorProfilerCallback implementation to talk to.  A diagram that just shows arrows doesn’t really help explain that.  The following sections address this.</p> <h2>Enter / Leave / Tailcall / FunctionIDMapper</h2> <p>Since Enter, Leave, Tailcall, and FunctionIDMapper are implemented as global C functions, they’re technically part of your global profiler manager.  So they must somehow figure out which CLR invoked them. 
The key to this is a new parameter added to the new V4 SetFunctionIDMapper2:</p> <pre class="code">HRESULT SetFunctionIDMapper2( [<span style="color: blue">in</span>] FunctionIDMapper2 *pFunc, <span style="background-color: #ffff00">[<span style="color: blue">in</span>] <span style="color: blue">void</span> *clientData</span>);</pre> <p>clientData can be anything you like, though typically it will be a pointer to your ICorProfilerCallback implementation instance that makes the call to SetFunctionIDMapper2.  Then, when your mapper function gets called:</p> <pre class="code"><span style="color: blue">typedef </span>UINT_PTR <span style="color: blue">__stdcall </span>FunctionIDMapper2( FunctionID funcId, <span style="background-color: #ffff00"><span style="color: blue">void</span> *clientData</span>, BOOL *pbHookFunction);</pre> <p>the CLR passes that clientData right back to you.  Now your global profiler manager can associate this FunctionID with the correct ICorProfilerCallback implementation.  You can store this association in a hash table.  Better still, use the FunctionIDMapper as it was intended and return an ID of your own: typically an index into an array you build up, where each entry contains the correct ICorProfilerCallback implementation (as well as the FunctionID you remapped).  That avoids a hash table lookup in your probes.</p> <p>Now, the next time your Enter/Leave/Tailcall probes are called, your global profiler manager will be able to map the FunctionID provided (or your remapped client ID, assuming your FunctionIDMapper returned one) to the appropriate ICorProfilerCallback implementation.</p> <h2>Instrumentation</h2> <p>Some profilers rewrite IL to call into a managed helper library that ships with the profiler, and that managed library may then P/Invoke back into the native profiler code.  In such cases, how can the native profiler code know which CLR instance did the P/Invoke? 
The target of the P/Invoke would likely be your global profiler manager, which then needs to determine which ICorProfilerCallback implementation to route the call to.  This knowledge is required if the native profiler code needs to call any ICorProfilerInfo methods to do further inspection on any of the parameters the managed helper library passed in the P/Invoke.  So how does the global profiler manager figure out the right CLR?</p> <p>One way is to take advantage of a new method in the .NET Framework, <a href="">System.Runtime.InteropServices.RuntimeEnvironment.GetRuntimeInterfaceAsObject</a>.  That returns the ICLRRuntimeInfo (i.e., the interface metahost uses to describe a given CLR version’s instance, as mentioned above) for the CLR instance that managed the calling code.  If your managed helper library passes that ICLRRuntimeInfo pointer to your native global profiler manager, then your native code can use that ICLRRuntimeInfo to determine which CLR version did the P/Invoke, and thus which ICorProfilerCallback implementation to route that call to.</p> <p>Another option is to do version-specific instrumentation.  When your profiler receives JITCompilationStarted and then calls SetILFunctionBody, your profiler knows which CLR is managing that particular method (because you receive these notifications on the appropriate ICorProfilerCallback interface).  Your profiler could then add specific markers to the instrumented code (e.g., adding integer constants like 1, 2, or 4 to indicate the CLR version, or really any other plan you can think up). 
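<p>Both routing schemes (the clientData handed back through FunctionIDMapper2, and markers baked into instrumented code) ultimately feed a lookup table owned by the global profiler manager.  Here is a mocked sketch of the array-based variant; none of these types are the real Profiling API types, and the real FunctionID/ClientID values come from the CLR rather than being chosen by you.</p>

```cpp
#include <cstdint>
#include <vector>

// Mocked sketch (no real Profiling API types) of the routing idea above:
// FunctionIDMapper2 remaps each FunctionID to an index into a table owned
// by the global profiler manager; each slot records the original FunctionID
// and which per-CLR callback instance it belongs to, so an Enter/Leave/
// Tailcall probe (or a P/Invoke callout) can route straight to the right CLR.
using FunctionID = std::uintptr_t;
using ClientID   = std::uintptr_t;

struct CallbackInstance { int clrIndex; };  // stand-in for the real callback class

struct Slot {
    FunctionID funcId;        // the CLR-issued ID we remapped
    CallbackInstance* owner;  // which per-CLR callback instance it belongs to
};

class GlobalProfilerManager {
    std::vector<Slot> m_slots;
public:
    // Called from FunctionIDMapper2; 'owner' would come from the clientData
    // that the owning callback instance registered with that CLR.
    ClientID Remap(FunctionID funcId, CallbackInstance* owner) {
        m_slots.push_back({funcId, owner});
        return m_slots.size() - 1;  // the remapped ID the probes will see
    }

    // Called from an Enter/Leave/Tailcall probe with the remapped ID:
    // an O(1) array lookup, no hash table needed.
    const Slot& Resolve(ClientID id) const { return m_slots[id]; }
};
```
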
Then, when the instrumented code gets invoked, it can pass your special values to the P/Invoke, which your global profiler manager can inspect to determine which CLR instance was in control.</p> <h2>DoStackSnapshot</h2> <p>If you’re writing a sampling profiler, or any profiler that needs to occasionally take snapshots of the stack (without building a shadow stack via Enter/Leave/Tailcall), then you use the DoStackSnapshot (DSS) API.  If your profiler implements the “Pick Many” or “Pick All” approach, then it can provide your users with the advantage of seeing more complete stacks, including managed code from all runtimes.  Remember that code managed by one CLR looks like native code to another CLR.  So by having your profiler simultaneously load against all runtimes, the profiler can then provide the most complete view of the stack.  Otherwise, chains of frames managed by a CLR the profiler is not loaded against would look like native code, with the profiler unable to report anything useful about those frames (such as function names).</p> <p><em>[Diagram: a mixed-mode call stack interleaving native frames with frames managed by different in-process CLR instances]</em></p> <p>A pick-all, “mixed-mode” (i.e., native + managed) profiler can assemble the frames managed by various CLRs, along with native frames, into a single stack view by using an algorithm like the following.  (Note that a complete algorithm for doing a mixed-mode stack walk is out of scope of this post.  More information on mixed-mode stack walking can be found <a href="">here</a>.)</p> <ol> <li>Global Profiler Manager begins an unmanaged walk of the stack starting at its thread’s current register context. </li> <li>Global Profiler Manager cycles through all ICorProfilerCallback implementations, having each one call into its CLR’s ICorProfilerInfo::GetFunctionFromIP with the IP of that stack frame. 
<ul> <li>Note: I mentioned above that you should never pass a profiler ID (e.g., FunctionID, ClassID) to the wrong CLR’s ICorProfilerInfo, as that will easily cause an AV.  However, it is always safe to pass any IP address to GetFunctionFromIP(). </li> </ul> </li> <li>If ICorProfilerInfo::GetFunctionFromIP succeeds with a FunctionID, that means this frame is managed, and you’ve found the CLR that manages it.  You may call DoStackSnapshot from this CLR’s ICorProfilerInfo2 to perform a complete stack walk starting at this frame.  <strong>See below for details.</strong> </li> <li>If ICorProfilerInfo::GetFunctionFromIP fails, continue cycling through the other CLRs’ ICorProfilerInfo::GetFunctionFromIP until you find one that works. </li> <li>If none of the GetFunctionFromIP calls succeed, then this really is a native frame.  Your Global Profiler Manager will need to use whatever native stack walking techniques you have to identify the frame, and then walk past it to the calling frame, and go back to step 2. </li> </ol> <p>Step 3 above occurs when your Global Profiler Manager finds a frame managed by a particular CLR, and has the corresponding profiler instance call DoStackSnapshot (on the corresponding CLR), seeded with that frame, to perform a walk from that point.  Your Global Profiler Manager will effectively repeat the above algorithm recursively, inside native blocks reported by that DoStackSnapshot.  Here are the details:</p> <ul> <li>You now have a view of the stack with information from all frames managed by this CLR. </li> <li>All frames <em>not </em>managed by this CLR appear as blocks of native frames.  Some of these frames really are native, and some are managed by a different CLR. </li> <li>For each block of native frames, repeat the algorithm above (i.e., calling GetFunctionFromIP / DoStackSnapshot from each CLR to find the CLR that manages it (if any), or to walk to the next frame and retry otherwise). 
</li> <li>Stitch together the frames from all CLRs, using SP as your guide on where each frame should be sorted. </li> </ul> <p>By the time you’re done, any frame for which you were unable to find a CLR that manages it really is native.  The rest of the frames are managed, and you should now have information from the appropriate CLR to identify them.</p> <p>A pick-all, managed-only profiler has a simpler job:</p> <ol> <li>Global Profiler Manager cycles through all ICorProfilerCallback implementations, having each one call into its CLR’s DoStackSnapshot to perform an unseeded walk. </li> <li>Global Profiler Manager stitches together all managed frames found by the above walks using SP as its guide on where each frame should be sorted. </li> </ol> <h1>Side-by-side and Profiler Backward Compatibility</h1> <p>Now that I’ve covered how a V4 profiler can properly support in-process side-by-side CLR instances, you may be wondering what happens if an older V2 profiler encounters multiple CLRs.  As I covered in a previous <a href="">post</a>, V2 profilers will not even be loaded by V4 CLR by default—but users may set the “COMPLUS_ProfAPI_ProfilerCompatibilitySetting” environment variable to allow a V2 profiler to be loaded by a V4 CLR.  What happens then?</p> <p>The V4 CLR attempts some low-cost heroics to try to shield the V2 profiler from pain caused by in-process side-by-side CLR instances.  However, it’s far from perfect.  As I mentioned from that previous <a href="">post</a>, COMPLUS_ProfAPI_ProfilerCompatibilitySetting may be set to one of the following three values: EnableV2Profiler, DisableV2Profiler (default), and PreventLoad.  In fact, I mentioned that “PreventLoad” would be explained in more detail in a future post.  Well, this is that post.  And it’s only a year later.  
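<p>Jumping back to DoStackSnapshot for a moment: the final SP-based stitching step from the stack-walking section above can be sketched as below.  The types are mocked for illustration; a real profiler would populate the per-CLR frame lists from each CLR’s DoStackSnapshot, plus its own native walker.</p>

```cpp
#include <algorithm>
#include <cstdint>
#include <string>
#include <vector>

// Mocked sketch of the stitching step: each CLR's DoStackSnapshot yields
// the frames *it* manages, and the global profiler manager merges all the
// lists into one stack, ordered by stack pointer.
struct Frame {
    std::uintptr_t sp;  // stack pointer of the frame
    std::string name;   // function name resolved by the owning CLR (or native walker)
};

// One walk result per CLR (plus, optionally, a native-frame list).
std::vector<Frame> StitchBySP(const std::vector<std::vector<Frame>>& walks) {
    std::vector<Frame> merged;
    for (const auto& w : walks)
        merged.insert(merged.end(), w.begin(), w.end());
    // The stack grows downward on x86/x64, so ascending SP puts the
    // leaf-most (most recent) frame first.
    std::sort(merged.begin(), merged.end(),
              [](const Frame& a, const Frame& b) { return a.sp < b.sp; });
    return merged;
}
```
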
Yowza, time flies.</p> <p>The quick summary of the “low cost heroics” is that, if V4 CLR detects that V2 CLR has already been loaded, then V4 CLR will protectively refuse to load the V2 profiler (since it was already loaded by the V2 CLR).  The caveat with this plan is that it doesn’t work so well when the CLRs are loaded in the other order.  If V4 CLR loads first, it has no idea if a V2 CLR will ever load.  So it optimistically loads the V2 profiler and hopes for the best.  If a V2 CLR does load later, then the V2 profiler will likely fail in some horrible AV’ish kind of way.</p> <p>It’s also worth noting how the V4 CLR decides whether a profiler is a V2 or V4 profiler in the first place.  The V4 CLR will QI for ICorProfilerCallback3.  If it works, it’s a V4 profiler; else it’s a V2 (or even older) profiler.  This means the V4 CLR actually has to LoadLibrary your DLL, use your class factory to create an instance of your ICorProfilerCallback implementation, and then QI for ICorProfilerCallback3.</p> <p>The following table details the behavior of whether and how a V2 profiler gets loaded depending on the setting of COMPLUS_ProfAPI_ProfilerCompatibilitySetting, and the order in which the CLRs get loaded.</p> <table border="1" cellspacing="0" cellpadding="2" width="491"><tbody> <tr style="background-color: #0080ff"> <td valign="top" width="169"><font color="#ffffff">ProfilerCompatibilitySetting</font></td> <td valign="top" width="71"><font color="#ffffff">CLR Load Order</font></td> <td valign="top" width="249"><font color="#ffffff">Result</font></td> </tr> <tr> <td valign="top" width="169">EnableV2Profiler</td> <td valign="top" width="71">V2, V4 </td> <td valign="top" width="249">V2 loads profiler, V4 does not load profiler. 
</td> </tr> <tr> <td valign="top" width="169">EnableV2Profiler</td> <td valign="top" width="71">V4, V2 </td> <td valign="top" width="249">V4 loads profiler, V2 loads profiler <font color="#ff0000">(profiler will likely AV due to active use of multiple callback instances)</font> </td> </tr> <tr> <td valign="top" width="169">DisableV2Profiler (default) </td> <td valign="top" width="71">V2, V4 </td> <td valign="top" width="249">V2 loads profiler, V4 queries then releases the V2 profiler interface but never unloads the profiler DLL <font color="#ff0000">(profiler may possibly AV on V4 instantiation)</font> </td> </tr> <tr> <td valign="top" width="169">DisableV2Profiler (default) </td> <td valign="top" width="71">V4, V2 </td> <td valign="top" width="249">V4 queries then releases the profiler interface but never unloads the profiler DLL, V2 loads the profiler.</td> </tr> <tr> <td valign="top" width="169">PreventLoad</td> <td valign="top" width="71">V2, V4 </td> <td valign="top" width="249">V2 loads profiler, V4 does not load profiler. </td> </tr> <tr> <td valign="top" width="169">PreventLoad</td> <td valign="top" width="71">V4, V2 </td> <td valign="top" width="249">V4 does not load profiler, V2 loads profiler. </td> </tr> </tbody></table> <p></p> <p>We’re now in a better position to see the point of setting COMPLUS_ProfAPI_ProfilerCompatibilitySetting=<strong>PreventLoad</strong>.  If a user is encountering a scenario with in-process side-by-side CLR instances, particularly where V4 CLR loads first, then a V2 profiler is likely to AV.  PreventLoad tells V4 CLR not to load any profilers whatsoever, regardless of whatever version they happen to be.  Of course, V2 CLR totally ignores COMPLUS_ProfAPI_ProfilerCompatibilitySetting (since that environment variable appeared after CLR V2 shipped!), so V2 CLR will happily load the V2 profiler.  Thus, PreventLoad allows the user to use a V2 profiler to profile the V2 CLR, without allowing a V4 CLR to spoil the fun. 
</p> <h1>Free Test Harness</h1> <p>In-process side-by-side scenarios may be hard to test, so we have a harness you can use that will force multiple CLRs to get loaded:</p> <p>Download <a href="">RunSxS</a>.</p> <p>RunSxS has many options to customize its behavior, though you’ll probably want to start with something simple.  Here’s an example sequence of steps you can try out:</p> <ol> <li>Open an (elevated, if necessary) command prompt of the appropriate bitness. </li> <li>Register your profiler, if necessary. </li> <li>Set the usual environment variables, including COR_ENABLE_PROFILING, COR_PROFILER, and optionally COR_PROFILER_PATH. </li> <li>You do not need to set COMPLUS_ProfAPI_ProfilerCompatibilitySetting unless you’re trying to test out your old V2 profiler. </li> <li>Have some sample V2 CLR and V4 CLR applications handy (we’ll call them AppV2.exe and AppV4.exe). </li> <li>Execute some of the RunSxS command-lines below. </li> </ol> <p>Here are some RunSxS command-lines to try out.  This one deterministically loads the V2 CLR (and thus your V2 profiler), and then the V4 CLR (and thus your V4 profiler):</p> <ul> <li>RunSxS /st v2.0.50727 c:\Path\To\AppV2.exe "appv2arg1 appv2arg2" v4.0.30319 c:\Path\To\AppV4.exe "appv4arg1 appv4arg2" </li> </ul> <p>Now, reverse the order:</p> <ul> <li>RunSxS /st v4.0.30319 c:\Path\To\AppV4.exe "appv4arg1 appv4arg2" v2.0.50727 c:\Path\To\AppV2.exe "appv2arg1 appv2arg2" </li> </ul> <p>This one will simultaneously launch a V2 &amp; V4 app on separate threads, so you can see how your profiler fares with multi-threaded loading.  Try to catch some nondeterministic bugs:</p> <ul> <li>RunSxS v2.0.50727 c:\Path\To\AppV2.exe "appv2arg1 appv2arg2" v4.0.30319 c:\Path\To\AppV4.exe "appv4arg1 appv4arg2" </li> </ul> <h1>Is it over yet?!</h1> <p>Yes.  Yes it is.  To recap:</p> <ul> <li>In order to say your profiler works with V4 CLR, your profiler must be side-by-side-aware, which means it must support pick-first/one or pick-many/all. 
</li> <li>The latter is harder to implement, but provides your user with the most information when multiple CLRs are loaded. <ul> <li>Consider factoring your code so you have a single, global profiler manager that is distinct from your (multiple) callback implementation instances. </li> <li>Enter/Leave/Tailcall, instrumentation, and stack walking have special considerations. </li> </ul> </li> <li>Older, V2 profilers will probably have issues if multiple CLRs are loaded, though the V4 CLR half-heartedly tries to protect those older profilers. </li> <li>Test, test, test! </li> </ul> <p>Thanks to Shane Yuan for much of the content and illustrations in this post!</p><div style="clear:both;"></div>David Broman V4: Profiler Detach<p>I described how profilers may attach to already-running processes in some previous posts (<a href="">#1</a> and <a href="">#2</a>).  In this post I’m writing about how profilers that are already loaded may detach from a running process before that process exits.  Like Profiler Attach, this is a new feature available starting with CLR V4.</p> <h2>Limitations</h2> <p>Not every V4 profiler is allowed to detach from a running process.  The general rule is that a profiler which has caused an irreversible impact on the process it’s profiling should <em>not </em>attempt to detach.  The CLR catches the following cases:</p> <ul> <li>Profiler set immutable flags (COR_PRF_MONITOR_IMMUTABLE) via SetEventMask. 
</li> <li>Profiler performed IL rewriting via SetILFunctionBody </li> <li>Profiler used the Enter/Leave/Tailcall methods to add callouts to its probes </li> </ul> <p>If the profiler attempts to detach after doing any of the above, the CLR will disallow the attempt (see below for details).</p> <h2>How Detaching Works</h2> <p>So, the sequence works like this:</p> <ol> <li>The profiler <strong>deactivates all the ways control could enter the profiler</strong> (aside from the CLR Profiling API itself).  This means removing any Windows callbacks or timer interrupts, ceasing any hijacking, disabling any other components that may try to call into the profiler DLL, etc.  The profiler must also wait for all threads that it has created (e.g., a sampling thread, inter-process communication threads, a ForceGC thread, etc.) to exit, except for the one thread the profiler will use to call RequestProfilerDetach().  Any threads created by the CLR, of course, should not be tampered with. <ul> <li>Your profiler must block here until all those ways control can enter your profiler DLL have truly been deactivated (e.g., just setting a flag to disable sampling may not be enough if your sampling thread is currently in the middle of taking a sample).  You must coordinate with all components of your profiler so that your profiler DLL knows that everything is verifiably deactivated, and all profiler-created threads have exited (except for the one thread the profiler will use to call RequestProfilerDetach()). </li> </ul> </li> <li>If the profiler will use a thread of its own creation to call RequestProfilerDetach() (which is the typical way this API will be called), that thread must own a reference on the profiler’s DLL, via its own <strong>LoadLibrary()</strong> call that it makes on the profiler DLL.  This can either be done when the thread starts up, or now, or sometime in between.  But that reference must be added at some point before calling RequestProfilerDetach(). 
</li> <li>Profiler calls ICorProfilerInfo3::<strong>RequestProfilerDetach</strong>(). <ul> <li>(A) This causes the CLR to (synchronously) set internal state to avoid making any further calls into the profiler via the ICorProfilerCallback* interfaces, and to refuse any calls from the profiler into ICorProfilerInfo* interfaces (such calls will now fail early with CORPROF_E_PROFILER_DETACHING). </li> <li>(B) The CLR also (asynchronously) begins a periodic safety check on another thread to determine when all pre-existing calls into the profiler via the ICorProfilerCallback* interfaces have returned. </li> <li>Note: It is expected that your profiler will not make any more “unsolicited” calls back into the CLR via any interfaces (ICorProfilerInfo*, hosting, metahost, metadata, etc.).  By “unsolicited”, I’m referring to calls that didn’t originate from the CLR via ICorProfilerCallback*.  In other words, it’s ok for the profiler to continue to do its usual stuff in its implementation of ICorProfilerCallback methods (which may include calling into the CLR via ICorProfilerInfo*), as the CLR will wait for those outer ICorProfilerCallback methods to return as per 3B.  But the profiler must not make any other calls into the CLR (i.e., that are not sandwiched inside an ICorProfilerCallback call).  You should already have deactivated any component of your profiler that would make such unsolicited calls in step 1. </li> </ul> </li> <li>Assuming the above RequestProfilerDetach call was made on a profiler-created thread, that thread must now call <a href=""><strong>FreeLibraryAndExitThread</strong></a><strong>()</strong>.  (Note: that’s a specialized Windows API that combines FreeLibrary() and ExitThread() in such a way that races can be avoided—do not call FreeLibrary() and ExitThread() separately.) </li> <li>On another thread, the CLR continues its <strong>periodic safety checks</strong> from 3B above.  
Eventually the CLR determines that there are no more ICorProfilerCallback* interface calls currently executing, and it is therefore safe to unload the profiler. </li> <li>The CLR calls ICorProfilerCallback3::<strong>ProfilerDetachSucceeded</strong>.  The profiler can use this signal to know that it’s about to be unloaded.  It’s expected that the profiler will do very little in this callback—probably just notifying the user that the profiler is about to be unloaded.  Any cleanup the profiler needs to do should already have been done during step 1. </li> <li>CLR makes the necessary number of <strong>Release</strong>() calls on ICorProfilerCallback3.  The reference count should go down to 0 at this point, and the profiler may deallocate any memory it had previously allocated to support its callback implementation. </li> <li>CLR calls <strong>FreeLibrary</strong>() on the profiler DLL.  This should be the last reference to the profiler’s DLL, and your DLL will now be unloaded. <ul> <li>Note: in some cases, it’s theoretically possible that step 4 doesn’t happen until <em>after</em> this step, in which case the last reference to the profiler’s DLL will actually be released by your profiler’s thread that called RequestProfilerDetach and then FreeLibraryAndExitThread.  That’s because steps 1-4 happen on your profiler’s thread, and steps 5-8 happen on a dedicated CLR thread (for detaching profilers) sometime after step 3 is completed.  So there’s a race between step 4 and all of steps 5-8.  There’s no harm in this, so long as you’re playing nice by doing your own LoadLibrary and FreeLibraryAndExitThread as described above. </li> </ul> </li> <li>The CLR adds an Informational entry to the Application Event Log noting that the profiler has been unloaded.  The CLR is now ready to service any profiler attach requests. 
</li> </ol> <h2>RequestProfilerDetach</h2> <p>Let’s dive a little deeper into the method you call to detach your profiler:</p> <p>HRESULT RequestProfilerDetach([<span style="color: rgb(0,0,255)">in</span>] DWORD dwExpectedCompletionMilliseconds);</p> <p>For example, a profiler’s GUI might ask its in-process profiler DLL to:</p> <ul> <li>Do a GC now and show me the heap </li> <li>Dial up or down the sampling frequency </li> <li>Change which instrumented methods should log their invocations </li> <li>Start / stop monitoring exceptions </li> <li>etc. </li> </ul> <p>Until the profiler can be unloaded, it will be considered “loaded” (though deactivated in the sense that no new callback methods will be called).  This prevents any new profiler from attaching.</p> <p><em>Before</em> your profiler calls RequestProfilerDetach:</p> <ul> <li>You must take care to deactivate all other ways control can enter your profiler DLL </li> <li>Your profiler must block until all those other ways control can enter your profiler DLL have verifiably been deactivated </li> </ul>David Broman and Your Profiler <h2>Terminology</h2> <p>Let's say a C# developer writes code like this: </p> <pre class="code"><span style="color: rgb(0,0,255)">class</span> <span style="color: rgb(43,145,175)">MyClass</span><S> { <span style="color: rgb(0,0,255)">static</span> <span style="color: rgb(0,0,255)">string</span> Foo<T>(S instanceOfS, T instanceOfT) { <span style="color: rgb(0,0,255)">return</span> instanceOfS.ToString() + instanceOfT.ToString(); } }</pre> <p>Here we have a generic function, MyClass<S>.Foo<T>.  
Let's say the developer instantiated MyClass & Foo by making the following function call:</p> <pre class="code">MyClass<<span style="color: rgb(0,0,255)">int</span>>.Foo<<span style="color: rgb(0,0,255)">float</span>>(4, 8.8f);</pre> <p>It's important to distinguish between <strong>function </strong>arguments and <strong>type </strong>arguments.  The function arguments are the dudes inside the parentheses—4 and 8.8f in the example above.  Type arguments are the things you find inside the angle brackets <>.  Foo is given one type argument, <span style="color: rgb(0,0,255)">float</span>.  Foo belongs to class MyClass, which itself is given the type argument, <span style="color: rgb(0,0,255)">int</span>.</p> <p>It’s worth spending a bit of time thinking about this.  When one sees the term “type arguments”, one might mistake that for “argument types”, or “types of the function arguments”, which in the above case would be <span style="color: rgb(0,0,255)">int </span><em>and</em> <span style="color: rgb(0,0,255)">float</span>.  But the two concepts are independent: a function’s type arguments need not have anything to do with its function arguments.  For example, this function:</p> <pre class="code"> U Alloc<U>() <span style="color: rgb(0,0,255)">where</span> U : <span style="color: rgb(0,0,255)">new</span>() { <span style="color: rgb(0,0,255)">return</span> <span style="color: rgb(0,0,255)">new</span> U(); }</pre> <p>takes no function arguments at all, but it still requires a type argument (for the “U”) in order to be instantiated.</p> <h2>GetFunctionInfo2</h2> <p>So if you were to get the FunctionID for MyClass<<span style="color: rgb(0,0,255)">int</span>>.Foo<<span style="color: rgb(0,0,255)">float</span>>, and you passed that FunctionID to GetFunctionInfo2, what should you get back in the [out] parameters?</p> <pre class="code"> HRESULT GetFunctionInfo2( [<span style="color: rgb(0,0,255)">in</span>] FunctionID funcId, [<span style="color: rgb(0,0,255)">in</span>] COR_PRF_FRAME_INFO frameInfo, [<span style="color: rgb(0,0,255)">out</span>] ClassID *pClassId, [<span style="color: rgb(0,0,255)">out</span>] ModuleID *pModuleId, [<span style="color: rgb(0,0,255)">out</span>] mdToken 
*pToken, [<span style="color: rgb(0,0,255)">in</span>] ULONG32 cTypeArgs, [<span style="color: rgb(0,0,255)">out</span>] ULONG32 *pcTypeArgs, [<span style="color: rgb(0,0,255)">out</span>] ClassID typeArgs[]);</pre> <p>*pClassId: This will be the ClassID for the instantiated MyClass<<span style="color: rgb(0,0,255)">int</span>>.  More on this later.</p> <p>*pModuleId: This is the ModuleID of the module containing the function’s metadata and IL.</p> <p>*pToken: This is the metadata token (mdMethodDef) for MyClass<S>.Foo<T>.  Note that you get the same mdMethodDef for any conceivable instantiation of a generic method.</p> <p>typeArgs[]: This is the array of <strong>type arguments</strong> to MyClass<<span style="color: rgb(0,0,255)">int</span>>.Foo<<span style="color: rgb(0,0,255)">float</span>>.  So this will be an array of only one element: the ClassID for <span style="color: rgb(0,0,255)">float</span>.  (The <span style="color: rgb(0,0,255)">int</span> in MyClass<<span style="color: rgb(0,0,255)">int</span>> is a type argument to MyClass, not to Foo, and you would only see that when you call GetClassIDInfo2 with MyClass<<span style="color: rgb(0,0,255)">int</span>>.)</p> <h2>GetClassIDInfo2</h2> <p>OK, someone in parentheses said something about calling GetClassIDInfo2, so let’s do that.  
Since we got the ClassID for MyClass<<span style="color: rgb(0,0,255)">int</span>> above, let’s pass it to GetClassIDInfo2 to see what we get:</p> <pre class="code"> HRESULT GetClassIDInfo2( [<span style="color: rgb(0,0,255)">in</span>] ClassID classId, [<span style="color: rgb(0,0,255)">out</span>] ModuleID *pModuleId, [<span style="color: rgb(0,0,255)">out</span>] mdTypeDef *pTypeDefToken, [<span style="color: rgb(0,0,255)">out</span>] ClassID *pParentClassId, [<span style="color: rgb(0,0,255)">in</span>] ULONG32 cNumTypeArgs, [<span style="color: rgb(0,0,255)">out</span>] ULONG32 *pcNumTypeArgs, [<span style="color: rgb(0,0,255)">out</span>] ClassID typeArgs[]);</pre> <p>*pModuleId: This is the ModuleID of the module containing MyClass<S>’s metadata.</p> <p>*pTypeDefToken: This is the metadata token (mdTypeDef) for MyClass<S>.  As with the mdMethodDef in the previous section, you’ll get the same mdTypeDef for any conceivable instantiation of MyClass<S>.</p> <p>*pParentClassId: This is the ClassID for the base class of MyClass<<span style="color: rgb(0,0,255)">int</span>>.</p> <p>typeArgs: This is the array of type arguments used to instantiate classId, which in the above example is MyClass<<span style="color: rgb(0,0,255)">int</span>>.  So in this example, typeArgs will be an array of only one element: the ClassID for <span style="color: rgb(0,0,255)">int</span>.</p> <h2>COR_PRF_FRAME_INFO</h2> <p>If you want to read more about COR_PRF_FRAME_INFO, <a href="">here’s</a> a place to go.</p> <p>There are two ways your profiler can get its hands on a COR_PRF_FRAME_INFO:</p> <ol> <li><span style="background-color: #00ff00">Via slow-path Enter/Leave/Tailcall probes</span> </li> <li><span style="background-color: #ff0000">Via your DoStackSnapshot callback</span> </li> </ol> <p>Only the first (green) yields a COR_PRF_FRAME_INFO you can rely on; the one your DoStackSnapshot callback receives (red) is not dependable and should not be passed to GetFunctionInfo2.</p> <p>With a valid COR_PRF_FRAME_INFO, GetFunctionInfo2 will give you helpful, specific ClassIDs in the typeArgs [out] array and pClassId [out] parameter.  If the profiler passes NULL for COR_PRF_FRAME_INFO, here’s what you can expect:</p> <ul> <li>If you’re using CLR V2, pClassId will point to NULL if the function sits on <em>any</em> generic class (shared or not).  
In CLR V4 this got a little better, and you’ll generally only see pClassId point to NULL if the function sits on a “shared” generic class (instantiated with reference types).  </li> <li>The typeArgs [out] array will contain the ClassID for <strong>System.__Canon</strong>, rather than the actual instantiating type(s), if the function itself is generic and is instantiated with reference type argument(s). </li> </ul> <p>It’s worth noting here that there is a bug in GetFunctionInfo2, in that the [out] pClassId you get for the class containing the function can be wrong with generic virtual functions.  Take a look at <a href="">this forum post</a> for more information and a workaround.</p> <h2>ClassIDs & FunctionIDs vs. Metadata Tokens</h2> <p>As we saw above, metadata tokens (mdTypeDef, mdMethodDef) identify the generic <em>definition</em> of a class or function, while ClassIDs and FunctionIDs identify particular <em>instantiations</em>.</p> <p>For example, if we have code that uses MyClass<<span style="color: rgb(0,0,255)">int</span>>.Foo<<span style="color: rgb(0,0,255)">float</span>> and MyClass<<span style="color: rgb(0,0,255)">int</span>>.Foo<<span style="color: rgb(0,0,255)">long</span>>, you will see two separate JIT notifications, with two separate FunctionIDs, even though both instantiations share a single mdMethodDef.</p> <p>CLR’s generics sharing optimization complicates this somewhat.  You’ll really only see separate JIT notifications and separate FunctionIDs for different <em>unshared </em>instantiations, and not necessarily for every different instantiation.  So if instead we have code that uses MyClass<<span style="color: rgb(0,0,255)">object</span>>.Foo<<span style="color: rgb(0,0,255)">string</span>> and MyClass<<span style="color: rgb(0,0,255)">SomeClassICreated</span>>.Foo<<span style="color: rgb(0,0,255)">AnotherClassICreated</span>>, the CLR may share a single JITted copy of the code, and your profiler may see only one JIT notification and one FunctionID for both.  Just be aware that this <em>can</em> happen, so your profiler can deal with it appropriately.</p> <p>So that covers JIT notifications—what about ClassLoad* notifications in the same example?  Although the CLR shares <em>JITted code</em> across reference-type instantiations, the CLR still maintains separate loaded <em>types</em> for each generic instantiation of a generic class.  
So in the example from the paragraph above you will see separate ClassLoad* notifications with different ClassIDs for MyClass<<span style="color: rgb(0,0,255)">object</span>> and MyClass<<span style="color: rgb(0,0,255)">SomeClassICreated</span>>.  In fact, you will also see a separate ClassLoad* notification (with yet another ClassID) for MyClass<<span style="color: rgb(0,0,255)">System.__Canon</span>>.</p> <p>Under the covers, though, much of the type data (e.g., the EEClass) is the <em>same </em>for all 3 types.  (Remember from this <a href="">post</a> how ClassIDs relate to the CLR’s internal type structures.)  If you instead get the ClassID for an instantiation that uses a value type argument (such as MyClass<<span style="color: rgb(0,0,255)">int</span>>), and then run !dumpmt on that ClassID, you’ll see an entirely different EEClass value in the output, as the CLR will not be sharing that subset of type data across generic instantiations that use type arguments that are value types.</p> <h2>Instrumenting Generic Functions</h2> <p>If your profiler performs IL rewriting, it’s important to understand that it must NOT do instantiation-specific IL rewriting.  Huh?  Let’s take an example.  Suppose you’re profiling code that uses MyClass<<span style="color: rgb(0,0,255)">int</span>>.Foo<<span style="color: rgb(0,0,255)">float</span>> and MyClass<<span style="color: rgb(0,0,255)">int</span>>.Foo<<span style="color: rgb(0,0,255)">long</span>>.  These are separate, unshared instantiations (one with <span style="color: rgb(0,0,255)">float</span>, and the other with <span style="color: rgb(0,0,255)">long</span>), so the CLR may ask your profiler to rewrite the same function’s IL more than once.  In fact, there are several reasons your profiler may be asked to rewrite the same function multiple times:</p> <ul> <li>Two threads simultaneously trying to call the same function for the first time (and thus both trying to JIT that function) </li> <li>Strange dependency chains involving class constructors (more on this in the MSDN <a href="">reference topic</a>) </li> <li>Multiple AppDomains using the same (non-domain-neutral) function </li> <li>And of course multiple generic instantiations! </li> </ul> <p>Regardless of the reason, the profiler must always rewrite with exactly the same IL.  Otherwise, an invariant in the CLR will have been broken by the profiler, and you will get strange, undefined behavior as a result.  
And no one wants that.</p> <p> </p> <p>That’s it!  Hopefully this gives you a good idea of how the CLR Profiling API will behave in the face of generic classes and functions, and what is expected of your profiler.</p>David Broman a Profiler for Silverlight 4<p>The Silverlight 4 beta was released a while ago (see <a href="">this</a>), and one of the new features in Silverlight 4 is the ability to use the very same profiling API that is available for regular CLR-based apps (referred to as “desktop” CLR apps).  In this post I’ll talk about how to create a profiler that uses the CLR profiling API on Silverlight 4.  I assume you are already familiar with creating profilers for the desktop CLR.</p> <h2>Getting Started</h2> <h3>Requirements</h3> <p>Install the Silverlight 4 beta!  (See link above.)  It is sufficient to simply install the Windows Silverlight Developer runtime.  You don’t need the full Silverlight 4 Beta Tools for Visual Studio 2010 to write and test your profiler.</p> <p>Note that, in order to have full functionality, you and your users will need the Windows Silverlight <strong>Developer</strong> runtime, and not just the typical Windows Silverlight runtime.  The reason is that some of the profiling API methods require the debugging infrastructure to be completely initialized and available, and that is ensured by installing the Windows Silverlight Developer runtime.  For example, if you attempt to call SetILInstrumentedCodeMap() or GetILToNativeMapping() without a fully initialized debugging infrastructure, then they will fail with CORPROF_E_DEBUGGING_DISABLED.  Your main concern, though, should be that our testing of the profiling API on Silverlight is done with a Developer runtime only.  So we can’t comment on how well or poorly profiling will go without the Developer runtime.  
Also, the number of profiling API methods that will fail on Silverlight without the Developer runtime may change at any time without notice (or without us even realizing). </p> <p>It is expected that, in most scenarios, it should not be too much of a burden to require the Silverlight Developer runtime, as your users will typically be developers of Silverlight applications (who already need the Developer runtime anyway).  However, it is true that some profiling API-based tools are not really targeted at developers (e.g., some tools may monitor the flow of multi-tiered applications throughout an enterprise, including the end user client Silverlight tier).  So for those of you in that situation, you will need to take care to ensure your users have installed the Developer runtime.  A member of the Silverlight team tells me a good way to do this is to check for the existence of this registry key/value:</p> <p>HKEY_LOCAL_MACHINE\software\(Wow6432Node on 64-bit boxes)\microsoft\silverlight\Components\Debugging <br />    Version    REG_SZ    4.0.50113.0 </p> <p>You should also ensure that the above Version string matches the Version value of the runtime which is stored in the grandparent key:</p> <p>HKEY_LOCAL_MACHINE\software\(Wow6432Node on 64-bit boxes)\microsoft\silverlight <br />    Version    REG_SZ    4.0.50113.0 </p> <p>If a user had the Developer runtime installed and “upgraded” to a newer end-user runtime the registry will still report the Debugging components but the version will be the old version, which for our purposes is the same as not having a Developer runtime installed at all.  
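</p>
<p>The registry reads themselves are ordinary Windows registry calls (e.g., RegGetValue against the two keys above); the interesting part is just comparing the two Version strings.  Here’s a small sketch of that check (the helper name is invented for illustration):</p>

```cpp
#include <string>

// Sketch: decide whether a usable Silverlight Developer runtime is present.
//   debuggingVersion - the Version value under ...\silverlight\Components\Debugging
//                      (pass an empty string if the key/value was absent)
//   runtimeVersion   - the Version value under ...\silverlight
// A missing Debugging key means no Developer runtime at all; a version mismatch
// means the user upgraded to a newer end-user runtime without reinstalling the
// Developer runtime, which for our purposes is the same as not having one.
bool HasUsableDeveloperRuntime(const std::string& debuggingVersion,
                               const std::string& runtimeVersion) {
    return !debuggingVersion.empty() && debuggingVersion == runtimeVersion;
}
```

<p>If this returns false, prompt the user to install the latest Developer runtime.</p>
<p>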
Doing the comparison above ensures you catch that case, and can tell your user to install the latest Developer runtime.</p> <h3>Coding and Activation</h3> <p>A philosophy we have with enabling the profiling API on Silverlight is that it should be easy to reuse code and binaries from desktop CLR profilers, but we still wanted to ensure that a desktop profiler does not accidentally get activated in a Silverlight app.  To that end, we’ve kept the interfaces and their IIDs the same, but we’ve changed the environment variables.</p> <p>So, you will not need to create another copy of your callback interface implementation, or keep separate IIDs around for the Info interfaces you query for.  Your very same CLR V4-desktop-based code will now target Silverlight 4 apps as well.</p> <p>In order for your profiler to be activated against Silverlight, you will need to use the following environment variables:</p> <table cellspacing="0" cellpadding="2" width="794" border="1"><tbody> <tr> <td valign="top" width="205">CORECLR_ENABLE_PROFILING</td> <td valign="top" width="587">same meaning as COR_ENABLE_PROFILING has on desktop</td> </tr> <tr> <td valign="top" width="210">CORECLR_PROFILER</td> <td valign="top" width="587">same meaning as COR_PROFILER has on desktop</td> </tr> <tr> <td valign="top" width="214">CORECLR_PROFILER_PATH</td> <td valign="top" width="587">same meaning as COR_PROFILER_PATH has on desktop <br />(this is optional, if you want to use <a href="">registry-free activation</a>)</td> </tr> </tbody></table> <p>By keeping the environment variables different between Silverlight and desktop, we not only prevent desktop profilers from being accidentally activated in Silverlight apps, but we also prevent a profiler from accidentally getting instantiated twice in some processes.  Imagine: Someone writes a desktop CLR app that renders web pages, and inside one of those web pages is a Silverlight control that hosts Silverlight apps.  
Or perhaps Internet Explorer is rendering a Silverlight page in one tab and a page with a CLR Click-Once app in another tab.  If a single set of environment variables controlled both desktop and Silverlight apps, then a profiler would get instantiated twice.  This is not necessarily bad—unless the profiler was not prepared for such an activation.  If you carefully code your profiler to avoid most global state (as would be necessary for enabling profiling of multiple in-process side-by-side CLR instances), then your profiler might well be fully capable of supporting two instances in the same process—one working with the Silverlight runtime, and the other working with a desktop CLR.  In such a case, feel free to set all environment variables (both the CORECLR_* and COR_* flavors) to enable simultaneous profiling of desktop and Silverlight runtimes, if that’s something you’d like to support.</p> <h2>Behavioral Differences Between Silverlight and Desktop Profiling</h2> <p>Although your very same code may be used to target both Silverlight and Desktop, you will likely need to have some conditional logic to deal with some behavioral differences between the two platforms.  (You may use <a href="">ICorProfilerInfo3::GetRuntimeInformation</a> to determine whether a given runtime is the desktop CLR (COR_PRF_DESKTOP_CLR) or Silverlight CLR (COR_PRF_CORE_CLR).)</p> <p><strong>No attach / detach on Silverlight</strong>.  Although the ability to attach to and detach from running processes is a new feature enabled on CLR V4, this feature is not available on Silverlight 4.</p> <p><strong>Don’t rely on receiving the Shutdown() callback on Silverlight</strong>.  Really, you can’t rely on Shutdown() on the desktop CLR either, as per the MSDN <a href="">topic</a>, though this callback looks to be even more unreliable on Silverlight.  
Your profiler receives the Shutdown() callback depending on how the CLR is terminated, and it’s looking like Silverlight terminates the CLR in the “abrupt” fashion where most shutdown logic is skipped (including the call to the Shutdown() profiler callback).  So it will be best if you use your DllMain as a backup for ensuring your cleanup code gets run. </p> <p><strong>No event logging.</strong>  Silverlight generally does not use the Windows event log for anything.  When developing (or using) a profiler on the desktop CLR, the event log is useful for detecting and diagnosing problems with loading the profiler.  On Silverlight, these messages are routed to your debugger if you’re debugging the process hosting Silverlight, and the messages are not sent anywhere if you’re not debugging.  That means that, if you’re having issues diagnosing activation problems with your profiler, run the Silverlight process under a debugger like VS or windbg and look in the output window for messages that indicate whether Silverlight attempted to load a profiler, whether the load succeeded, and if not, why.</p> <h3>IL Rewriting</h3> <p>Doing run-time IL rewriting (or “instrumentation”) on Silverlight has enough differences from desktop that it warrants its own section.  As a review, the APIs involved in IL Rewriting are described <a href="">here</a>.  You will generally use these same APIs on Silverlight, but will encounter differences when your rewritten IL tries to do stuff, like call into your own managed helper assembly.</p> <p><strong>Security</strong></p> <p>First, realize that user Silverlight assemblies generally run under partial trust, with fairly restricted permissions.  (Full trust is reserved for the Microsoft-provided platform code, such as mscorlib.dll.)  So when you instrument partial trust code, keep in mind your rewritten IL will be under those same restrictions.  
It is therefore best to avoid doing any security-sensitive operations, or if necessary, moving those operations to new Critical methods you dynamically add to mscorlib, with SafeCritical bridge code in the middle.  Read the security section from this <a href="">blog post</a> for more information. </p> <p><strong>Shipping Managed Helper Code</strong></p> <p>Many profilers rewrite user IL to call into managed “helper code” shipped by the profiler vendor.  This helper code is usually a centralized place to perform whatever logging is necessary to record that certain events have occurred (e.g., a call was made, a local variable was modified, etc.).  On Silverlight, you have a couple options on how to ship this helper code.</p> <p><u>Option 1: Pump helper code into mscorlib</u></p> <p>As you may recall, when profiling desktop CLR apps, it is illegal to add a reference from mscorlib to any other assembly.  Therefore, if your profiler instruments mscorlib methods, then that rewritten IL is forbidden to directly call into a separate helper assembly.  One workaround for this is for all helper code to be added into mscorlib at runtime via IMetaDataEmit.  This workaround is valid on Silverlight as well, and is therefore a perfectly valid option for how to ship your helper code—just pump it into mscorlib at run-time.  You will need to do this sort of thing anyway if any of your helper code needs to run at full-trust (e.g., if it needs to P/Invoke).</p> <p><u>Option 2: Ship separate helper managed module that you inject into the XAP.</u></p> <p>If your profiler does not need to instrument mscorlib, and can get its work done using partial-trust code (which is preferable), then option 1 is still a reasonable solution.  
But your profiler may also do the following, which you may decide is easier to manage:</p> <ol> <li>Develop, compile, and ship a helper managed module containing helper methods that the instrumented code calls into (just like on desktop) </li> <li>Get this helper managed module into the XAP somehow (see below) </li> <li>Instrument user code to call into your helper managed module, the usual way (using IMetaDataEmit to generate an AssemblyRef and any necessary TypeRefs, MemberRefs, etc., from the user’s module to your helper managed module). </li> </ol> <p>For achieving step 2, you have a couple options: </p> <p>(2a) (preferred): If at all possible, you may wish to integrate your profiler with Visual Studio so that the build process itself can ensure the profiler's helper managed module finds its way into the XAP.  (More on this below.)</p> <p>(2b) (fallback): There will likely be scenarios where profilers will not be able to participate in the build process.  In these cases, the profiler's helper managed module must find its way into the XAP <em>after </em>the XAP has been built.  In order to do this, you simply treat the XAP file like the zip file it really is:</p> <p>(i) Unzip the XAP <br />(ii) Add your managed helper module to the XAP <br />(iii) Modify the XAP's manifest (this is a special file in every XAP that lists the assemblies contained in the XAP) <br />(iv) Rezip the XAP </p> <p>For (2a) (getting the VS build system to include your helper managed module in the XAP), here is what I learned from the Silverlight tools folks…</p> <p>The items that get added to the XAP at build-time are roughly the following: </p> <ol> <li>Built assembly </li> <li>AppManifest.xaml </li> <li>References marked <strong>CopyLocal</strong> and their dependencies. If you have enabled “Reduce XAP size by using application cacheing”, assemblies that support this feature will not be included in the XAP (they get their own zip file). 
</li> <li><span style="background-color: #ffff00">Any project items marked <strong>Content</strong></span> </li> <li>Satellite assemblies built by the project, or picked up by references, that match the <SupportedCultures> property. </li> </ol> <p>The easiest way to get your helper managed module into the XAP file is probably to add it as a <strong>Content</strong> file to the project.  For example: </p> <ol> <li>Create a new Silverlight project </li> <li>Edit the project file to uncomment the <strong>BeforeBuild</strong> target and add the following to it: </li> </ol> <blockquote> <pre class="code"><Target Name="BeforeBuild">
  <ItemGroup>
    <ContentWithTargetPath Include="PathToMyHelperManagedModule.Dll" />
  </ItemGroup>
</Target></pre> </blockquote> <p>The key here is that we’re adding the DLL to the <strong>ContentWithTargetPath</strong> item collection, the contents of which get added to the XAP file.  You should also be able to leverage msbuild if you like.   
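</p>
<p>Going back to the fallback path (2b): step (iii) amounts to adding an AssemblyPart entry for your helper module to the XAP’s AppManifest.xaml, which lists the XAP’s assemblies inside a Deployment.Parts element.  Here’s a sketch of that edit as a pure string transformation (zip handling and error handling omitted; the function name is invented):</p>

```cpp
#include <string>

// Sketch: add an <AssemblyPart> for the profiler's helper module to the
// AppManifest.xaml text extracted from the XAP, just before </Deployment.Parts>.
std::string AddHelperToManifest(const std::string& manifest,
                                const std::string& helperDllName) {
    const std::string closeTag = "</Deployment.Parts>";
    const std::string::size_type pos = manifest.find(closeTag);
    if (pos == std::string::npos) {
        return manifest;  // unexpected manifest shape; leave it untouched
    }
    std::string result = manifest;
    result.insert(pos, "<AssemblyPart Source=\"" + helperDllName + "\" />");
    return result;
}
```

<p>A real implementation would parse the XML rather than string-search it, and would then rezip the XAP afterwards (step iv).</p>
<p>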
</p> <h2>IE8 & child processes</h2> <p>Starting with Internet Explorer 8, tabs may be rendered via child processes, and not the iexplore.exe that was originally spawned by the user.  Some profiler products use a GUI shell to spawn the profilee (in this case the profilee would be iexplore.exe) and then communicate with the profiler DLL inside that process to allow the user to view and control information about the running application.  With IE8, the iexplore.exe that’s spawned by your shell may not be the process rendering the Silverlight control (and thus loading your profiler DLL).  So if you use such an architecture, and if your GUI shell needs to know the process ID of the process actually rendering the Silverlight application (and thus loading your profiler), this wouldn’t work out so well.  Thus, instead of directly spawning iexplore.exe, your shell should instead use <a href="">IELaunchURL</a>() to spawn IE and tell you the real ID of the process actually rendering the Silverlight app.</p> <p> </p> <p>There you go!  If you’ve ever thought about modifying your profiler to target Silverlight, now is the time to download the Silverlight 4 Beta and give it a whirl.  
Hopefully you should be able to keep most of your code intact, only customizing specific code paths that need to be different on Silverlight.</p> <p>David Broman</p> <h1>V4: Profiler Attach Part 2: Ok, now what?</h1> <p>In a previous <a href="">post</a>, I covered the basics of getting your profiler to attach to an already-running process.  The natural next question is, “Ok, now what?”</p> <h1>Catch Up</h1> <table cellspacing="0" cellpadding="0" width="797" border="0"><tbody> <tr> <td valign="top" width="795"> <p align="center"><a href=""><img title="NoBirthAnnouncement" style="border-top-width: 0px; display: inline; border-left-width: 0px; border-bottom-width: 0px; border-right-width: 0px" height="444" alt="NoBirthAnnouncement" src="" width="609" border="0" /></a> </p> </td> </tr> <tr> <td valign="top" width="795"> <p align="center"><font face="Arial" size="1">Drawing by Magdalena Hermawan</font></p> </td> </tr> </tbody></table> <p> </p> <p>There are two fundamental ways your profiler can catch up on the current state of an application:</p> <ul> <li>Lazy catch-up—as the profiler encounters new IDs, the profiler queries information about those IDs as it needs them, rather than assuming it has a full cache that’s always built up as the IDs are first created.  This is analogous to Dorothy meeting a new grown-up, and gracefully accepting the fact that that person exists. </li> <li>Enumeration—for certain kinds of IDs, the profiler can (at attach time) request a complete list of the currently active IDs and query information about them at that time.  Sort of like Dorothy first going to the Oz City Hall and looking up the birth records for everyone. </li> </ul> <p>Lazy catch-up is straightforward and works well for most kinds of IDs.</p> <p>Enumeration, on the other hand, has some caveats and is worthwhile to describe in more detail.</p> <h1>Enumeration via Enum* APIs</h1> <p>Some kinds of IDs have new enumerator methods as part of the profiling API.  In particular:</p> <ul> <li>ICorProfilerInfo3::EnumModules </li> <li>ICorProfilerInfo3::EnumJITedFunctions </li> </ul> <h2>Race #1: When to enumerate?  
ProfilerAttachComplete()</h2> <p>Bad timeline (loading; enumerating too soon):</p> <ol> <li>Profiler attaches </li> <li>Profiler calls EnumModules </li> <li>Module starts to load </li> <li>ModuleID is now enumerable </li> <li>ModuleLoadFinished event would fire here if events were enabled (but they’re not yet!) </li> <li>CLR enables events </li> </ol> <p>In the timeline above, the profiler misses the module entirely: it was not yet enumerable when the profiler called EnumModules, and the load event fired before events were enabled.  A similar hole exists on the unload side:</p> <p>Bad timeline (unloading; enumerating too soon):</p> <ol> <li>Module loads </li> <li>ModuleID is now enumerable </li> <li>Profiler attaches </li> <li>Profiler calls EnumModules (includes the ModuleID) </li> <li>Module starts to unload </li> <li>ModuleUnloadStarted event would fire here if events were enabled (but they’re not yet!) </li> <li>CLR enables events </li> </ol> <p>(Here the profiler still believes the module is loaded, because it missed the unload event.)</p> <p><strong>The best place for your profiler to call the enumeration APIs is inside its implementation of ProfilerAttachComplete.</strong>  Since events are enabled <em>just before </em>the CLR calls ProfilerAttachComplete, your profiler is assured that events are enabled by the time it calls the enumeration API (from inside ProfilerAttachComplete).  This eliminates any potential holes in catch-up information your profiler queries.</p> <h2>Race #2: Duplicates</h2> <p>It’s worth noting what we <em>didn’t</em> do.  We didn’t wait until after the event to make IDs enumerable.  If we had, that would have led to holes.  A profiler could have attached and grabbed an enumeration in the middle and never been notified about the ID.</p> <p>Bad timeline (loading):</p> <ol> <li>Module starts to load </li> <li>ModuleLoadFinished event would fire here if events were enabled (but they’re not yet—no profiler is attached!) 
</li> <li>Profiler attaches </li> <li>CLR enables events, calls ProfilerAttachComplete() </li> <li>Profiler calls EnumModules </li> <li>ModuleID is now enumerable </li> </ol> <p>And on the unload side:</p> <p>Bad timeline (unloading):</p> <ol> <li>Module loads, event would fire if profiler were attached (but it’s not), then ModuleID becomes enumerable </li> <li>Module starts to unload </li> <li>ModuleUnloadStarted event would fire here if events were enabled (but they’re not yet—no profiler is attached!) </li> <li>Profiler attaches </li> <li>CLR enables events, calls ProfilerAttachComplete() </li> <li>Profiler calls EnumModules (ModuleID is still enumerable, so profiler discovers ModuleID at this point) </li> <li>ModuleID is no longer enumerable </li> </ol> <p>To close these holes, we settled on the following rule:</p> <table cellspacing="0" cellpadding="2" width="868" border="4"><tbody> <tr> <td valign="top" width="860"><span style="background-color: #00ff00"><strong>Golden rule: An ID’s enumerability status shall change <em>before</em> the corresponding load/unload event is fired.</strong></span></td> </tr> </tbody></table> <p>In other words, an ID becomes enumerable <em>before</em> the LoadFinished (or JITCompilationFinished) event.  And an ID ceases to be enumerable <em>before</em> the UnloadStarted event.  Or you can think of it as, “The event is always last”.  This eliminates any potential holes.  So to be even more explicit, here’s the enumerability vs. event ordering:</p> <ol> <li>ID available in enumerations snapped now </li> <li>LoadFinished </li> <li>ID no longer in enumerations snapped now </li> <li>UnloadStarted </li> </ol> <p>The astute reader will notice that what we’ve done here is trade one race for another.  We’ve eliminated holes, but the cost is that the profiler must deal with duplicates.  
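</p> <p>One way for an attaching profiler to cope with those duplicates is to keep its per-module bookkeeping idempotent.  Here is a sketch of the idea in Python (the class and method names are my own, standing in for the profiler's real bookkeeping around EnumModules and the Module* callbacks):</p>

```python
class ModuleCatalog:
    """Idempotent catch-up bookkeeping for an attaching profiler.

    Because an ID becomes enumerable *before* LoadFinished fires (and stops
    being enumerable *before* UnloadStarted fires), the profiler may see the
    same ModuleID from both the enumeration and a later event, or see an
    UnloadStarted for an ID it never cataloged.  Both must be harmless.
    """
    def __init__(self):
        self.known = set()

    def on_enumerated(self, module_id):
        # One call per ID returned by EnumModules inside ProfilerAttachComplete.
        self.known.add(module_id)

    def on_module_load_finished(self, module_id):
        # May duplicate an enumerated ID; only do per-module setup once.
        if module_id in self.known:
            return False               # duplicate: already cataloged
        self.known.add(module_id)
        return True                    # first sighting: do per-module setup

    def on_module_unload_started(self, module_id):
        # The ID may never have been cataloged; discard() tolerates that.
        self.known.discard(module_id)
```

<p>Treating LoadFinished as “add if absent” and UnloadStarted as “remove if present” makes every interleaving of the timelines safe.</p> <p>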
For example:</p> <p>Good timeline (loading with duplicate):</p> <ol> <li>Module starts to load </li> <li>ModuleID is now enumerable </li> <li>Profiler attaches </li> <li>CLR enables events, calls ProfilerAttachComplete() </li> <li>Profiler calls EnumModules </li> <li>Profiler receives ModuleLoadFinished </li> </ol> <p>And on the unload side:</p> <p>Good timeline (unloading with duplicate):</p> <ol> <li>Module loads, event would have fired if profiler were attached (but it’s not), ModuleID becomes enumerable </li> <li>Module starts to unload </li> <li>ModuleID is no longer enumerable </li> <li>Profiler attaches </li> <li>CLR enables events, calls ProfilerAttachComplete() </li> <li>Profiler calls EnumModules </li> <li>Profiler receives ModuleUnloadStarted event </li> </ol> <p>Of course, depending on timing, the duplicate may not occur at all.</p> <p>Good timeline (unloading without duplicate):</p> <ol> <li>Module loads, event would fire if profiler were attached, ModuleID becomes enumerable </li> <li>Module starts to unload </li> <li>Profiler attaches </li> <li>CLR enables events, calls ProfilerAttachComplete() </li> <li>Profiler calls EnumModules (ModuleID is still present in the enumeration) </li> <li>ModuleID is no longer enumerable </li> <li>Profiler receives ModuleUnloadStarted event </li> </ol> <p>Note that an enumeration is a snapshot taken at the time the enumeration is <em>generated</em>, even though iteration over that enumeration might occur later.</p> <h1>Catching Up on the State of GC Heap</h1> <h2>GC Already in Progress</h2> <h2>Inducing Your First GC</h2> <h1>V4: Stuff That May Break Your Profiler</h1> <p>When a new major version of the CLR ships, the safest assumption has always been that profilers need to be revised for it.</p> <p>CLR V4 is a different story.  There are certainly changes in V4 that may break older profilers (that’s what this post is all about!) but we made a bet that V2 profilers were more likely to <em>succeed </em>than fail when run against V4.  So we’ve decided to allow V2 profilers to be loaded by CLR V4.  
Below are the caveats profiler writers need to be aware of when allowing their older profilers to run against V4.</p> <p>You should also read this post another way.  When you do refurbish your profiler to work against V4, you need to get the following right.</p> <p>The changes are organized into the following sections:</p> <ul> <li>The Big Ones </li> <li>Profiling Infrastructure </li> <li>Loader / DLLs </li> <li>Type System </li> <li>Security </li> <li>Exception Handling </li> </ul> <h1>The Big Ones</h1> <p>In this section are the big caveats you need to know about.  I wanted to put them up front so you will have read them by the time you inevitably fall asleep midway through the post:</p> <ul> <li>No Love By Default </li> <li>In-Process Side-by-Side CLR Instances </li> </ul> <h2>No Love by Default</h2> <p>As I mentioned in a previous post, although you <em>can</em> load an older profiler into V4, you can’t do so by default.  Because of the caveats of running an older profiler against CLR V4, the profiler user must opt in to this by setting the <strong>COMPLUS_ProfAPI_ProfilerCompatibilitySetting</strong> environment variable appropriately.  See this <a href="">post</a> for more information.</p> <p>It’s worth stressing that this environment variable does <em>not</em> turn on some kind of compatibility <em>mode</em>; the caveats described in this post apply to your older profiler regardless of the value of the <strong>COMPLUS_ProfAPI_ProfilerCompatibilitySetting</strong> environment variable.</p> <h2>In-Process Side-by-Side CLR Instances</h2> <p>CLR V4 can be loaded alongside other versions of the CLR, all living together in the same process.  
For now, that means you can have one CLR V4 instance plus one CLR V1.1 <strong>or</strong> CLR V2.0 instance in the same process.  For more information, see:</p> <p><a title="" href=""></a></p> <h1>Profiling Infrastructure</h1> <p>In this section are changes to the profiling API and infrastructure itself that may impact your profiler:</p> <ul> <li>FreeLibrary </li> <li>Enter/Leave/Tailcall </li> <li>CORPROF_E_UNSUPPORTED_CALL_SEQUENCE </li> <li>SetILInstrumentedCodeMap </li> </ul> <h2>FreeLibrary</h2> <h2>Enter/Leave/Tailcall</h2> <h3>New Signatures</h3> <p>If you set the flags that request argument, return value, or frame information, then you must use SetEnterLeaveFunctionHooks3<strong>WithInfo</strong>, and if those flags are not set, then you must use SetEnterLeaveFunctionHooks3.</p> <p>So you must be sure to call SetEventMask <em>first</em>, with the appropriate flags to establish whether you want to inspect arguments, return value, or frame information.  Then, you may call SetEnterLeaveFunctionHooks3 or SetEnterLeaveFunctionHooks3WithInfo <em>second</em>.</p> <h3>Placement of the Calls</h3> <p>There was another change to the Enter/Leave/Tailcall interface; this one on x86 regarding the placement of the call to the Enter probe relative to the prolog.  In V2, the order was:</p> <p>Enter <br />Prolog <br />Leave <br />Epilog</p> <p>The above does not have mirror symmetry, and is also inconsistent with how the probes are called on the 64 bit platforms.  
In V4, the order is now:</p> <p>Prolog <br />Enter <br />Leave <br />Epilog</p> <h2>CORPROF_E_UNSUPPORTED_CALL_SEQUENCE</h2> <p>As you may recall from this <a href="">post</a>, some profiler callbacks come at times when a GC would be dangerous, while some ICorProfilerInfo* methods can themselves trigger a GC.  In V4, calling one of the “may-trigger-GC” Infos from inside one of the “unsafe-for-GC” callbacks fails with CORPROF_E_UNSUPPORTED_CALL_SEQUENCE:</p> <table cellspacing="0" cellpadding="2" width="400" border="1"><tbody> <tr> <td valign="top" width="197">Unsafe-for-GC Callbacks</td> <td valign="top" width="201">May-trigger-GC Infos</td> </tr> <tr> <td valign="top" width="197"><font size="1">ThreadAssignedToOSThread <br />ExceptionUnwindFunctionEnter <br />ExceptionUnwindFunctionLeave <br />ExceptionUnwindFinallyEnter <br />ExceptionUnwindFinallyLeave <br />ExceptionCatcherEnter <br />RuntimeSuspendStarted <br />RuntimeSuspendFinished <br />RuntimeSuspendAborted <br />RuntimeThreadSuspended <br />RuntimeThreadResumed <br />MovedReferences <br />ObjectsAllocatedByClass <br />ObjectReferences <br />RootReferences(2) <br />HandleCreated <br />HandleDestroyed <br />GarbageCollectionStarted <br />GarbageCollectionFinished </font></td> <td valign="top" width="202"><font size="1">GetILFunctionBodyAllocator <br />SetILFunctionBody <br />SetILInstrumentedCodeMap <br />ForceGC <br />GetAppDomainsContainingModule <br />GetClassFromToken <br />GetClassFromTokenAndTypeArgs <br />GetFunctionFromTokenAndTypeArgs <br />GetAppDomainInfo <br />EnumModules <br />RequestProfilerDetach</font> </td> </tr> </tbody></table> <h2>SetILInstrumentedCodeMap</h2> <h1>Loader / DLLs</h1> <p>In this section are changes to how the CLR loads assemblies, as well as the DLLs that make up the CLR itself:</p> <ul> <li>Dynamic Module Names </li> <li>DLL Name Changes </li> <li>MSCOREE’s Exported Hosting Functions </li> </ul> <h2>Dynamic Module Names</h2> <p>Here’s how module names worked in V2:</p> <ul> <li>In V2, any module created via Reflection.Emit will have a base load address of 0. </li> <li>In V2, any module loaded directly by the CLR from disk will have a non-empty Name (and the Name will be the disk path).  
All other modules will have an empty Name. <ul> <li>Modules loaded from disk include: [name non-empty in V2] <ul> <li>Any module loaded via fusion to facilitate execution.  (i.e., normal stuff) </li> <li>Any reflection-only-context module loaded from disk (Assembly.ReflectionOnlyLoadFrom) </li> </ul> </li> <li>Modules NOT loaded from disk include: [name empty in V2] <ul> <li>RefEmit-generated modules (AppDomain.DefineDynamicAssembly) </li> <li>Modules loaded from byte arrays (Assembly.Load) </li> <li>Reflection-only context modules loaded from byte arrays (Assembly.ReflectionOnlyLoad) </li> <li>Managed SQL modules, and any other host that overrides the module loading mechanism </li> </ul> </li> </ul> <p>These V2 heuristics no longer hold in V4, so don’t rely on an empty Name or a zero base load address to classify modules.</p> <p>For profilers that really need to distinguish between regular disk modules, RefEmit-generated modules, byte array modules, etc., we are introducing <strong>GetModuleInfo2</strong>.</p> <h2>DLL Name Changes</h2> <p>The big one: the main engine of the CLR has been renamed in V4, from mscorwks.dll to clr.dll, so update anything that goes looking for the old name!</p> <h2>MSCOREE’s Exported Hosting Functions</h2> <p>Some profilers use C exports from mscoree.dll, typically one or more of the <a href="">Hosting Global Static Functions</a>.  These exports are almost all deprecated in CLR V4 (except for <a href="">CLRCreateInstance</a>).</p> <p>By default, these deprecated exports will only bind to versions of the CLR <em>below V4</em>.</p> <p>Although there are ways to modify this behavior away from the default via configuration files, it is recommended that profilers (and hosts!) stop using C exports from mscoree.dll.  
Instead, profilers that target CLR V4 should be upgraded to use the new <a href="">CLR V4 Hosting and Metahost interfaces</a> wherever they had been using mscoree exports.</p> <h1>Type System</h1> <p>In this section are changes related to how the CLR loads and manages type information:</p> <ul> <li>Collectible Assemblies </li> <li>Type Forwarding </li> <li>GetClassLayout and Value Type Size </li> <li>No More Frozen Strings </li> <li>String Layout </li> </ul> <h2>Collectible Assemblies</h2> <p>See the <a href="">docs</a> for more information and background on the feature itself.  This will affect your profiler in that assemblies, modules, and classes may now unload without AppDomainShutdown callbacks being issued first (because the AppDomain is not shutting down!).</p> <h2>Type Forwarding</h2> <p>Expect to see more use of type forwarding in CLR V4.  See this <a href="">post</a> for more information.</p> <h2>GetClassLayout and Value Type Size</h2> <h2>No More Frozen Strings</h2> <h2>String Layout</h2> <h1>Security</h1> <p>In this section are changes related to security:</p> <ul> <li>Introduction to Security Changes </li> <li>Transparent code in fully-trusted assemblies </li> <li>Conditional APTCA </li> </ul> <h2>Introduction to Security Changes</h2> <p>What follows is a pathetically reduced summary of how security is changing in CLR V4, for the purpose of putting into perspective how your profiler may behave differently as a result.  Please note that my blog is <em>not </em>the place to go for getting general information about managed security.  Check out <a title="" href=""></a> and <a title="" href=""></a>.  Also, there’s a CLR Inside Out article with a great overview of all the security changes in V4: <a title="" href=""></a>.</p> <p>In the V4 model, code is either transparent, critical, or safe-critical.  Critical code may perform security-sensitive operations, but only critical or safe-critical code may call it.  SafeCritical code acts as the bridge: it may perform security-sensitive operations <em>and </em>allows transparent callers.  
This makes SafeCritical code pretty dangerous to write, and it must do thoughtful validation of parameters and careful calls into other Critical code, to ensure it’s not being used maliciously.</p> <p>“Nifty, Dave.  But how is this going to break my profiler?”</p> <h2>Transparent code in fully-trusted assemblies</h2> <h2>Conditional APTCA</h2> <p>It is impossible to summarize “Conditional APTCA (Allow Partially Trusted Callers Attribute)” into one sentence, but I’m going to do it anyway.  Conditional APTCA is a feature where an assembly with security-sensitive code says, “I <em>may</em> allow partially-trusted callers”, and then the host makes the final call on a per-AppDomain basis.  If that doesn’t clarify it for you (and how could it, really?) go read the CLR Inside Out article I referenced above.</p> <h1>Exception Handling</h1> <p>In this section are changes related to how the CLR implements exception handling:</p> <ul> <li>DynamicMethods </li> <li>Windows 7 Unhandled Exceptions </li> <li>GetNotifiedExceptionClauseInfo on 64-bits </li> <li>CLR Exception Code </li> </ul> <h2>DynamicMethods</h2> <p>Hopefully this shouldn’t break you at all, and if anything should make your life better.</p> <h2>Windows 7 Unhandled Exceptions</h2> <h2>GetNotifiedExceptionClauseInfo on 64-bits</h2> <p>A minor improvement has been made to the behavior of GetNotifiedExceptionClauseInfo during nested exceptions on x64.</p> <pre class="code">public static void Foo()
{
    try
    {
        throw new Exception("outer");
    }
    catch (Exception exOuter)
    {
        try
        {
            throw new Exception("inner");
        }
        catch (Exception exInner)
        {
            // Profiler calls GetNotifiedExceptionClauseInfo
        }
    }
}</pre> <p>So hopefully this change should not break you, but if anything, should help.</p> <h2>CLR Exception Code</h2> <H1>V4: Profiler Attach Basics With Sample Code</H1> <H1>The Players</H1> <H1>Inside the Trigger Process</H1> <H2>Meta-whos-its?</H2> <P>To attach to a running process, your trigger uses the new metahost interfaces:</P> <UL> <LI>Get ICLRMetaHost </LI> <LI>Enumerate the CLRs loaded into the target process </LI> <LI>Get ICLRRuntimeInfo for the particular CLR in the target process you want to profile </LI> <LI>Get the corresponding ICLRProfiling </LI> <LI>Call ICLRProfiling::AttachProfiler </LI></UL> <H2>Users and Integrity</H2> <P>For more on users and integrity levels, <A href="" mce_href="">here's</A> some reference from MSDN. </P> <H2>Sample Trigger Source Code</H2> <P>For some sample code to attach a profiler to a process, take a look at the sample uploaded to the MSDN Code Gallery <A href="" mce_href="">here</A>.</P> <H1>Inside the Profilee Process</H1> <P>From your InitializeForAttach implementation, your profiler will call SetEventMask as usual to announce your intentions, and you're off to the races.</P> <H1>Limitations</H1> <P>It was impossible to enable all profiling scenarios for attach in the time we had for the V4 release. So only profilers that do <STRONG>sampling </STRONG>and <STRONG>memory </STRONG>analysis will function properly after attaching to a live process. 
Attempts to use other profiling APIs after attach will be met with CORPROF_E_UNSUPPORTED_FOR_ATTACHING_PROFILER.</P> <H2>Specific Callback Limitations</H2> <H2>Specific Info Limitations</H2> <P>Most of the ICorProfilerInfo* methods are available to your attaching profiler, however some are not--particularly those involved in <STRONG>IL rewriting</STRONG>. Here's a list of all ICorProfilerInfo* methods NOT supported for attaching profilers:</P> <UL> <LI>GetILFunctionBody </LI> <LI>GetILFunctionBodyAllocator </LI> <LI>SetILFunctionBody </LI> <LI>SetILInstrumentedCodeMap </LI> <LI>SetEnterLeaveFunctionHooks* </LI> <LI>SetFunctionIDMapper* </LI> <LI>GetNotifiedExceptionClauseInfo </LI> <LI>All methods related to Enter/Leave/Tailcall </LI></UL> <P>It's expected that future releases of the CLR will enable more API methods for use by attaching profilers.</P> <H2>GC Limitations</H2> <H3>GC Modes</H3> <P>To understand limitations around the GC modes, here's a quick review of the GC modes an app can run under:</P> <UL> <LI><STRONG>Workstation Blocking mode</STRONG>. The thread that triggered the GC performs the GC while all other threads executing managed code must wait. </LI> <LI><STRONG>Workstation Concurrent / Background mode (the default)</STRONG>. Concurrent GC (V1 & V2) allows portions of a full GC to execute while other threads are allowed to run. Background GC (its replacement in V4) takes it one step further, and also allows an ephemeral GC (i.e., gen 0 or gen 1) to execute while a gen 2 GC is executing. </LI> <LI><STRONG>Server mode</STRONG>. Hosts like ASP.NET may choose to enable server mode which creates a heap + dedicated GC thread per CPU. This allows GCs to be fanned out to multiple threads. </LI></UL> <P>Of course, <A href="" mce_href="">Maoni's blog</A> is required reading for anyone who wants to understand how the GC works.</P> <P>So here's the catch. 
What if a V4 app starts up in background GC mode <EM>without</EM> a profiler, and your profiler later attaches?  The GC mode is fixed at startup, so an attaching profiler gets no opportunity to influence it.</P> <P>Of course, you could forcibly turn off concurrent / background mode every time the app starts up via a config file:</P> <TABLE class="" border=1> <TBODY> <TR> <TD class=""> <P><configuration> <BR> <runtime> <BR> <gcConcurrent enabled="false"/> <BR> </runtime> <BR></configuration> </P></TD></TR></TBODY></TABLE> <H3>ObjectAllocated</H3> <P>The ObjectAllocated callback is disallowed for attaching profilers (i.e., COR_PRF_ENABLE_OBJECT_ALLOCATED is not part of the COR_PRF_ALLOWABLE_AFTER_ATTACH mask).</P> <H1>Go Forth and Attach</H1> <H1>V4 Beta 2 Released!</H1> <P>All you profiler writers will want to try out your profiler on the latest and greatest!</P> <P>Information on getting beta 2 of CLR V4 and Visual Studio 2010 is available <A class="" title="here" href="">here</A>.</P> <P>The beta 2 docs for the profiling API start <A class="" title="here" href="">here</A>.</P> <P>There have been some bug fixes between beta 1 & beta 2 that may improve life for your profiler. It's also worth noting that, for those of you dabbling with getting your profiler to attach to a running process, the method to get an instance of the metahost API has been renamed (since beta 1) to <A class="" title="CLRCreateInstance" href="">CLRCreateInstance</A>. If you're looking for more information on how to make your profiler attachable, I will be posting an entry on that as soon as I can.</P> <h1>Type Forwarding</h1> <p>MSDN has a <a href="">topic</a> on type forwarding.  If you Bing type forwarding you’ll find many blogs that talk about it as well.  Yes, that’s right.  I used Bing as a verb.  
Get used to it; Bing is awesome.</p> <h2>Example: TimeZoneInfo</h2> <p>In CLR V4, System.TimeZoneInfo has moved from System.Core.dll down into mscorlib.dll.  So that existing apps don’t break, System.Core.dll now carries a forwarder for the type:</p> <table border="1"><tbody> <tr> <td> <pre>.class extern /*27000004*/ forwarder System.TimeZoneInfo { .assembly extern mscorlib /*23000001*/ }</pre> </td> </tr> </tbody></table> <h2>Walkthrough 1: Observe the forwarding of System.TimeZoneInfo</h2> <p>This walkthrough assumes you have .NET 4.0 Beta 1 installed (see <a href="">here</a>) <strong>and</strong> an older release of .NET, such as .NET 3.5, installed.</p> <p>Code up a simple C# app that uses System.TimeZoneInfo:</p> <p><span style="color: rgb(0,0,255)">namespace</span> test <br />{ <br />    <span style="color: rgb(0,0,255)">class</span> <span style="color: rgb(43,145,175)">Class1 <br /></span>    { <br />        <span style="color: rgb(0,0,255)">static</span> <span style="color: rgb(0,0,255)">void</span> Main(<span style="color: rgb(0,0,255)">string</span>[] args) <br />        { <br />            System.TimeZoneInfo ti = <span style="color: rgb(0,0,255)">null</span>; <br />        } <br />    } <br />}</p> <p>Next, compile this into an exe using a CLR V2-based toolset (e.g., .NET 3.5).  You can use Visual Studio, or just run from the command-line (but be sure your path points to the pre-.NET 4.0 C# compiler!).  Example:</p> <table border="1"><tbody> <tr> <td> <pre>csc /debug+ /o- /r:"C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\v3.5\System.Core.dll" Class1.cs</pre> </td> </tr> </tbody></table> <p>Again, be sure you’re using an old csc.exe from, say, a .NET 3.5 installation.  To verify, open up Class1.exe in ildasm, and take a look at Main().  
It should look something like this:</p> <table border="1"><tbody> <tr> <td> <pre>.method /*06000001*/ private hidebysig static void Main(string[] args) cil managed { .entrypoint // Code size 4 (0x4) .maxstack 1 <strong> .locals /*11000001*/ init ([0] class [<font color="#ff0080"><span style="background-color: #ffff00"><font color="#ff0080">System.Core</font></span></font>/*23000002*/]System.TimeZoneInfo/*01000006*/ ti)</strong> IL_0000: nop IL_0001: ldnull IL_0002: stloc.0 IL_0003: ret } // end of method Class1::Main</pre> </td> </tr> </tbody></table> <p>The key here is to note that the IL uses a TypeRef for System.TimeZoneInfo (01000006) that points to <strong>System.Core.dll</strong>.</p> <p>Ok, so how do we run this pre-.NET 4.0 executable against .NET 4.0?  A config file, of course.  Paste the following into a file named Class1.exe.config that sits next to Class1.exe:</p> <table border="1"><tbody> <tr> <td> <pre><configuration> <startup> <supportedRuntime version="v4.0.20506"/> </startup> </configuration></pre> </td> </tr> </tbody></table> <p>Now run Class1.exe, and the V4 CLR will follow System.Core.dll’s forwarder and resolve the TypeRef over to mscorlib.dll, without Class1.exe being any the wiser…</p> <h2>Walkthrough 2: Forwarding your own type</h2> <p>To experiment with forwarding your own types, the process is:</p> <ul> <li>Create Version 1 of your library <ul> <li>Create version 1 of your library assembly that defines your type (MyLibAssemblyA.dll) </li> <li>Create an app that references your type in MyLibAssemblyA.dll (MyClient.exe) </li> </ul> </li> <li>Create version 2 of your library <ul> <li>Recompile MyLibAssemblyA.dll to forward your type elsewhere (MyLibAssemblyB.dll) </li> <li>Don’t recompile MyClient.exe.  Let it still think the type is defined in MyLibAssemblyA.dll. </li> </ul> </li> </ul> <h3>Version 1</h3> <p>Just make a simple C# DLL that includes your type Foo.  
Something like this (MyLibAssemblyA.cs):</p> <pre class="code"><span style="color: rgb(0,0,255)">using</span> System; <span style="color: rgb(0,0,255)">public</span> <span style="color: rgb(0,0,255)">class</span> <span style="color: rgb(43,145,175)">Foo </span>{ }</pre> <p>and compile it into MyLibAssemblyA.dll:</p> <table border="1"><tbody> <tr> <td> <pre>csc /target:library /debug+ /o- MyLibAssemblyA.cs</pre> </td> </tr> </tbody></table> <p>Then make yourself a client app that references Foo.</p> <pre class="code"><span style="color: rgb(0,0,255)">using</span> System; <span style="color: rgb(0,0,255)">public</span> <span style="color: rgb(0,0,255)">class</span> <span style="color: rgb(43,145,175)">Test </span>{ <span style="color: rgb(0,0,255)">public</span> <span style="color: rgb(0,0,255)">static</span> <span style="color: rgb(0,0,255)">void</span> Main() { Foo foo = <span style="color: rgb(0,0,255)">new</span> Foo(); <span style="color: rgb(43,145,175)">Console</span>.WriteLine(<span style="color: rgb(0,0,255)">typeof</span>(Foo).AssemblyQualifiedName); } }</pre> <p>and compile this into MyClient.exe:</p> <table border="1"><tbody> <tr> <td> <pre>csc /debug+ /o- /r:MyLibAssemblyA.dll MyClient.cs</pre> </td> </tr> </tbody></table> <p>When you run MyClient.exe, you get this boring output:</p> <table border="1"><tbody> <tr> <td> <pre>Foo, MyLibAssemblyA, Version=0.0.0.0, Culture=neutral, PublicKeyToken=null</pre> </td> </tr> </tbody></table> <p>Ok, time to upgrade!</p> <h3>Version 2</h3> <p>Time goes by, your library is growing, and it’s time to split it into two DLLs.  Gotta move Foo into the new DLL.  
Save this into MyLibAssemblyB.cs</p> <p><span style="color: rgb(0,0,255)">using</span> System; <br /><span style="color: rgb(0,0,255)">public</span> <span style="color: rgb(0,0,255)">class</span> <span style="color: rgb(43,145,175)">Foo <br /></span>{ <br />}</p> <p>compile that into your new DLL, MyLibAssemblyB.dll:</p> <table border="1"><tbody> <tr> <td> <pre>csc /target:library /debug+ /o- MyLibAssemblyB.cs</pre> </td> </tr> </tbody></table> <p>And for the type forward.  MyLibAssemblyA.cs now becomes:</p> <pre class="code"><span style="color: rgb(0,0,255)">using</span> System; <span style="color: rgb(0,0,255)">using</span> System.Runtime.CompilerServices; [<span style="color: rgb(0,0,255)">assembly</span>: <span style="color: rgb(43,145,175)">TypeForwardedTo</span>(<span style="color: rgb(0,0,255)">typeof</span>(Foo))]</pre> <p>compile that into MyLibAssemblyA.dll (overwriting your Version 1 copy of that DLL):</p> <table border="1"><tbody> <tr> <td> <pre>csc /target:library /debug+ /o- /r:MyLibAssemblyB.dll MyLibAssemblyA.cs</pre> </td> </tr> </tbody></table> <p>Now, when you rerun MyClient.exe (without recompiling!), it will look for Foo first in MyLibAssemblyA.dll, and then hop over to MyLibAssemblyB.dll:</p> <table border="1"><tbody> <tr> <td> <pre>Foo, <span style="background-color: #ffff00">MyLibAssemblyB</span>, Version=0.0.0.0, Culture=neutral, PublicKeyToken=null</pre> </td> </tr> </tbody></table> <p> </p> <p>And this all despite the fact that MyClient.exe still believes that Foo lives in MyLibAssemblyA:</p> <table border="1"><tbody> <tr> <td> <pre>.method /*06000001*/ public hidebysig static void Main() cil managed { .entrypoint // Code size 29 (0x1d) .maxstack 1 .locals /*11000001*/ init ([0] class [MyLibAssemblyA/*23000002*/]Foo/*01000006*/ foo) IL_0000: nop IL_0001: newobj instance void [MyLibAssemblyA/*23000002*/]Foo/*01000006*/::.ctor() /* 0A000004 */ IL_0006: stloc.0 <strong> IL_0007: ldtoken [<span 
style="background-color: #ffff00">MyLibAssemblyA</span>/*23000002*/]Foo/*01000006*/</strong> IL_000c: call class [mscorlib/*23000001*/]System.Type/*01000007*/ [mscorlib/*23000001*/]System.Type/*01000007*/::GetTypeFromHandle(valuetype [mscorlib/*23000001*/]System.RuntimeTypeHandle/*01000008*/) /* 0A000005 */ IL_0011: callvirt instance string [mscorlib/*23000001*/]System.Type/*01000007*/::get_AssemblyQualifiedName() /* 0A000006 */ IL_0016: call void [mscorlib/*23000001*/]System.Console/*01000009*/::WriteLine(string) /* 0A000007 */ IL_001b: nop IL_001c: ret } // end of method Test::Main</pre> </td> </tr> </tbody></table> <h2>Profilers</h2> <p>In any case, whether you think your profiler will be affected by type forwarding, be sure to test, test, test!</p> <h1>V4: Load your profiler without using the registry</h1> <p>One of the new features in CLR V4 is the ability to load your profiler without needing to register it first.  In V2, we would look at the following environment variables:</p> <p>COR_ENABLE_PROFILING=1</p> <p>COR_PROFILER={<em>CLSID of profiler</em>}</p> <p>We’d then look up that CLSID in the registry to find the path to your profiler’s DLL.</p> <p>We mostly follow the same algorithm in V4, so you can continue registering your profiler if you wish.  However, in V4 we look for one more environment variable first:</p> <p>COR_PROFILER_PATH=<em>full path to your profiler's DLL</em></p> <p>If that environment variable is present, we skip the registry look up altogether, and just use the path from COR_PROFILER_PATH to load your DLL.  A couple of things to note about this:</p> <ul> <li>COR_PROFILER_PATH is purely optional.  If you don't specify COR_PROFILER_PATH, we use the old procedure of looking up your profiler's CLSID in the registry to find its path</li> <li>If you specify COR_PROFILER_PATH <em>and</em> register your profiler, then COR_PROFILER_PATH always wins.  
Even if COR_PROFILER_PATH points to an invalid path, we will still use COR_PROFILER_PATH, and just fail to load your profiler.</li> <li>COR_PROFILER is <em>always required</em>.  If you specify COR_PROFILER_PATH, we skip the registry look up; however, we still need to know your profiler's CLSID, so we can pass it to your class factory's CreateInstance call.</li> </ul><img src="" width="1" height="1">David Broman does Dave look like?<p>Find out on channel 9 as Jon Langdon, Thomas Lai, and I <a href="">discuss</a> some of the new diagnostics features in CLR V4.</p><img src="" width="1" height="1">David Broman your V2 profiler binary on CLR V4<p>Ok, you've installed VS 2010 beta 1, along with .NET FX 4.0 beta 1, and you're wondering--can you run your profiler against this new .NET framework without recompiling the profiler?</p> <p.</p> <p>So, how to use COMPLUS_ProfAPI_ProfilerCompatibilitySetting?  Set it to one of the following 3 values:</p> <ul> <li>EnableV2Profiler</li> <ul> <li>It enables the V2 profiler to be activated by V4 CLR.</li> </ul> <li>DisableV2Profiler (default)</li> <ul> <li>V4 CLR refuses to activate the V2 profiler, and logs an event to the event log.</li> </ul> <li>PreventLoad </li> <ul> <li>V4 CLR does not load the profiler, regardless of the profiler’s version.  This is useful for preventing problems in certain in-process side-by-side CLR scenarios.  More on that in an upcoming post.</li> </ul> </ul><img src="" width="1" height="1">David Broman V4 Beta 1 Released!<p>Now is the time to try out your profiler against the new .NET FX 4.0 Beta 1 bits.  I'll be writing about some gotchas, and how to take advantage of the new features.  But first, get started downloading:</p> <p><a title="Visual Studio 2010 Product Page" href="">Visual Studio 2010 Product Page</a></p> <p>You can find some reference documentation on the new profiling interfaces here:</p> <p><a title="" href=""></a></p><img src="" width="1" height="1">David Broman
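<p>The registry-free activation described above boils down to three environment variables. As a sketch (the CLSID and DLL path below are placeholders, not real values; on Windows cmd.exe these would be <code>set NAME=value</code> lines):</p>

```shell
# Hypothetical values -- substitute your profiler's actual CLSID and DLL path.
export COR_ENABLE_PROFILING=1
# COR_PROFILER is still required: the CLR passes this CLSID to your class factory.
export COR_PROFILER='{00000000-0000-0000-0000-000000000000}'
# COR_PROFILER_PATH, when set, wins over any registry look-up.
export COR_PROFILER_PATH=/c/profilers/MyProfiler.dll
echo "COR_PROFILER_PATH=$COR_PROFILER_PATH"
```

<p>With those set in a process's environment, any CLR V4 runtime started by that process will attempt to load the profiler from the given path.</p>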
http://blogs.msdn.com/b/davbr/atom.aspx?Redirected=true
In September or October of last year, I received an email from someone who had come across CC Teamspace and was wondering if there was a demo site available they could use to evaluate it. I told them, “No, but I can probably throw one up for you.” A month later I had to email them and say, “Sorry, but I haven’t found the time to do this, and I don’t see that changing.” This is clearly not the message you want to send to possible adopters of your software — “Sorry, even I can’t install it quickly.” Now part of the issue was my own meta/perfectionism: I wanted to figure out a DVCS driven upgrade and maintenance mechanism at the same time. But even when I faced the fact that I didn’t really need to solve both problems at the same time, I quickly became frustrated by the installation process. The XML file I needed to import seemed to contain extraneous pages, and things seemed to have changed between MediaWiki and/or extension versions since the export was created. I kept staring at cryptic errors, struggling to figure out if I had all the dependencies installed. This is not just a documentation problem. If we think about the application life cycle, there are three stages a solution to this problem needs to address: - Installation - Customization - Upgrade If an extension is created using PHP, users can do all three (and make life considerably easier if they’re a little VCS savvy). But if we’re dealing with an “application” built using Semantic MediaWiki and other SMW Extensions, it’s possible that there’s no PHP at all. If the application lives purely in the wiki, we’re left with XML export/import as the deployment mechanism. With this we get a frustrating release process, Customization support, and a sub-par Installation experience. The basic problem is that we currently have two deployment mechanisms: full-fledged PHP extensions, and XML dumps. If you’re not writing PHP, you’re stuck with XML export-import, and that’s just not good enough.
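To see why XML dumps make a poor release format, consider what even a trivial cleanup step involves. The script below is purely hypothetical tooling (not part of CC Teamspace), and it operates on a simplified, namespace-free page structure rather than a real MediaWiki export:

```python
import xml.etree.ElementTree as ET

def filter_pages(dump_xml, wanted_titles):
    """Drop <page> elements whose <title> isn't in wanted_titles."""
    root = ET.fromstring(dump_xml)
    for page in list(root.findall("page")):  # list(): we mutate while iterating
        if page.findtext("title") not in wanted_titles:
            root.remove(page)
    return ET.tostring(root, encoding="unicode")

if __name__ == "__main__":
    dump = """<mediawiki>
      <page><title>Form:Task</title><text>...</text></page>
      <page><title>Sandbox</title><text>extraneous</text></page>
    </mediawiki>"""
    # Keep only the page that belongs to the application.
    print(filter_pages(dump, {"Form:Task"}))
```

Even then, a script like this says nothing about extension dependencies or about preserving a user's local customizations — the harder parts of the problem.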
A bit of history: When Steren created the initial release of CC Teamspace, he did so by exporting the pages and hand tweaking the XML. This is not a straight-forward, deterministic process that we want to go through every time a bug fix release is needed. For users of the application, once the import (Installation) is complete (assuming it goes better than my experience), Customization is fairly straight-forward: you edit the pages. When an Upgrade comes along, though, you’re in something of a fix: how do you re-import the pages, retaining the changes you may have made? Until MediaWiki is backed by a DVCS with great merge handling, this is a question we’ll have to answer. We brainstormed about these issues at the same time we were thinking about Actions. Our initial thoughts were about making the release and installation process easier: how does a developer indicate these pages in my wiki make up my application, and here’s some metadata about it to make life easier. We brainstormed a solution with the following features: - An “Application” namespace: just as Forms, Filters, and Templates have their own namespace, an Application namespace would be used to define groups of pages that work together. - Individual Application Pages, each one defining an Application in terms of Components. In our early thinking, a Component could be a Form, a Template, a Filter, or a Category; in the latter case, only the SMW-related aspects of the Category would be included in the Application (ie, not any pages in the Category, on the assumption that they contain instance-specific data). - Application Metadata, such as the version, creator, license, etc. A nice side effect of using a wiki page to collect this information is that we now have a URL we can refer to for Installation. The idea was that a Special page (ie, Special:Install, or Special:Applications) would allow the user to enter the URL of an Application to install.
Magical hand waving would happen, the extension dependencies would be checked, and the necessary pages would be installed. While we didn’t get too far with fleshing out the Upgrade scenario, I think that a good first step would be to simply show the edit diff if the page has changed since it was Installed, and let the user sort it out. It’s not perfect, but it’d be a start. I’m not sure if this is exactly the right approach to take for packaging these applications. It does effectively invent a new packaging format, which I’m somewhat wary of. At the same time, I like that it seems to utilize the same technologies in use for building these applications; there’s a certain symmetry that seems reassuring. Maybe there are other, obvious solutions I haven’t thought of. If that’s the case, I hope to find them before I clear enough time from the schedule to start hacking on this idea.
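To make the brainstorm concrete, an Application page might look something like the sketch below. This is purely illustrative: the Application namespace, the component names, and the install mechanism are all hypothetical; nothing here is implemented.

```
Application:CC Teamspace

Metadata:
* Version: ...
* Creator: ...
* License: ...

Components:
* Form:Task
* Template:Task
* Category:Tasks   (SMW-related aspects only, no instance pages)
```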
https://www.yergler.net/2010/01/25/thoughts-on-deploying-and-maintaining-smw-applications/
As you probably know by now from the abundant number of postings on the issue, VS 2005 beta 1 and the Express editions are now available. A few months ago, when we released the PD5 build of VS, the Add-in wizard had a listing of about 8 different applications that you could build Add-ins for; these included all the Express IDEs. We got a lot of questions about what Express meant, and since it was not an announced product yet, we had to side step the issue. Now you know. Beta 1 also has a few new items that we are now talking about. One of these is what we are calling VSTemplates. Previously, when you needed to create a new project you had two options. The first way was to copy the project into a special directory, then an item would appear in the New Project dialog box. The user would then select the item and the project would be copied into the destination directory, and loaded into VS. This would not be ideal because the source files would not be modified to rename the main class to match the name of the project – in other words, there would be no token replacement on the files; they would just be copied into the destination directory and opened. The second way of creating new projects would be to write a wizard. You would create a component that implements IDTWizard, create a .vsz file that would point to that component, and then put the .vsz file in the correct directory. When the user selected your .vsz file in the New Project dialog box, your wizard would be invoked. Your wizard could then do whatever it wanted to fix up the project templates (such as parameter replacements), then import it into the solution. But writing wizards like this is a real pain; they can be tedious to write, and hard to get right. For this version of VS we have the VS Template Wizard. First you create a project that serves as your template. Next, you create an XML file with the extension .vstemplate.
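A minimal .vstemplate might look roughly like this. The shape below follows the VS 2005-era template schema as I understand it; treat the element names, attributes, and file names as a sketch to orient yourself, not a verified, ready-to-ship file:

```xml
<VSTemplate Version="2.0" Type="Project"
            xmlns="http://schemas.microsoft.com/developer/vstemplate/2005">
  <TemplateData>
    <Name>My Project Template</Name>
    <Description>Sample project with parameter replacement</Description>
    <ProjectType>CSharp</ProjectType>
  </TemplateData>
  <TemplateContent>
    <!-- ReplaceParameters turns on token substitution ($projectname$ etc.) -->
    <Project File="MyProject.csproj" ReplaceParameters="true">
      <ProjectItem ReplaceParameters="true">Class1.cs</ProjectItem>
    </Project>
  </TemplateContent>
</VSTemplate>
```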
The New Project dialog box can read the .vstemplate file and do all the work of generating a project for you. It will open the .vstemplate file, read in the list of files that are contained within the project, search for special tokens in the files and replace them with values meaningful to that project, then load the project into the solution. For example, suppose you entered the project name “MyProject” into the New Project dialog box. When you try to open a template from the New Project dialog box, the wizard will automatically open the files, look for the string “$projectname$”, and replace it with the text MyProject – and you did not need to write a bit of code. That does not mean you cannot write code to customize the template wizard. Suppose you have a standard that you want to name everything with lower case, thus the value $projectname$ will be replaced with myproject, or you have a token in your template that you want to replace:

public class MyWizardComponent : IWizard
{
    void RunStarted(object automationObject, System.Collections.Generic.Dictionary<string, string> replacementsDictionary, WizardRunKind runKind, object[] customParams)
    {
        replacementsDictionary.Add("$myaddress$", "...");
    }

    //Other methods of the IWizard interface go here.
}

Not only does this mechanism work for new projects, but also for new project items. Need to add file(s) to a project such as a custom form which consists of, for example, the files Form.cs, Form.designer.cs, and Form.resx and do modifications on those files? You can set up a .vstemplate to import those files into your project. Packaging up your templates is also easy. Put the files into a .zip file, put them on disk (in the install path to make them available to anybody, or in the ProjectTemplates/ItemTemplates folders buried in …\My Documents\Visual Studio), and they will appear in the appropriate dialog box. Need to quickly generate a .vstemplate file and the project?
No problem, in the next beta we should have a wizard that will package up a project for you. If you would like some examples of how this works, most of the wizards in VS Beta 1 are written using the new vstemplate format. Search your disk for *.vstemplate files to see how to get started.

I like it! Great idea. I’ve heard some rumors that the Express editions won’t support addins at all. Your post seems to contradict this. What’s the real story?

For Beta 1, we did no special work to enable or disable Add-ins – they are available because the Express editions derive from the core Visual Studio DLL. However, future releases of the Express editions may not support Add-ins, Macros, or VSIP packages. Each AppID (our internal terminology – VS, Express C#, Express Web, etc. is each an AppID) has the ability to turn extensibility features on or off, and the current plan of record is that the Express SKUs will turn this off. There are a couple reasons for this decision, some to control the download size, some are marketing related. The intent of an Express edition is to support hobbyists, and while I know many of you can be considered a hobbyist extensibility developer (some do it for a living, some to just make their jobs easier), the focus of the Express edition is on a certain set of scenarios that extensibility development does not support. One other thing, this does not mean that you cannot write wizards or use the Template wizard to generate projects or project items, just that you will not be able to create Add-ins, Macros, or VSIP packages.

I would like a .vstemplate that would allow the wizard to intercept and change the disk location of the .csproj (or whatever) file that is being created.
Basically I don’t want to have to specify a <TemplateContent> element and just have the Wizard emit the files to disk and call dte to load the project into VS:

dte.Solution.AddFromFile(projectPath, false);
solutionFolder.AddFromFile(projectPath, false);

The wizard already receives all the arguments from the open file dialog in $destinationdirectory$ $projectname$, and customParameters gives me the path to the .vstemplate file. It would also be nice if the IWizard interface would pass the SolutionFolder the user right-clicked on to add a new project so my Wizard didn’t have to track it down.

This is a really great feature! Thanks for getting this into the product. I can’t seem to open the .vstemplate file for read. I use the path that I find inside customParameters[0]. Is it possible that VS opened the file for readwrite access? If so, could that be changed to readonly? It would be cool if my wizard could open that file to parse the content. Thanks again for the cool feature! Chris
https://blogs.msdn.microsoft.com/craigskibo/2004/06/29/beta-1-and-the-new-template-wizard/
A new HSDS version, 0.6.3, is available. You can pull the source with tag v0.6.3, or grab the image on Docker Hub: hdfgroup/hsds:v0.6.3

Updates for this release:

- Updated the "Quick Start" guide to use the POSIX driver rather than OpenIO
- Added support for KeyCloak authentication
- Fix for compound types that contain variable-length fields
- Added a tool (hsds/tools/link_mod.py) to update the URI for linked files
- Return 200 rather than 409 for idempotent PUT link requests
- Added support for Kubernetes namespaces
- Various documentation updates

Please respond here if you have any questions or encounter issues with this release.
https://forum.hdfgroup.org/t/hsds-version-0-6-3-released/8270
Foundation game play

Verified with version: 5.3 - Difficulty: Beginner

We have our buttons, but they currently don’t actually do anything. We now need to set up the functionality that happens when a player clicks one of our buttons. To keep things simple, let’s start with assigning an "X" to the grid space when a player clicks a button and then "locking" that button to prevent any further changes. To do this we will need a new script attached to the button prefab.

- In the Project Window, create a new folder called "Scripts".
- Select the Grid Space prefab in the Project Window.
- With the Grid Space prefab selected,
- ... create and add a new Script called "GridSpace".
- File the GridSpace script in the Scripts folder.
- Open the GridSpace script for editing.

In this script, to be able to manipulate the local Button component and the associated Text component on the child GameObject, we will first need an appropriate namespace to be able to use Unity's UI toolset, and then we will need references to the Button component and the associated Text component to set their properties. We will also need to hold the value of the current side, which for now will simply be an "X".

- Add the UI namespace to the top of the script.

using UnityEngine.UI;

For more information on namespaces, please see the information linked below.

- Remove all of the sample code from the GridSpace class.
- Create a public variable for the local Button component called "button".
- Create a public variable for the Button’s associated Text component called "buttonText".
- Create a public string variable for the "X" called "playerSide".

public Button button;
public Text buttonText;
public string playerSide;

UI Buttons can call public functions in associated scripts. We need to create a public function to Do Something when the button is clicked.
In this function we want to set the value of "X" to the grid space and then disable the button functionality by making it non-interactable. A Button can be set to either accept or ignore input by using the Button component's interactable property.

- Create a public function that returns void called "SetSpace".
- In SetSpace,
- ... assign the text property to be "X" from playerSide.
- ... make the button itself non-interactable.

public void SetSpace ()
{
    buttonText.text = playerSide;
    button.interactable = false;
}

Later on in this lesson, we will set up the Grid Space's Button component to call the SetSpace function. The final script should look like this:

GridSpace code snippet:

using UnityEngine;
using System.Collections;
using UnityEngine.UI;

public class GridSpace : MonoBehaviour
{
    public Button button;
    public Text buttonText;
    public string playerSide;

    public void SetSpace ()
    {
        buttonText.text = playerSide;
        button.interactable = false;
    }
}

- Save the script.
- Return to Unity.

We now need to set up the references we have just created in the Inspector.

- Select the Grid Space prefab in the Project Window.
- With the Grid Space prefab selected,
- ... drag the Grid Space prefab onto the Button property.
- ... drag the child Text GameObject from the Grid Space prefab onto the Button Text property.
- ... set the Player Side property to "X" (or any other string value you choose to test with).

This sets up the Grid Space component. Now we need to set up the Button itself.

- Select the Grid Space prefab in the Project Window.
- With the Grid Space prefab selected,
- ... add a new row to the On Click list in the Button component using the "+" button.
- ... drag the Grid Space prefab onto the Object field in the new row.

We are dragging the Grid Space GameObject onto itself, onto the Button component, as the Grid Space GameObject carries an instance of the GridSpace script and we want to call a public function from that instance of the GridSpace script.
- With the Grid Space prefab selected, - ... on the Button component, from the function pull-down list, select GridSpace > SetSpace. - Save the scene. - Enter Play Mode. - Click on any of the spaces in the grid. Clicking on any of the grid spaces should now assign the Player Side character to the space and disable the button. This is hardly a game, but it does present the foundation of our game and game play.
https://unity3d.com/pt/learn/tutorials/tic-tac-toe/foundation-game-play
Re: Is removing elements of AA in foreach loop safe?

On Thursday, 29 August 2019 at 10:11:58 UTC, berni wrote: Do you agree? Or is there a better way to achieve this?

An alternative would be to reassign the AA to the output of std.algorithm.filter()... but assignment between AAs and Ranges isn't so direct, type-wise.

Re: Is removing elements of AA in foreach loop safe?

On Thursday, 29 August 2019 at 10:11:58 UTC, berni wrote: Iterating over some structure and removing elements along the way is always error-prone and should be avoided. But: In case of AA, I've got the feeling that it might be safe: foreach (k, v; ways) if (v.empty) ways.remove(k); Do you agree? Or is there a better way to achieve this?

It compiles and it runs without throwing any RangeError... So it appears to be safe. Otherwise it'd be a bug that there's no error.

Re: How do I execute a sql-file inside D code

On Tuesday, 20 August 2019 at 11:33:33 UTC, Anders S wrote: Use this code to check conn.exec("CREATE DATABASE IF NOT EXISTS boxweb;"); however haven't found a way to run the sql file that create the tables. The file is in the source folder

I understand you're using some API to some SQL implementation which allows you to run SQL commands from strings, but not from files, which is what you want? Just read the file into a string with the D std lib:

import std.file : readText;
conn.exec( readText(fileName) );

Re: 1 new

On Friday, 2 August 2019 at 18:25:28 UTC, jmh530 wrote: When I navigate to I have a message that says "1 new reply" to "your posts." Normally, I click on that "1 new reply" and find the post that's new, go to it, and the message disappears. However, it doesn't seem to go away anymore. I tried looking at many different old posts without luck. At one point it was up to "2 new replies," but I viewed that other post and it went back down to "1 new reply." Does anyone else have this?

For me everything seems to work OK.

Re: Is it possible to disallow import for certain functions?
On Saturday, 27 July 2019 at 11:54:09 UTC, BoQsc wrote: I would like to make sure that function in module that I have won't be imported, is this possible to achieve?

In general, make the function private. But indeed, in your case it is a terrible idea to define a main() function in a module that you plan to import. Move it out into its own main module.

Re: Any easy way to check if an object has inherited an interface?

On Monday, 22 July 2019 at 21:34:18 UTC, solidstate1991 wrote: It seems that I've to write my own function that searches in the given object's classinfo.interfaces since I couldn't find anything related in Phobos.

Do you mean...?

import std.stdio;

interface I {}
class C : I {}

void main()
{
    C c1;
    writeln(is(typeof(c1) : I));
}

No need for Phobos, core language:

Re: assert in unittest has access to private member?

On Sunday, 30 June 2019 at 17:24:03 UTC, Robert M. Münch wrote: I have a case, with templates, where an assert in a unittest can access a private member and I don't know how this can happen.

Modules are the units of encapsulation in D:

Re: Options for unit testing in D?

On Sunday, 23 June 2019 at 01:26:29 UTC, Mike Brockus wrote: I think we made a lot of progress, suddenly it's working and I don't need to include main. Is there a way to indicate based on console output that one executable is the tester and the other is the application?

unittest blocks are skipped completely by the compiler when the -unittest command line option is not passed. So you can leave unittest code embedded in between the rest (specially for proper unit tests of functions and classes) and there is no need to worry about file separation. Even when you write a separate file with tests, all its code inside unittest blocks can be skipped by the compiler. In the case of dub, it has a dedicated option, "dub test", instead of "dub build" or "dub run".

Re: Options for unit testing in D?
On Friday, 21 June 2019 at 04:08:42 UTC, Mike Brockus wrote: I am wondering as to what options are available for a Meson build user when unit testing?

Unit tests are part of the language in D: These are compiled when you (or whatever build system you use) pass the argument -unittest to the compiler. If you never heard about Meson before, no worries. D has an "official" build manager, called dub. Of course you're free to use another one you prefer.

PS also embedded documentation is part of the language (no need for e.g. doxygen):

Re: Where can find fix length array memory layout document

On Tuesday, 18 June 2019 at 12:26:14 UTC, lili wrote: Hi guys: Is the Dlang fix-length array alloc on stack? when a test writeln([1]).sizeof //16 writeln([2]).sizeof //16 Why, What is the fix-length array memory layout.

You are quite confused... [...] is an array literal, not a static array. Those aren't the same thing. When you pass an array literal anywhere in your code, it will in principle be referred to as a slice variable. This will not reallocate the contents. However the slice reference is another variable that takes up two words of space (see code below). This slice type is the same variable type that stores dynamic arrays -- be they allocated or null. Array literals are not necessarily allocated. The compiler is free to embed them into the program machine code itself. If you want a static array, you can just declare it directly, e.g. int[n] arr. Of course you can also generate it out of an array literal with the staticArray std library function.

PS the layout of D arrays is of course linear and contiguous. Both static or dynamic, just like C/C++ static arrays or std::vectors respectively.
Hopefully this code makes things clear:

enum lenInts = int.sizeof;
static assert(lenInts == 4);

int[1] arrStatic;
static assert(lenInts == arrStatic.sizeof);

auto slice = arrStatic[];
alias sliceType = typeof(slice);
static assert(is(sliceType == int[]));

enum lenPointers = size_t.sizeof;               // fyi (unsigned) pointers
static assert(ptrdiff_t.sizeof == lenPointers); // fyi signed pointer diff
static assert(sliceType.sizeof == 2 * lenPointers);
// because a D array reference remembers a pointer (like C) plus the length
// (stored in a word-length integer)

Re: DIP 1016 and const ref parameters

On Thursday, 20 June 2019 at 00:30:35 UTC, Jonathan M Davis wrote: Ultimately, if you want a function to accept both rvalues and lvalues as efficiently as possible, just templatize it and use auto ref.

I'm aware of auto ref, and I've used it to solve this same problem when I had a template, but as you say it works with templates only, not plain non-templated functions.

Either way, const has nothing to do with any of this. You're free to mark a ref or auto ref parameter as const, but it has nothing to do with whether rvalues are accepted, and it will never have anything to do with whether rvalues are accepted. D's const is far too restrictive for it to make any sense to base rvalue stuff on it like they do in C++. The DIP has nothing to do with const and everything to do with ref. [...] Regardless, the refness of a parameter is part of its type, and I'd be very surprised if it were ever changed so that any parameter that was not marked with ref was ever ref.

I know. That's why I look to the general solution of binding rvalues by ref as a solution to bind const ref in particular. By the way it looks to me that most of the demand in the D community to bind to ref is for chaining return ref functions.
I wonder why no one in D is bothered about not being able to use const ref parameters easily, while in C++ everyone is bothered to do it, and passing a read-only struct/class by value won't get past the least alert reviewer. My guess is that in D it's very often ranges that get passed, and these are passed as slices, which are by nature refs that don't reallocate, and can also be decorated const. Still the const ref concern stays more or less. Rust has a solution too (&) of course. My new idea about const only would be a compiler optimization, not part of the language/ABI. The same way as RVO, which under the hood (but not at the ABI level) is implemented as changing the function signature completely. This const optimization would not change the function's ABI either. However I see a practical problem to implement this optimization idea. RVO changes the function signature under the hood, but in the same way for every occurrence/call. This const optimization would need to change the signature, but only in the occurrences where the function is called with lvalues instead, not rvalues. So in practice it would turn every function with const struct parameters into a template; but under the hood, maintaining a single plain signature in the ABI. I wonder if this is as easy or feasible as RVO.

Re: The problem with the conversion.

On Wednesday, 19 June 2019 at 17:28:38 UTC, XavierAP wrote: Also, the return type of SDL_LoadBMP is a pointer, SDL_Surface* not just SDL_Surface.

Or just use auto of course if you prefer:

void load (string path)
{
    import std.string : toStringz;
    auto ab = SDL_LoadBMP (path.toStringz);

Re: DIP 1016 and const ref parameters

On Wednesday, 19 June 2019 at 21:06:48 UTC, XavierAP wrote: Now with an rvalue returned from get, interesting, no copy. Still, I wonder what really happened. Again, moving between stacks would still be work. And a different optimization can explain this non-copy, for example inlining.
My guess as to what may be happening (I've never used a disassembler and I wasn't planning on starting today yet) is simple. The rvalue returned by get() is possibly not popped out from the stack, but rather left in the same place as sum() is called on it. This is actually optimum, but hardly an optimization; rather it's the easiest and least effort for the compiler. Again this would mean no moving -- which is good, because moving is work. And also, this doesn't work in the general case. If parameters are by value, everything works perfectly when I pass rvalues (but we already knew that, not answering my question); however if I pass lvalues they will be copied every time. So my question is open... what about const ref rvalue parameters?

Re: Is it possible to escape a reserved keyword in Import/module?

On Wednesday, 19 June 2019 at 18:56:57 UTC, BoQsc wrote: I would like to make sure that my modules do not interfere with d lang. Is there any way to escape reserved words?

The only reason C# allows this is for interop or code generation for other languages that use the same keyword. For example "class" is an HTML attribute. There is no excuse to do this for any other reason -- and C# gurus would also agree.

I would like to make sure that my modules do not interfere

Then don't name them as keywords :)

Re: DIP 1016 and const ref parameters

Hmmm I know about move semantics, and C++11 etc. I just don't know how related all that is to my original question. :)

On Wednesday, 19 June 2019 at 19:25:59 UTC, Jonathan M Davis wrote: though if I understand correctly with RVO, it may just place the return value outside of the function in the first place to avoid needing to move.

Indeed, unrelated:

func(foo(), bar(42), baz("hello"));

assuming that none of these functions return by ref, func is taking temporaries from several functions, and the spec actually guarantees that they will not be copied. Where does the spec say this?? (This case is actually my question.)
However, please understand that naive moving is not an answer for me. Moving is still work! It would still be more efficient if foo's parameters were references/pointers -- if D functions were able to bind rvalues as such. Theoretically a compiler could optimize by realizing that if a value parameter is not modified by the function (and it doesn't fit in a register etc), it can be read at its original location/address in the caller's stack, i.e. by reference/pointer. Again, nothing to do with moving. But I really doubt the D compilers do this, first because C++ probably don't, or else all C++ programmers are wasting their fingers and screen real state typing const & zillions of times; and second because D would not be able to bind as ref in case the argument happened to be an rvalue. __ OK so I try to experiment myself: /**/ struct XY { int x, y; ~this() { writeln("Destroyed!"); } } int sum(XY p) { return p.x + p.y; } void main() { XY p; p.sum; } /**/ Destroyed! Destroyed! Note that the compiler didn't realize that p isn't needed after the last statement of main, as you thought. /**/ XY get() { XY p; return p; } void main() { get.sum; } /**/ Destroyed! Now with an rvalue returned from get, interesting, no copy. Still, I wonder what really happened. Again, moving between stacks would still be work. And a different optimization can explain this non copy, for example inlining. __ Again, does the spec really mention any of this moving or eliding? I have found nothing. Re: DIP 1016 and const ref parameters On Wednesday, 19 June 2019 at 12:55:09 UTC, Jonathan M Davis wrote: Even in C++, using const ref is not as good a practice as it once was, because they added move constructors, finally making object moveable. The result is that in many cases, it's actually more efficient to just copy values in C++ rather than use const &, but which is better does depend on the code. 
As for D, unless you're dealing with large objects, odds are that worrying about passing by value is pointless. D classes are reference types, and D structs have move semantics built-in. So, you don't get as many copies as you would in C++98, and the situation is probably better than newer versions of C++, since IIRC, C++ classes aren't moveable by default, whereas D structs are. In general, you're probably better off just passing by value unless you find that a particular piece of code is inefficient when benchmarking. Either way, you don't want to be slapping const on everything the way you would in C++, because D's const is far more restrictive. So, while it still can be quite useful, odds are that if you start using it heavily, you're going to run into problems fast - especially since casting away const and mutating an object is undefined behavior in D. D's const has no back doors. If something is const, then you can't mutate it unless you also have a mutable reference to the same data. And because const is transitive, you pretty much can't get mutable stuff from const stuff like you frequently can in C++ (e.g. in C++, it's possible to have a const container of mutable objects, whereas in D, once part of something is const, everything within that part is const).

As for the DIP, I'd suggest watching Andrei's recent dconf talk on the subject:

- Jonathan M Davis

I am not talking about cases that would be candidates for moving, or where const would be any problem. If you want an example for the sake of argument:

struct Matrix3D
{
    Matrix3D opBinary(string op)(const ref Matrix3D rhs) const;
}
unittest
{
    auto a = Matrix3D.random;
    assert(a == a * Matrix3D.identity);
    assert(a == a + Matrix3D.zeros);
}

I did watch Andrei's talk, actually this is where I started and learned about the DIP(s), then I was confused that 1016 had been rejected, and smelling that it may be "reopening" I was not sure where I can find the "index" of DIPs under discussion or whatever...
:)

IIRC, C++ classes aren't moveable by default, whereas D structs are.

What do you mean that structs are movable? I know about RVO (in both D and C++, supposedly guaranteed by all compilers in practice, but not by language spec -- why not D?), but what about passing up the stack as here?

Re: The problem with the conversion.

On Wednesday, 19 June 2019 at 14:58:44 UTC, drug wrote:
19.06.2019 17:52, Den_d_y wrote:
void load (const (char *) path) { SDL_Surface ab = SDL_LoadBMP (path); a = SDL_CreateTextureFromSurface (ab); SDL_FreeSurface (ab); }

try the following:

```
void load (string path)
{
    import std.string : toStringz;
    SDL_Surface ab = SDL_LoadBMP (path.toStringz); // toStringz converts string to null terminated char*
    auto a = SDL_CreateTextureFromSurface (ab);
    SDL_FreeSurface (ab);
}
```

Also, the return type of SDL_LoadBMP is a pointer, SDL_Surface* not just SDL_Surface. Indeed, in your code do use the D string path, not char*, and use toStringz when passing to C APIs. Also, in D const(char)* would not be the same as const(char*). But don't worry about this and use string.

DIP 1016 and const ref parameters

I often use a pattern of having const ref struct parameters (as in C++) but this doesn't work in the case of rvalues. The workaround of defining an overload that calls its own name is terrible. I understand there was a DIP 1016 by Manu asking for this case to work. As far as I can tell, this was rejected, but later reconsidered, and now Andrei is starting up a new one[1]? Apologies but I'm not sure where these discussions are centralized. But if anyone has any idea or guess how seriously and in what kind of time frame this could be expected, that would be my first side question.

My main learning question is whether the const ref parameter pattern is good in D? In C++ I see it everywhere, but are there better alternatives, in particular in D, or is there no point because some copy elision optimization may be guaranteed?
In short am I right in writing const ref parameters, or am I doing something silly (and as important as this DIP may otherwise be, it wouldn't affect me as much as I think)?? As far as I can see, this DIP would be helpful for two use cases: const ref, and return ref with method chains. Are there others?

__

[1]

Re: How does this template work?

On Sunday, 16 June 2019 at 15:11:29 UTC, Robert M. Münch wrote:
How does the observerObject Template and function work? I'm struggling because both use the same name and how is the template parameter R deduced/where is it coming from? Looks like it's somehow implicitly deduced.

Eponymous templates: "Templated types" are actually particular cases of eponymous templates:

class ObserverObject(R, E...) {...}

is equivalent to

template ObserverObject(R, E...) { class ObserverObject {...} }

So this is I think how everything is made to work with the same compiler engine, both individual "templated types" and "eponymous templates". It's considered idiomatic, but if you don't like it in your case, it's very easy for the author to avoid it: just make the names different in any way.

template Observer(E)
{
    ObserverObject!(R, E) Object(R)(R range)
    {
        return new ObserverObject!(R, E)(range);
    }
}

auto observer = Observer!int.Object(TestObserver());

Re: Proper destructor for a class containing dynamic array of objects

On Friday, 14 June 2019 at 11:10:58 UTC, rumbu wrote:
On Friday, 14 June 2019 at 07:52:24 UTC, Marco de Wild wrote:
On Thursday, 13 June 2019 at 16:08:52 UTC, Mike wrote:
As opposed to Java, D's member variables are statically initialised.

Is there any documentation about this? I find it unexpected.

«All member initializations must be determinable by the compiler at compile time, hence there is no order-of-evaluation dependency for member initializations, and it is not possible to read a value that has not been initialized. Dynamic initialization is performed by a static constructor»

Re: Does slicing have an effect?
On Tuesday, 21 May 2019 at 20:44:49 UTC, rikki cattermole wrote:
On 22/05/2019 8:31 AM, Dennis wrote:
Does slicing have an effect I'm not aware of, or is this a bug?

It could have an effect if a was a struct/class via operator overloads. But in this case it should probably be a bug.

It doesn't look right, even for custom types; because D (and in particular Walter) is against changing the meaning of operators when overloading (e.g. << in C++). At least unofficially, although I seem to recall it is enforced in a few places elsewhere. Going back to the original question, it may be a bug (for arrays). And if so, and if D wants to enforce consistent operator semantics when possible, it may be considered a bug for any type.

Re: alias parameters, what for?

Thanks, I get your points. I do think they make more sense for the standard library than in every general case (packages for specific uses). Namely, alias parameters provide absolute genericity (instead of overloading every possible use case, or else constraining the API by design), and ultimate runtime performance (always at the expense of compile time).

alias parameters, what for?

What are the benefits of alias parameters, compared to specifying the template parameters fully? In most examples, at places in Phobos, and in Andrei's and Ali's books, alias parameters are used for functions (in the general sense). Why this instead of specifying and typing the parameter functions or delegates?

This brings another question: why is it so widespread in Phobos etc. to template these function parameters instead of declaring them as run-time parameters? Is this really always considered beneficial, why? For one it looks like it saves a lot of typing and makes the whole declaration more readable (not to mention possible attribute soups); but the type-checking code just moves to additional template constraints. And finally what's the advantage of alias parameters in general, besides functions/delegates?
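As a minimal sketch of the two styles being compared here (applyAlias and applyDelegate are made-up names, not from Phobos):

```d
import std.stdio;

// compile-time: the callable is an alias template parameter;
// a distinct instantiation is generated per callee, which can inline
int applyAlias(alias fun)(int x) { return fun(x); }

// run-time: the callable is an ordinary delegate parameter;
// one compiled function, invoked indirectly through a pointer
int applyDelegate(int delegate(int) fun, int x) { return fun(x); }

void main()
{
    writeln(applyAlias!(a => a + 1)(41));   // 42
    writeln(applyDelegate(a => a + 1, 41)); // 42
}
```

Both calls look the same at the use site; the difference is whether the indirection survives into the compiled code.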
But I don't see other uses in practice, although they're possible.

Re: Subtyping of an enum

On Monday, 15 April 2019 at 12:38:59 UTC, XavierAP wrote:
More generally you insist on modules and namespaces to be different concepts, which they are (pointlessly) for C++, but not for D (purposely).

Here I should say packages instead of modules... but the general argument stays. Anyway your design is up to you :) but sub-typing is not symmetric, in D or any language.

Re: Subtyping of an enum

On Monday, 15 April 2019 at 10:34:42 UTC, Anton Fediushin wrote:
The problem here is that I want to keep methods that are related to an enum inside of this enum for purely aesthetic and organizational purposes. ... These global functions pollute the global namespace.

If you have defined a global `toString(x)`, UFCS means of course you can also call it as `x.toString`... So it's debatable what polluting means. More generally you insist on modules and namespaces to be different concepts, which they are (pointlessly) for C++, but not for D (purposely).

Re: Subtyping of an enum

On Monday, 15 April 2019 at 10:34:42 UTC, Anton Fediushin wrote:
On Monday, 15 April 2019 at 10:06:30 UTC, XavierAP wrote:
Isn't this how subtyping works for integers and other types? For example, you have subtyped an integer and added some new methods to it?

Yes (leaving aside whether stuff is private or nested) but you are using the types' relationship the other way around. You have:

static assert(is(Enum : internal));

But you are defining and calling fun() as if it were the other way around (internal : Enum).

Re: Subtyping of an enum

On Monday, 15 April 2019 at 08:39:24 UTC, Anton Fediushin wrote:
Hello! I am currently trying to add a custom `toString` method

Several remarks... First of all, strings can be compared (alphabetically) as well as integers, e.g.

assert("foo" > "bar")

Perhaps not your use case, but worth noting.
You obviously need to re-think your problem and your design :)

Obvious solution is to wrap an enum in a structure and utilize 'alias this' for subtyping like this:

Actually the obvious solution (not sure if it otherwise works for you) would be to take advantage of D's Uniform Function Call Syntax [1] and define toString as a global function that can be called as a method:

enum Fubar { foo, bar }

string toString(Fubar fb) { return "It works."; }

void main()
{
    import std.stdio;
    writeln(Fubar.foo.toString);
}

_

[1]

It's not a bug. I finally found it in the spec: "All member initializations must be determinable by the compiler at compile time, hence there is no order-of-evaluation dependency for member initializations, and it is not possible to read a value that has not been initialized. Dynamic initialization is performed by a static constructor"

May this be a bug? Static mutable is a theoretically valid use case. It looks to me that D often optimizes by evaluating at compile time. But in this case what is a possible optimization breaks a valid program when the optimization is found not to be possible. Other languages e.g. C# would by spec run these static initializations at run-time during the first use of C.

Re: What Does @ Mean?

On Monday, 8 April 2019 at 11:58:49 UTC, Ron Tarrant wrote:
And while I'm asking, does an underscore have special meaning when used either at the beginning or end of a variable name?

In D, @ is used as Adam has explained as a prefix indicating attributes (either user-defined ones or, confusingly enough, some of the standard ones). The only other example of a language using @, in an almost but not quite completely different way, is C#. It's also a prefix that allows you to define names that would collide with reserved words, for example

string @class = "menu";

Of course you should never do this unless you absolutely need it for interop.
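Both D uses of @ can be sketched together like this (the Tagged type is made up for illustration):

```d
// a user-defined attribute (UDA) type, made up for this example
struct Tagged { string note; }

@Tagged("example")   // user-defined attribute
@safe @nogc nothrow  // built-in attributes; note that nothrow takes no @
int twice(int x) { return 2 * x; }

void main()
{
    // __traits(getAttributes, ...) retrieves only the user-defined ones
    static assert(__traits(getAttributes, twice).length == 1);
    assert(twice(21) == 42);
}
```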
Underscore prefixes are used in some languages by pure user convention, mainly for private members (fields), to avoid name clashing. For example in D you could have a public property length and a private member _length. Python takes this a step further. Since it supports classes but no public/private visibility at all, users and IDEs have adopted the convention of using (one or two) underscore prefixes to signal members that aren't meant to be accessed publicly from outside the class, even if there's nothing stopping you (besides auto code completion not showing them). For C and C++ the convention (recognized by the standards) is different: names prefixed by any number of underscores are all reserved; basically because the global namespace is so badly polluted already. In this case I've seen some other annoying conventions, for example private member variables being prefixed with m_

Re: Poor regex performance?

On Thursday, 4 April 2019 at 09:53:06 UTC, Julian wrote:
Relatedly, how can I add custom compiler flags to rdmd, in a D script? For example, -L-lpcre

Configuration variable "DFLAGS". On Windows you can specify it in the sc.ini file. On Linux:

Re: Distinguish float and integer types from string

On Monday, 11 March 2019 at 15:03:39 UTC, XavierAP wrote:
What compiler version are you using? I on the other hand was surprised that I needed the try-catch above, after having already checked isNumeric. The documentation claims that the conversion to int or long would truncate, but my compiler (v2.084.0) throws instead.

Of course now I realize that using try-catch I no longer need to check isNumeric... My design didn't use try-catch but I had to add it because std.conv:to behaves differently from the documentation: Not sure if I need to update my DMD, or it's the documentation that's out of date, or something else is wrong.
Re: Distinguish float and integer types from string

On Saturday, 9 March 2019 at 18:11:09 UTC, Jacob Shtokolov wrote:
One of the tasks was to take a string from STDIN and detect its type. There were a few options: Float, Integer, String and "something else" (which, I think, doesn't have any sense under the scope of the task).

Another std-based solution I came up with:

bool isInteger(string str)
{
    if (str.isNumeric)
    {
        try { return str.to!long == str.to!real; }
        catch (ConvException) { return false; }
    }
    else return false;
}

I tried to use std.conv.to and std.conv.parse, but found that they can't really do this. When I call `data.to!int`, the value of "123.45" will be converted to int!

What compiler version are you using? I on the other hand was surprised that I needed the try-catch above, after having already checked isNumeric. The documentation claims that the conversion to int or long would truncate, but my compiler (v2.084.0) throws instead.

Re: Best practices of using const

On Wednesday, 13 February 2019 at 11:32:46 UTC, envoid wrote:
Is there an article that explains best practices of using const in D?

Chapter 8 of Andrei Alexandrescu's book The D Programming Language.

Re: static arrays at runtime with templates ?

On Monday, 4 February 2019 at 19:14:38 UTC, Emil wrote:
Can std.array.staticArray build static arrays with size known only at run time ? Now that I am not overcome with enthusiasm it looks like it too needs to know the size.

A static array's size must be known (or computed) at compile time, by definition... There can be no exception; staticArray() gets or infers the size from its template/compile-time arguments. For one, the trivial example on dlang.org/phobos

auto a = [0, 1].staticArray;
static assert(is(typeof(a) == int[2]));

is equivalent to

int[2] a = [0, 1];
static assert(is(typeof(a) == int[2]));

Is it better? Depends on...? No it really isn't...
However one of the possibilities of D is the ability to generate and execute quite some code at compile time by means of meta-programming; and I guess here's where staticArray() may end up being useful enough to have merited introduction into the std library; not for trivial uses where a straightforward T[n] declaration is preferable, being possible... If there is something you want to solve in a program of yours, there may be another way, as H.S. Teoh suggests. If you're just trying out stuff that's also fine :) there's still more.

Re: static arrays at runtime with templates ?

On Sunday, 3 February 2019 at 16:33:48 UTC, Emil wrote:
Is this for real, static arrays at runtime without manually allocating memory ? Is this legitimate or should I expect problems ?

Static arrays are always allocated at run-time. It's the size of the array that must be known at compile-time (in this case via a template parameter). What's the advantage (or the essential difference) of

auto data = static_array!(int, 5);

instead of

int[5] data;

? Just asking ;) but it's good to play. What does not compile on my end are the run-time parameters (3) and (2)...

Return Value Optimization: specification, requirements?

I've heard here and there that D guarantees RVO, or is even specified to do so... Is it spelled out in the language specification or elsewhere? I haven't found it. Do you know the exact requirements for RVO or NRVO to be possible in theory, and to be guaranteed in practice in D? Does it depend only on what is returned, or does it depend on how it's constructed? I know I can debug to find out case by case, but that's the kind of C++ stuff I want to avoid... I want to know the theory/norm/spec. Thanks

Re: Doubt about this book: The D Programming Language

On Sunday, 16 December 2018 at 18:37:15 UTC, Marko wrote:
On Amazon The D Programming Language has good reviews but it's 8 years old. So is this book still relevant today?

Yes, I would recommend it.
It is meant to be comprehensive but introductory, so many language or library changes since are out of its scope anyway. It's then also quite different from a cookbook approach for example -- depends what you're looking for. You may perhaps compare it more closely with Ali's book, but unfortunately I haven't read that one.

Re: Nested template arguments

On Wednesday, 22 August 2018 at 14:48:57 UTC, Alex wrote:
Because it could be meant as the argument to some templates to the left. Like (foo!bar)!x Sure, it would be a coincidence, if both will work. However, templates are not something where you can simply imply the associative property, I think.

Of course there isn't an associative property... But I was thinking that without brackets the parser could fall back to some default "left to right" precedence, as would happen with operators, which needn't be associative either.

Nested template arguments

Why is foo!bar!x not understood as foo!(bar!x) but instead gives an error "multiple ! arguments are not allowed"? Precisely because multiple "!" can never belong to the same instantiation, why does the parser not understand, without needing brackets, that the rightmost template should be nested as the argument for the next one to the left?

Re: Templated operator overloading

On Wednesday, 22 August 2018 at 12:36:39 UTC, Simen Kjærås wrote:
Since both your opOpAssigns match equally, the compiler throws up. The solution is to add some sort of restriction:

This doesn't happen apparently: the operator has a left and a right side; even if both types define the operator, only one of them is on the left at each call. It works now after Ali corrected my stupid syntax :)

Re: Templated operator overloading

On Wednesday, 22 August 2018 at 13:20:01 UTC, aliak wrote:
"void opOpAssign(string op, T)(ref Tthis, const ref T x)" looks like the wrong signature for opOpAssign.

Oh I'll put on my stupid hat now...
I realize I had copy-pasted the wrong syntax from the global function attempt, but I swear I thought I had re-typed and tested the right one... It's working now :)

Templated operator overloading

I've been trying some things to template operator overloads. The reason is that I want very similar code for different types, but I can't use polymorphism, as they're structs rather than classes. Perhaps this choice is not as advantageous as I think, and I may change this design from structs to classes, or else the code duplication would be small and never subject to change. But now I'm just trying for the sake of learning, to find out what works or not in terms of templated operator overloading, and whether the reason something doesn't work is by design and mentioned in the specification, or just an arbitrary result of some unspecified parsing/lowering step order, or depends on the compiler (I'm using dmd). Since there are in my case two similar types (below just a minimal dumb proof of concept), I want the operator(s) to work within the same type, or also with the other.
The following code actually works, including type parameter inference, and const ref to avoid struct copying:

//
import std.stdio;
struct S1 { void opOpAssign(string op, T)(const ref T x) { writeln(this, op, x); } }
struct S2 {}
void main() { S1 s1; S2 s2; s1 *= s2; }
//

When I want to have the same operator overloading code in both types however, I can't make it work:

//
private mixin template operator(Tthis)
{
    void opOpAssign(string op, T)(ref Tthis, const ref T x) { writeln(this, op, x); }
}
struct S1 { mixin operator!S1; }
struct S2 { mixin operator!S2; }
void main()
{
    S1 s1; S2 s2;
    s1 *= s2;              // Error: s1 *= s2 is not a scalar
    s1.opOpAssign!"*"(s2); // Error: template test.S1.operator!(S1).opOpAssign cannot deduce function
}
//

And a final try with a global templated function instead of a mixin template:

//
private void opOpAssign(string op, Tthis, T)(ref Tthis that, const ref T x) { writeln(that, op, x); }
struct S1 {}
struct S2 {}
void main()
{
    S1 s1; S2 s2;
    s1 *= s2;              // Error: s1 *= s2 is not a scalar
    s1.opOpAssign!"*"(s2); // OK!
}
//

Re: Auto keyword and when to use it

On Tuesday, 21 August 2018 at 21:37:00 UTC, QueenSvetlana wrote:
I had a misunderstanding about the keyword auto because I wrongfully believed that it made the code like Python

Exactly, you are still thinking like D is Python, or also dynamically typed. :) You will get errors when compiling that Python wouldn't detect until run-time (or with your private methods).

- A declaration with auto needs to include an initialization.
- The code will be equivalent as if "auto" were replaced with the inferred type. It is not left for later to check.

I'm not terribly bothered btw by "Type = new Type()" but often type names get too long or include namespaces...
"mylib.numeric.squareObjectWithPointyCorners = new mylib.numeric.squareObjectWithPointyCorners()" Re: Auto keyword and when to use it On Monday, 20 August 2018 at 17:52:17 UTC, QueenSvetlana wrote: So I can't declare class level variables with auto, correct? only local method variables? One difference between D's auto and C#'s var or C++'s auto is that the latter languages allow automatically typed declarations only for local (method/function-scope) variables, and forbid them for class or struct member variables (aka fields); whereas D allows auto anywhere (even function/method return type! -- which C# and C++ allow as well but only case of anonymous methods/lambdas). I'm in favor of the AAA ("Auto" Almost Always) paradigm, but as long as the type if obvious to a human reader. I don't favor them for numeric types for this reason (non obvious bitsize, signedness...) It's up to each programmer. Only if someone likes "Type x = new Type()" instead of "auto x = new Type()" I would say they're clearly wrong. Re: New programming paradigm On Wednesday, 6 September 2017 at 23:20:41 UTC, EntangledQuanta wrote: So, no body thinks this is a useful idea or is it that no one understands what I'm talking about? I think it may be a good use, although I haven't invested so much time looking into your particular application. It looks like a normal, sane use of templates. This is what they are primarily intended for. And yes, combining them with mixins provide some great possibilities that are not available in many other languages. Have you seen how D recommends avoiding duplicate code when overloading operators, also by means of mixins: I thought you may come from C since you mention void pointers as an alternative. But that is not considered the normal way in D, your new way is far better, and more "normal". It looks you may be mistaking what happens at "run-time", or it may be a way of speaking. 
In D, templates called with different types generate different code already at compile-time -- even if in the source code you write, it all looks and works so polymorphically. This is a similar approach to C++'s, and it's why D generics are called "templates"; as opposed for example to C#, where generics are not compiled into static types and keep existing at run-time. Andrei discusses both approaches in his book, and why the first one was chosen for D.

Re: multi-dimensional array whole slicing

On Tuesday, 25 April 2017 at 20:46:24 UTC, Ali Çehreli wrote:
I think it's still consistent because the element type is not int in the case of multi-dimensional arrays. [...]

int[3][4] b;
b[] = [1, 1, 1];

It is consistent, I just miss the possibility to more easily initialize multi-dimensional arrays uniformly in the same way as uni-dimensional ones. I do not mean it would be good to change the current behavior. I think the best solution would be for D to implement built-in truly multi-dimensional arrays like T[,] as well as the existing (jagged) arrays of arrays T[][]. That's what C# does. The former could maybe even be lowered into jagged arrays (together with their initializations and slicings). But again most people probably don't miss T[,] built-in arrays, especially since we can implement such [,] indexing for custom types. So there's not a strong use case. But actually where I'm using multi-dimensional built-in arrays right now is in the private storage of a custom multi-dimensional type. Then I have the choice of either using them and living with this, but forwarding indexing transparently; or using a uni-dimensional array as private storage and mapping from 2D to linear during indexing...

Re: function type parameter inference not working

On Tuesday, 25 April 2017 at 19:57:30 UTC, Ali Çehreli wrote:
This is an intentional limitation of D. It's not possible to bind rvalues (temporaries) to reference parameters.
The best option here is 'auto ref':

Aha, I had forgotten about the ref (obviously, otherwise I wouldn't have passed a temporary even in the unit test -- I'm embarrassed). If that's the reason why it doesn't work, I'm satisfied. It would be helpful if the error message talked about the ref qualifier as the root cause instead of the consequent failure to infer template parameters.

Re: function type parameter inference not working

On Sunday, 23 April 2017 at 19:40:39 UTC, ag0aep6g wrote:
Please post self-contained code. When I fill the gaps you left, it works for me:

Found it! It stops working (DMD v2.073.0 for Windows) if it has to infer the type of a temporary local variable -- constructed in place of the argument:

struct Matrix(size_t nr, size_t nc) {}
struct Vector(size_t n) {}
void assembleMass1D(Mat, Vec)(ref Mat M, const ref Vec x) { /* ... */ }

Matrix!(2,2) M;
Vector!2 V;
assembleMass1D(M, V);          // OK
assembleMass1D(M, Vector!2()); // ERROR template cannot deduce function

Is this a bug?

Re: function type parameter inference not working

On Sunday, 23 April 2017 at 19:40:39 UTC, ag0aep6g wrote:
Please post self-contained code. When I fill the gaps you left, it works for me:

Interesting, thanks a lot. I'll test and narrow down what's in my code preventing this from working (I can't really think of anything) and I'll report back.

function type parameter inference not working

It's not working for my case, while I see no special reason why it couldn't. Also I can't find specific inference rules at Is it a problem that the types to be inferred are in turn also templated? Any workaround that can make inference work? Otherwise I would re-consider my design rather than having to specify types already available in the runtime arguments :(

void assembleMass1D(Mat, Vec)(ref Mat M, const ref Vec x) { /* ... */ }
Matrix!(2,2) M = /* ... */;
Vector!2 V = /* ...
*/;
assembleMass1D(M, V); // ERROR template cannot deduce function from argument types

Re: multi-dimensional array whole slicing

On Sunday, 23 April 2017 at 09:06:35 UTC, Ali Çehreli wrote:
It took me a while to convince myself that there is no bug here. The problem, as is obvious to others, ;) a whole slice of a whole slice is still the same slice.

Ha, you're right, I hadn't realized. But I still have a problem. For both multi-dimensional and uni-dimensional arrays a[] and a[][] are the same. And yet, a[] has a different type in both cases, and a[]=1 compiles for uni-dimensional but not for multi-dimensional.

Re: multi-dimensional array whole slicing

On Saturday, 22 April 2017 at 22:25:58 UTC, kinke wrote:
int[3][4] arr = void;
(cast(int[]) arr)[] = 1;
assert(arr[3][2] == 1);

Thanks... I think I prefer to write two loops though :p I wish D built-in arrays supported [,] indexing notation like C# (or as you can do in D for custom types)

multi-dimensional array whole slicing

I can do:

int[3] arr = void;
arr[] = 1;

But apparently I can't do:

int[3][4] arr = void;
arr[][] = 1;

What is the best way? What am I missing?

Re: DMD requirements (VC runtime version) where?

On Friday, 21 April 2017 at 11:37:07 UTC, Mike Parker wrote:
sc.ini manually is the better option if you don't need or want the 2015 build tools.

Thanks! Nevertheless I think it would be good that the supported version of VS is documented on the website of DMD, just like it is on Visual D's. When the installer is updated, this document/webpage can be updated as well. I don't immediately know how to find VS paths/exes and how to insert them manually into sc.ini... But the VS extension manager is giving me some weird error message, so I'll just switch to 2015, even uninstall 2017.

DMD requirements (VC runtime version) where?

Visual D has just added support for VS 2017 (kudos), whereas before I had to stick with 2015 at the latest.
But DMD has an additional dependency on VS (for x64), and it is not documented, as far as I've been able to find, which versions. So I just tried, and now the DMD Windows installer complains that it can't find any compatible versions of MSVC and the Windows SDK, after having installed 2017. Is this documented somewhere that I've missed? Otherwise, would it be possible to add this information somewhere, namely which versions of what dependencies are required for what features?

Re: Can we disallow appending integer to string?

On Wednesday, 19 April 2017 at 14:50:38 UTC, Stanislav Blinov wrote:
On Wednesday, 19 April 2017 at 14:36:13 UTC, Nick Treleaven wrote:
Why is it legal to append an integer?

Because integrals implicitly convert to characters of same width (byte -> char, short -> wchar, int -> dchar).

Huh... I hadn't used it, but I'd been assuming, probably biased from C#, that

str ~ i

would be equivalent to

str ~ to!string(i)

instead of

str ~ cast(char) i

Now I see the problem too...

Re: Use of "T"

On Wednesday, 12 April 2017 at 14:46:20 UTC, solidstate1991 wrote:
Yes, templates. I've looked this up a bit, and I found it. I want to use it to use the dictionaries for different things than string<->int conversion.

T is just the common name of a (type) parameter, mostly whenever the template is so generic that you can't think of a more informative (template) parameter name. Just like you could use "str" for a string or "i" for an int name. But in your case you could use a more informative name such as "keyType", since you are describing keyType -> valueType dictionaries, also called associative arrays. Moreover these dictionaries are built-in basic types in D:

This should be the Dictionary(int), string<->string conversion should be done with Dictionary(string). Int<->string should be done as Dictionary(string,int) if possible.
So according to the spec linked above, those examples would be declared:

string[int] dict1;
string[string] dict2;
int[string] dict3;

Re: Why is this legal?

On Wednesday, 29 March 2017 at 09:50:10 UTC, abad wrote:
Is this on purpose and what's the rationale?

In Andrei's book, chapter 6.9.1 "the non-virtual interface (NVI) idiom" answers your question. It cites this article by Herb Sutter as the originator of the idea:

Re: How to continue after the book?

On Tuesday, 28 March 2017 at 07:27:31 UTC, I Lindström wrote:
I do have a need for which I've been trying out a few languages and D seems by far the best for me. Should I just start doing that project and learn as I go by googling and asking here, or are there some other things you did before starting your first "real" project.

If you have a project in mind and that's the reason why you've looked into D, just start it now. After reading a book, and preferably before, doing is the way to learn programming. Worst case, you'll decide later to re-design a lot of your code. But you will have used your time learning much more, more relevant for your specific needs, than with any toy exercises.

Re: how to define my own traits

On Monday, 27 March 2017 at 16:28:13 UTC, Gary Willoughby wrote:
Even Andrei was baffled:

I see... And Walter went further and reported it as a DMD bug (still open, clearly). It's what I mean. This strange behavior is more typical of C++; in D this is a rare corner case, but I can sympathize if Andrei and Walter don't want to accumulate issues like this one in the language, and on top of that fill the standard library and user code with this kind of workaround. First, the solution is a hack, but ideally it wouldn't be needed; the original code should have worked with inout ranges all the same. So ideally DMD should be fixed to make the hack unnecessary. Andrei's proposal of deprecating inout entirely is also consistent, at the expense of losing a feature.
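The duplication that inout avoids can be sketched like this (a minimal example, not the Phobos code under discussion):

```d
// one inout signature replaces the mutable/const/immutable overload trio:
// the qualifier of the argument is transferred to the return type
inout(int)[] firstHalf(inout(int)[] a)
{
    return a[0 .. $ / 2];
}

void main()
{
    int[] m = [1, 2, 3, 4];
    immutable int[] i = [1, 2, 3, 4];
    assert(firstHalf(m) == [1, 2]); // mutable in, mutable out
    assert(firstHalf(i) == [1, 2]); // immutable in, immutable out
    static assert(is(typeof(firstHalf(i)) == immutable(int)[]));
}
```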
When I first read about inout as a device to obviate code duplication typical in C++ const ref overloads, I liked it but I assumed it was implemented by lowering it into the actual duplicate overloads. Though I'm not even sure right now if such overloading is allowed in D. I haven't tried whether this happens with other compilers than DMD... Re: how to define my own traits On Monday, 27 March 2017 at 00:49:14 UTC, Moritz Maxeiner wrote: Have you tried it without the dummy parameter on the example given in the bug report [2]? I see, thanks for finding it! Looks a bit hacky but I can live with it. Indeed if I remove the argument from Phobos, Martin's example breaks again. Incidentally, everything keeps working if I qualify the function literal as (inout int = 0) pure how to define my own traits I've looked into Phobos to emulate it when defining my own trait template, and when I see this: module std.range.primitives; // ... })); I wonder, why that unused parameter (inout int = 0)? In my project () { /* ... */ } works the same for a custom trait. Re: really why module declarations? On Sunday, 26 March 2017 at 20:58:24 UTC, Adam D. Ruppe wrote: Module declarations are only optional in the most trivial case that is rarely useful in real world code. I recommend you ALWAYS use them (and always put a ddoc comment on them!), and moreover that you avoid top name modules (use `myproject.modname` instead of `modname`) to avoid conflicts. OK, I was already doing all this in the multi-file project. I was curious, but I guess it's long to explain the different things that can go wrong if one doesn't declare module names. Followup question: if I am inside module myproj.pack1.mod1 and want to import myproj.pack1.mod2... should I import myproj.pack1.mod2; or import mod2; ? really why module declarations? I've perused both the spec[1] and Andrei's book, and the idea I get is that module declarations are optional, recommended only in case of file names not being valid D names.
But in the community (and Phobos) I see it's strongly recommended and used throughout. What's the reason? If the declaration overrides the path (provided the file is found) rather than enforcing path consistency by outputting a compile error, then what's the benefit of module declarations, if we have to be disciplined to keep it consistent with paths anyway? I'm busy starting my first big multi-file D project, thanks for any feedback! [1] Re: recommend Git GUI client for Linux? On Thursday, 2 March 2017 at 06:16:09 UTC, Patrick Schluter wrote: Here [1] is the official git page listing all GUI clients for different platforms. I use GitExtensions [2] and I like it a lot. It works very well, and all the complicated stuff can be done from the GUI interface and also from the command line. Patrick thanks for the great recommendation! I'm using GitExtensions now on Windows. In comparison I wonder how GitHub's desktop client is even allowed on the Internet. For Linux (Lubuntu) any recommendation among these? Re: GitHub detects .d source as Makefile? On Sunday, 19 March 2017 at 21:53:17 UTC, Seb wrote: FWIW this has been fixed by Martin last summer, but the people at GitHub aren't very responsive. The PR is still pending :/ More info: Thanks for the info. Whole thing looks beyond me, plus I'm new to GH so not sure what's polite, but I'll try to remember pinging after a while whenever no one does. In the meantime my file is finally ok, after I added an override in the .gitattributes file.* It looks like there was a time delay from the language being re-classified by the GitHub Linguist until it was processed anew by the appropriate highlight parser. Even though the majority language stat for the repository had updated instantly.
* Fyi these are the lines I added to my .gitattributes file:
*.d linguist-language=D
*.di linguist-language=D
Re: Error: out of memory On Sunday, 19 March 2017 at 20:50:50 UTC, Gand Alf wrote: just use DMD with the -m64 parameter ;) then you should get an x64 DMD No, at least afaik, then you tell DMD to make an x64 exe, but DMD itself (this particular Windows version) is still a 32-bit exe. Re: Error: out of memory On Saturday, 18 March 2017 at 20:39:20 UTC, StarGrazer wrote: about 2GB before it quits. It also only uses about 12% of cpu. I have 16 GB total memory and about that free. Surely dmd could do a better job? Any way to get it to do such a thing like set the maximum amount of memory it can use? Any 32-bit process gets 2 GB of memory space, regardless of how much physical memory you have. If you used a 64-bit version of dmd your problems should go away... If the binary for Windows isn't available from the downloads here, you can try compiling it from source yourself... But I'm sure someone somewhere has done it already. Or you can try another compiler such as GDC, which is available for Windows x64. Also 12.5% probably means 100% of one of 8 cores in your CPU.
Or, better yet, submit a pull request with an appropriate fix. Thanks! It seems I can also override the language detection in .gitattributes, and it is now fixed :) I'll take a look around and file an issue if it doesn't exist. Probably not PR myself as I don't know Ruby. As a workaround, adding a "module …;" declaration to your file should help. You probably want to be doing that anyway. I know about this and I've read the spec and Andrei's book, yet I'm not entirely clear why it is such a mandatory practice. I'll ask in a new thread... [1] GitHub detects .d source as Makefile? So I have put my first code ever on GitHub (comments welcome :)) and GitHub seems to detect the wrong language, even if I'm not familiar with this GH feature. The repository itself ("etc") is flagged as being written in Makefile? Right now I have only two source files (and a couple of images and a pdf). If I look at syntax highlighting online, one of them, main.d, seems highlighted in D ok. But the other one, heatsim.d, is not correctly highlighted. Is this a known issue with D on GitHub? Should I report it I guess? How smart is GH that it doesn't look at the file extension? What happened? Re: first try On Friday, 17 March 2017 at 00:35:32 UTC, Philip Miess wrote: aceyducy.d You don't need string literals to be verbatim (r"") in order to insert newlines as in the code (without escape sequences). All string literals behave this way in D -- this is different from C# for example. Re: Sorting Assosiative Arrays and Finding Largest Common Substring On Thursday, 16 March 2017 at 16:02:13 UTC, helxi wrote: 1. .length is of type ulong Either use auto or, if needed, size_t. As Thedeemon says this is an alias of ulong on 64-bit and uint on 32. Re: Phobos function to check if files are identical? On Tuesday, 14 March 2017 at 18:26:52 UTC, flamencofantasy wrote:
import std.mmfile;
auto f1 = new MmFile("file1");
auto f2 = new MmFile("file2");
return f1[] == f2[];
Nice!
I don't have experience with memory-mapped files. What are the pros and cons? Re: Phobos function to check if files are identical? On Tuesday, 14 March 2017 at 08:12:16 UTC, Andrea Fontana wrote: First I would check if the files have different size or if they are the same file (same path, symlink, etc). Good idea. Good reason to have it in std.file. There might also be platform dependent shortcuts? Re: Phobos function to check if files are identical? On Monday, 13 March 2017 at 17:47:09 UTC, H. S. Teoh wrote: Binary comparison is easy. Just read the files by fixed-sized chunks and compare them. Follow up question... What is the best @safe way? Since File.byChunk() is @system. Just out of curiosity, I would rather use it and flag my code @trusted, although I guess there could be concurrency issues I have to take into account anyway... anything else? Re: code folding On Tuesday, 14 March 2017 at 00:38:12 UTC, Vladimir Panteleev wrote: If you have enough declarations in one file that they call for code folding, it may be better to move them to a separate module. Public imports and aliases allow doing this without breaking any code. [...] Generally speaking, I would recommend to simply avoid code folding altogether: Indeed good point: Re: Phobos function to check if files are identical? On Monday, 13 March 2017 at 17:47:09 UTC, H. S. Teoh wrote: Why it is not easy to do by hand? Sorry typo, I had intended to type "I know it is easy" Re: code folding On Monday, 13 March 2017 at 17:29:41 UTC, Inquie wrote: I have been using static if(true) { ... junk } Indeed #region is part of the C# specification, even if it has no effect on the code. (The specification does not say anything about folding/collapsing, just about "marking sections of code", although I guess most IDEs supporting it will follow the example of MS's reference implementation.) Short answer, D does not have this, as far as I know. 
I don't really think it's a good substitute practice to insert meaningless static if(true)... Even if you're really used to that feature, and even if you're right that it does the job and doesn't change the generated code. Unfortunately you can't get this folding easily (I'm sure some Vim wizard would come up with something). Instead, if you want to mark regions of code, that's what comments are for. You can't get the folding you want unfortunately (outside of naturally existing bracket pairs) but you can use your editor to search forward and backward in the file for whatever text, e.g. //region: foo// Re: Declaring interfaces with a constructor On Monday, 13 March 2017 at 02:15:21 UTC, David Zhang wrote: What it says on the tin. Is there a way to create interfaces with a constructor or must I use an abstract class. What do you want to do in your constructor? I can't think of anything that wouldn't change some state, either of the class (but interfaces aren't allowed to have fields either, precisely because they may not have state), or the global state (worse...). Just curious. Additionally, is there a way to force the linker to link a function in a class without an implementation with another that does have an implementation? I'm not sure if you mean the same as generating "interface files"? [1] Re: Can you fix this code to avoid using pointers? On Monday, 13 March 2017 at 14:47:20 UTC, H. S. Teoh wrote: On Sat, Mar 11, 2017 at 08:07:39PM +, XavierAP via Digitalmars-d-learn wrote: [...] But I still like the version with pointers ;) There's nothing wrong with using pointers in D. The fact that D alleviates most cases of (explicit) pointers is a testament to just how awesome it is. ;-) But that shouldn't deter anyone from using (explicit) pointers once in a while. In fact, D makes it such that it's a lot safer using pointers than in C/C++, mainly because it alleviates most of the dangerous use cases for pointers, so what's left is generally harmless stuff.
Unless your code for whatever reason is involved in heavy pointer hackery (like OS writing), but that's not a typical use case. I did it again ;)
enum params = [
    "Radius of disk in m",
    "Integration step size along radius",
    "Total integration time in s",
    "Integration step size along time",
    "Time step size for output",
    "Initial temperature in C",
    "Edge temperature in C",
    "Flow temperature in C",
    "2D conductivity in W K^-1",
    "2D diffusivity in m^2 s^-1",
    "Convective coefficient in W m^-2 K^-1"
];
real*[params.length] vals = [ , , , , , , , , , , ];
import std.conv: to;
/* ... */
foreach(i, param; params) {
    /* ... */
    *vals[i] = to!real(val);
}
I had the same requirement to make separate named scalars instead of an array; but also an array was handier to assign the values inside a loop; and I needed a readable and simple way to map between them. An array of pointers is perfect I think. And the only dangerous thing I could do with the pointers is arithmetic on them instead of on the pointed values, but: "pointer arithmetic not allowed in @safe functions." So all is good :) Phobos function to check if files are identical? It's not easy to do by hand of course, but I was wondering if there was one simple function taking two file names and just returning a bool or something like that. I haven't found it in std.file. If such a function doesn't exist in Phobos but there's a good implementation in some other library, I'm interested to know. Although this time it's for a unit test so I'd rather implement it in two lines than add a dependency. And otherwise, to write it by hand, what do you think is the best way? And in terms of performance? By chunks in case of a binary comparison? And what about the case of a text comparison? Thanks
Probably I should add prefix for private members, that is a question: what prefix should I use? Now I use prefix p_ (from the word property), but maybe prefix m_ is better and you need to use it for all private members? A single leading underscore is usually used to denote a private variable (names prefixed with two leading underscores are reserved for use by the compiler). If you need any prefix at all, a single underscore is enough, and it's also the tradition in other languages such as Python, C#... Whether a private member is exposed as a property or in some other way can be seen in the getter/setter; no need to classify it into the member declaration. C++ kind of requires a letter on top such as m_ simply because any identifiers starting with an underscore are (mostly, and certainly at the global scope) reserved (namespace pollution anyone?). It's really up to you, we won't call the police ;) Re: Best ways to declare associative arrays On Sunday, 12 March 2017 at 07:58:40 UTC, helxi wrote:
string[string] change(ref string[string] arg_array){
    //..
    arg_array["first"] = strip(readln());
    //..
    arg_array["second"] = strip(readln());
    //..
    return def;
}
Nicholas clarified why your declaration was wrong, but there are several strange things in your code that you may want to re-think. Also it looks to me that an associative array is not the most appropriate type for what you want to do. To call a function you just pass the names of the arguments, not their types. So simply change(test), NOT change(string[string] test) arg_array is an in-out (ref) parameter, but change() returns another value of the same type, def, not defined in your code, and which you do not use in main(). I think you may be interested only in changing arg_array, so the signature could be instead: void change(ref ...) What you seem to want from your associative array is to associate two strings, "first" and "second", with two values (strings from the user), and only two.
An associative array is more flexible than that, which is bad: you want your code to restrict you away from errors. For example, if you keep using an associative array you could, at the end of change(): assert(arg_array.length == 2); I wonder if it's not enough and better for you to use a plain array. Keys "first" and "second" are not more informative than numeric indices. You may also use the advantage that an array can be hard-typed as fixed-length if this is known at compile time (and if you don't declare it with new), so it restricts your code in the perfect way:
void change(ref string[2] arg_array) {
    arg_array[0] = strip(readln());
    arg_array[1] = strip(readln());
}
void main() {
    string[2] test;
    change(test);
}
Also another disadvantage of associative arrays is that they are not ordered, so if for example in main() you read through the values in test with a foreach loop, you may get the result in any order (second first, and first second is possible). A simple array will keep order 0, 1. If you were so bummed about using 0-1 instead of "first"-"second" you could define:
enum lineKey :size_t { first = 0, second }
void change(ref string[2] arg_array) {
    arg_array[lineKey.first ] = strip(readln());
    arg_array[lineKey.second] = strip(readln());
}
But at least to me it looks worse. As a programmer you already know that the first index is 0 and 1 comes next.
As is always the case when these questions come up. But I still like the version with pointers ;) Re: Can you fix this code to avoid using pointers? On Saturday, 11 March 2017 at 13:44:30 UTC, Satoshi wrote:
void calc(in double[] array...) {
    foreach (x; array) { }
}
To do what I want it should be foreach(ref x; array) -- or const ref. But also I don't want to modify the function signature, certainly not in this way. In another situation yes, but the arguments are very different magnitudes, for example temperatures, conductivity, heat power, etc. They should be separate arguments with self-documenting names. And it's not worth the bother to define a struct type for them as a set. Especially since this is an internal implementation "problem" that shouldn't affect the outer interface. I know there's something in std.algorithm for this, but afaik it would be relatively bloated compared to this pointer solution. In C++ I would use a &reference instead of a *pointer, but I actually think C++ references are redundant with pointers, not much safer, and plain confusing. I guess it's not a common case, because if a type is non-trivial to copy it should probably be a class, which is already assigned by reference, so I wouldn't need the pointer/ref.
No language wants to do this, it goes beyond what is referred to as the community. But yeah, look what happened to Python 3.x Re: DMD default safety command line switch On Friday, 10 March 2017 at 01:13:26 UTC, XavierAP wrote: What behavior? Anyway my question is answered, thanks :) What behavior is a rhetorical question, meaning that I don't really want it to be answered 0;) Re: DMD default safety command line switch :) Re: @safe console input? On Thursday, 9 March 2017 at 23:55:35 UTC, Adam D. Ruppe wrote: Just wrap it in a @trusted function. I knew this answer already of course ;) but I take it as implying that there is no other way. Actually I really wonder why std.stdio.readln() itself is not flagged @trusted. I wouldn't think such a function skips any buffer bounds checking, even in -release -- since it has to wait for user input anyway, performance is no issue. can I overload operators as extension methods? The same way as T.foo() is lowered to foo(T) if no such member is defined inside the type. It would allow me to extend 3rd party types with operator notation without wrapping them. After trying and reading the specification, it looks like not, but just wanted to confirm. Thx @safe console input? I was surprised by a compiler message saying that std.stdio.readln() (and specifically the overload without arguments) is not safe but @system. Actually I was using it only to pause execution until the user presses Enter. So how else could I do this within a @safe environment? And more generally, is it possible to get user console input in a @safe way? DMD default safety command line switch Andrei's 2010 book states that the default safety level can be changed from @system to @safe by means of a -safe command line switch, in the case of the DMD compiler. Now I've tried it and it's not recognized. Was this feature removed on purpose? I could imagine that. The default safety level keeps being @system, right? PS I've found this old thread...
I'm looking for a somewhat shorter answer to read ;) Re: DUB specify version identifier on command line? On Wednesday, 8 March 2017 at 02:15:00 UTC, Nicholas Wilson wrote: Setting version identifiers is done by the `-version=ident` command line flag (this is equivalent to `version = ident` at source level). This should therefore be settable by the "dflags" dub configuration setting. The way I would do it would be to have a custom configuration that sets "dflags" : [ "other normal flags", "-version=MemoryDebug"] and then build the MemoryDebug dub configuration. Hope that makes sense. Yes... Although I was looking for a command line parameter for dub, not dmd, but apparently it's impossible. So thanks for pointing to the DFLAGS possibility, it has worked. :) I still prefer this for building different versions rather than changing the dub.json file every time. Thanks! Re: Best memory management D idioms On Tuesday, 7 March 2017 at 18:21:43 UTC, Eugene Wissner wrote: To avoid this from the beginning, it may be better to use allocators. You can use "make" and "dispose" from std.experimental.allocator the same way as New/Delete. OK I've been reading on std.experimental.allocator; it looks really powerful and general, more than I need. I see the potential but I don't really have the knowledge to tweak memory management, and the details of the "building blocks" are well beyond me. But even if I don't go there, I guess it's a good thing that I can change my program's allocator by changing one single line or version assigning theAllocator, and benchmark the results among different possibilities. I see the default allocator is the same GC heap used by 'new'. Just for my learning curiosity, does this mean that if I theAllocator.make() something and then forget to dispose() it, it will be garbage collected the same once no longer referenced? And so are these objects traversed by the GC?
I've also looked at mallocator, [2] can it be used in some way to provide an allocator instead of the default theAllocator? As far as I can tell mallocator is not enough to implement an IAllocator, is there a reason, or where's the rest, am I missing it? [1] [2] DUB specify version identifier on command line? I'm talking about the conditional compilation keyword "version", not about version strings. I've looked in DUB's help and reference [1][2] but can't seem to find how to solve my problem. On the command line it seems to be possible to specify debug identifiers, but not version identifiers. [3] It came up while trying to do something specific, so I'll explain this. I'm learning and trying things, and I was playing with dlib.core.memory. Before moving to the next thing I wanted to try printMemoryLog(). This outputs memory debugging info, only when compiled with version(MemoryDebug) [3]. I'm working with Visual D. However for 3rd party package dependencies it's simpler to compile them with dub, and have VS find the lib for my client project. Without the version identifier, my program works: compiles, links to dlib, and runs ok. Then I instruct VS to define version(MemoryDebug) for some configuration. No matter how I re-run dub to build dlib, I get linking errors from the additional functions defined in the imported dlib source which aren't found in the binary lib. I guess it's also possible to specify this by adding to the dub.json file [2], but for me it's more flexible if I can leave it alone and compile different versions from the command line alone. But if the json is the only way please let me know. Otherwise what am I missing? Thanks in advance. [1] [2] [3] [4]
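The configuration-based approach Nicholas suggests in the DUB thread above can be sketched as a dub.json fragment. This is only a sketch: the project name and configuration names here are made up, and the exact schema should be checked against the DUB package format documentation.

```json
{
    "name": "myproj",
    "configurations": [
        {
            "name": "default"
        },
        {
            "name": "memdebug",
            "dflags": ["-version=MemoryDebug"]
        }
    ]
}
```

With something like this in place, `dub build --config=memdebug` would pass `-version=MemoryDebug` to the compiler, while a plain `dub build` uses the default configuration — so different versions can be built without editing dub.json each time.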
https://www.mail-archive.com/search?l=digitalmars-d-learn%40puremagic.com&q=from:%22XavierAP+via+Digitalmars%5C-d%5C-learn%22&o=newest&f=1
- Overview
- Creating a subgroup
- Membership
- Mentioning subgroups
- Limitations

Subgroups

Introduced in GitLab 9.0. GitLab supports up to 20 levels of subgroups, also known as nested groups or hierarchical groups. By using subgroups you can do the following:

- Separate internal / external organizations. Since every group can have its own visibility level (public, internal, or private), you are able to host groups for different purposes under the same parent group. For more information on allowed permissions in groups and projects, see visibility levels.

Overview

A group can have many subgroups inside it, and at the same time a group can have only one immediate parent group. The setting can be changed for any group by:

- A group owner. Select the group, and navigate to Settings > General > Permissions, LFS, 2FA.
- An administrator. Navigate to Admin Area > Overview > Groups, select the group, and choose Edit.

For more information check the permissions table. For a list of words that are not allowed to be used as group names see the reserved names. Users can always create subgroups if they are explicitly added as an Owner (or Maintainer, if that setting is enabled) to an immediate parent group, even if group creation is disabled by an administrator in their settings. To create a subgroup:

- In the group’s dashboard click the New subgroup button.
- Create a new group like you would normally do. Notice that the immediate parent group namespace is fixed under Group path. The visibility level can differ from the immediate parent group.
- Click the Create group button to be redirected to the new group’s dashboard page.
- Follow the same process to create any subsequent groups.

Membership

When you add a member to a group, that member is also added to all subgroups. Permission level is inherited from the group’s parent. This model allows access to subgroups if you have membership in one of its parents. Jobs for pipelines in subgroups can use runners registered to the parent group(s). This means secrets configured for the parent group are available to subgroup jobs.
In addition, maintainers of projects that belong to subgroups can see the details of runners registered to parent group(s). From the members list of group four you can deduce the following things:

- There are 5 members that have access to group four.
- User 0 is a Reporter and has inherited their permissions from group one, which is above the hierarchy of group four.
- User 1 is a Developer and has inherited their permissions from group one/two, which is above the hierarchy of group four.
- User 2 is a Developer and has inherited their permissions from group one/two/three, which is above the hierarchy of group four.
- For User 3 the Source column indicates Direct member, therefore they belong to group four, the one we’re inspecting.
- Administrator is the Owner and member of all subgroups and for that reason, as with User 3, the Source column indicates Direct member.

Members can be filtered by inherited or direct membership.

Overriding the ancestor group membership

To override a user’s membership of an ancestor group (the first group they were added to), add the user to the new subgroup again with a higher set of permissions. For example, if User 1 was first added to group one/two with Developer permissions, then they inherit those permissions in every other subgroup of one/two. To give them Maintainer access to group one/two/three/four, you would add them again in that group as Maintainer. If you then remove them from that group, their permissions fall back to those inherited from the ancestor group.
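The "add them again with higher permissions" override described above can also be done through the GitLab Members REST API. The sketch below assumes placeholder values: the host name, the `<your_access_token>` token, the user ID 42, and the group path are all made up for illustration; `access_level=40` is the Maintainer level in GitLab's access-level numbering.

```shell
# Add user 42 directly to subgroup one/two/three/four as Maintainer
# (access_level=40), overriding the Developer access inherited from
# an ancestor group. The group path is URL-encoded in place of its ID.
curl --request POST \
     --header "PRIVATE-TOKEN: <your_access_token>" \
     --data "user_id=42&access_level=40" \
     "https://gitlab.example.com/api/v4/groups/one%2Ftwo%2Fthree%2Ffour/members"
```

Deleting that membership again (a `DELETE` on the same endpoint plus the user ID) makes the permissions fall back to the inherited level, matching the behavior described above.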
https://docs.gitlab.com/13.12/ee/user/group/subgroups/
I'm working in Unity 3.5.7f6, and I created my own script called GameManager, a standard MonoBehaviour script, added it to an object, and I noticed that I got a cog wheel as an icon for the script (on the far left side): Normally you get what looks like a white paper with a corner tucked in: But I didn't think much about it since the script worked as it should and everything was fine while working in the editor. But then I noticed, when I play my game as a standalone (*.exe), that the public values which I set through the inspector didn't get stored, and the values were always set to the default values. No matter what I did. Same thing when using a custom editor script and a SerializedProperty variable. So I created a new script, with the exact same code, but with a different name (GameplayManager), and that worked as it was intended, both in editor and as a standalone. This time I didn't get the cog wheel, but the standard icon. So my question is, what does this cog wheel mean, what does it do, and how can I remove/change the icon if I do get it? Maybe the name (GameManager) is somehow flagged in Unity to be something special. Does anyone have an idea? bump !! I have the same problem Any insights on this? I can get the gear icon with a script called GameManager in 4.6. I haven't tried a build. I'm just really curious if it means anything. Answer by Flickayy · Apr 16, 2015 at 11:55 PM Sorry to bump an old post, but I have recently run into this confusing issue. From my experience (and I could be wrong, or not completely right), the gear icon has always appeared (to me anyway) when I have created a ScriptableObject class. My conclusion: Somewhere in Unity there is a ScriptableObject named GameManager, not accessible to us. This is why I believe that the Unity tutorial "Writing The Game Manager" has encapsulated it into a namespace, to remove any conflicts within the program itself.
I hope this helps anyone else who has encountered this issue. I believe that is correct. The script will still work if it's not put in a separate namespace though, it's just the icon that is determined wrong. Answer by dpoly · Mar 01, 2019 at 04:33 AM This is baked into Unity, but it's a fake problem. It arises only when you put your user-written code into the global namespace. Don't do it! The real answer is that all user-written script code should be in its own namespace -- that's what the feature is there to do. It would help if the Unity templates adopted that.
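Following dpoly's advice, the fix is simply to wrap your script in a project namespace. Below is a minimal sketch, not taken from the thread: the namespace `MyProject` and the `startingLives` field are made up for illustration. (The file must still be named after the class, e.g. GameManager.cs, for Unity to attach it to objects.)

```csharp
using UnityEngine;

// Hypothetical project namespace: keeps this GameManager from colliding
// with any type of the same name inside Unity itself, so the script gets
// the normal script icon instead of the gear/cog icon.
namespace MyProject
{
    public class GameManager : MonoBehaviour
    {
        // Public fields are still serialized and editable in the Inspector.
        public int startingLives = 3;
    }
}
```

The class is referenced from other scripts as `MyProject.GameManager` (or via a `using MyProject;` directive), and the Inspector behaves exactly as it does for any other MonoBehaviour.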
https://answers.unity.com/questions/678637/cogwheel-icon-in-inspector.html
CC-MAIN-2021-25
refinedweb
540
72.56
If it may be a wiki-wide problem, you can also check Wikia:Report a problem. See also: #uncyclopedia on irc.freenode.net - Uncyclopedia Content Problems: Report/discuss problems involving unfunny/plagiarized/bad pages at Uncyclopedia: Pages for deletion or Uncyclopedia: Village Dump. - Reporting Vandalism: Report vandalism to the Ban Patrol. Notice: On special occasions (about two times a month) the Main Page gets a special theme for the day. If it looks themed to be weird/bad/broken/assy, don't panic if it stays that way for a day or less. If it stays that way for more than a day and a half, report it. Issues New questions go on top! Pages stuck on Special:BrokenRedirects I serviced broken redirects yesterday, deleting all I could, but there are eight entries there again today that are either struck through or are already deleted but won't leave the report. Spıke ¬ 18:46 8-Mar-13 Should there be pop-up ads on images? When I mouse over...well, quite a few diff. images today, on a number of diff pages, (relevant) ads appear: Nottingham Forest shirts, for example, on the Nottingham page. I use ad-block plus and a few other little ad-blocking programs, so these are the first pop-up ads I've seen in yrs. I don't like it. Do the advertisers have permission to do this or is it 'vandalism'? I've not been on site for a good while, so maybe things have changed in my absence? Thanks anyway. Codeye (talk) 16:36, March 8, 2013 (UTC) - I went to Codeye on his talk page and he recalled that he is currently using a borrowed machine....Unless someone at Wikia wishes to report a policy change, or some other user sees these pop-ups too, this problem should be considered solved. Spıke ¬ 18:46 8-Mar-13 That fancy new IPv6 stuff has an IPv6 address of 2001:470:1d:28::6, but it is not reachable. 
The last hops end up in Hurricane Electric's network: 9 tserv1.tor1.he.net 76.535 ms 83.345 ms 89.022 ms 10 tserv1.tor1.he.net 91.558 ms !A 92.239 ms !A 82.673 ms !A API error: "text search is disabled" Hi, I've written a small bot which uses MediaWiki web API () for querying content in Uncyclopedia. It used to work in the past but now the server is always returning the following error: <error code="srsearch-text-disabled" info="text search is disabled" /> Can you guys please enable it and keep it as such? Thanks in advance. —The preceding unsigned comment was added by 186.205.128.47 (talk • contribs) - This is not something that Uncyclopedia has control over. Our API is governed by out host, Wikia. Please use this page to contact Wikia. -- Brigadier General Sir Zombiebaron 03:34, August 31, 2012 (UTC) NICE BOAT ZH-TW UNCYCLOPEDIA IS DOWN COMPLETELY.........R.I.P. ZH-TW UNCYCLOPEDIA 2005-2012.--59.126.178.14 01:03, April 25, 2012 (UTC) Broken image on Uncyclopedia:No Adverts The image on Uncyclopedia:No Adverts has stopped working. Is there a backup image on UnCommons? I don't seem to recall exactly how it looked (I believe it was photoshoped a little) but the folks at the Spanish Wikipedia might have borrowed it ([1]). MadMax (talk) 16:13, March 26, 2012 (UTC) ROBERTO FUCK AGAIN!!!!!!!!!!!!!!! As my title said...--Q7gcosmolite (talk) 00:51, March 11, 2012 (UTC) - Go yell at carlb. Don't expect a response, though. He doesn't believe in communicating with people. ~ 01:09, 11 March 2012 Delete my account Want to delete my account here. How?! --Xubnormal 8:00, January 7, 2012 (UTC) - Generally on mediawiki, you can't. But you should ask sannse or some other Wikia lackey; even if the account can't be deleted, if you're trying to avoid folks finding you, or something, they could change the name to something else less... you. 
~ 16:48, 7 January 2012 Article does not exist, yet it exists The link for my newly-posted article for Microsoft Office Source Code exists this way --> [2] but doesn't exist this way --> [3]. I would rather that it exists both ways if possible. Phrank Psinatra 15:08, December 25, 2011 (UTC) - I redirected it for you. -- Sir Xam Ralco the Mediocre 15:19, December 25, 2011 (UTC) Problem with Special:WantedPages Special:WantedPages is listing articles which already exist. I haven't checked all of them but they seem to be mostly UnProject-related pages. MadMax 01:14, December 19, 2011 (UTC) - Could be leftover detritus from the big namespace thingy. Blame Wikia! -- TKFUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUCK 06:14, January 10, 2012 (UTC) Image Removal I own this image: it was used without my consent and I insist that it be taken down. - First of all, that's on the other wiki, so I'd take it up with them. Second of all, can you prove you own the image? -- 20:09, September 7, 2011 (UTC) - Magic Man's right, there's very little we can do about taking down an image cross-wiki. Try talking to one of the administrators on that site or, if you haven't brushed up on Portugese, sannse, who is Staff and all-powerful. -- TKFUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUCK 07:25, November 2, 2011 (UTC) UnNews link Why did you make it so difficult to get to the news page from the main page? If you bring up the main page and say quick, how do I get to the news page, you end up having to scroll down to the bottom of the page (2 pages down on my system @ 1200x1600) to get to it. There used to be a link at the top of the page but that seems to have been removed for some reason(mistake, idiocy). - If you try clicking the "UnNews" link in at the bottom of the "In the news" segment of the main page or the "Current events" link to the side, you'll notice that they'll take you to what you're referring to as the news page. 
—Sir Socky (talk) (stalk) GUN SotM UotM PMotM UotY PotM WotM 22:15, 3 August 2011 {{nologo}} There seems to be a problem with this. The logos overlap each other (the original is still there). Is this just a problem with Vector or is it a problem for all skins? --Gamma287 ☭Tetяis? 06:56, July 25, 2011 (UTC) - I or someone else may have broken something in the site js somewhere along the line... or possibly your browser is just a piece of joke. Or maybe both. You don't use firefox by any chance, do you? ~ 08:29, 30 July 2011 - Occasionally. I'm using Chrome now. Firefox's plugins keep crashing (7/10 times). I find Google's browser, err, efficient... --Gamma287 ☭Tetяis? 23:20, August 8, 2011 (UTC) - Turns out it isn't vector... --Gamma287 ☭Tetяis? 01:33, August 22, 2011 (UTC) Category issues with UnNews? I notice a lot of UnNews articles listed on Special:UncategorizedPages. Just thought I'd check here before readding categories to them. MadMax 17:17, June 10, 2011 (UTC) - There's a similar problem on LonelyPages. You've got a greenlight, if you haven't done it already yet, knowing you. -- TKFUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUCK 02:58, June 17, 2011 (UTC) It looks like every UnNews article has been orphaned (and making LonelyPages unusable in the process). I'm not sure what to do about it aside from listing every individual article on one page. MadMax 10:04, July 24, 2011 (UTC) - Apparently the old UnNews Archive was VFD'd last month. I'm assuming its ok to restore the old system at least for the time being? MadMax 10:41, July 24, 2011 (UTC) - Certainly. Not sure why Skullthumper (I seem to remember it was him) had such an urge to delete it. -- TKFUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUCK 20:02, July 24, 2011 (UTC) - Damn, I put this together months before and only remembered it now, but this was actually the fault of UnNews suddenly being counted as articles because of the whole content namespaces thing, not because the archives were deleted. 
-- TKFUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUCK 11:39, July 25, 2011 (UTC) Yeah he recently left me a message about that. Would the digest be worth keeping around as a backup or should I go ahead and delete them? MadMax 07:42, July 30, 2011 (UTC) - Doesn't seem like it would be; if folks really need a backup, the normal way of navigating categories is still an option, as that's really all any of the setups would do, just put a prettier face on it. ~ 08:30, 30 July 2011 Ok, thanks. MadMax 16:29, August 6, 2011 (UTC) What is wrong with this website? This may just be the weirdest website on the internet. Why is it like this? - What's wrong with it? It sucks, that's what. Weirdest? In your dreams! Why is it like what? Sucky? I don't know. --Roman Dog Bird 19:29, May 29, 2011 (UTC) Media Wiki Setup I posted something like this on Chiefjustice's talk. Can someone tell me wikia or is it me? --Gamma287 ☭Tetяis? 11:26, May 5, 2011 (UTC) - I just reload the page after about 10 seconds, or an hour later, depending on whether I remember the tab I was in, or get to do something else in the mean time, but it is a peculiar thing, that started happening to me about 30 minutes ago, too. -- DameViktoria - (Contribs) - (Talk) - (Block log) 11:30, 5 May - The two of you have posted illegally. You first have to do a Media Wiki Setup. Please turn yourself in to the nearest wikia official. Thank you for your time. Aleister 12:16 5-5-'11 I can't make an account! The thing won't let me create an account! I've been trying to make one since september, but it won't let me! HELP! zh-tw is down The front page is fine but when you open articles, it will show an error message (the picture). It has been for some days.--Sunny周 11:54, April 22, 2011 (UTC) - Unfortunately, most of the wikis hosted by carlb seem to be having this problem to some extent, with apparently random pages and actions failing. 
It looks like they just need some database poking, but as said hoster guy has not been responding to attempts to contact him of late, I'm not really sure what anyone can do. ~ 17:33, 22 April 2011 Problem editing protected page I was trying to remove a deleted category from Papa_Smurf and apparently the word is listed on the spam protection filter and prevents the article from being edited. Is there any way around this or does it have to be removed from the list? MadMax 10:55, March 29, 2011 (UTC) - Probably want to talk to Mordillo about this one, as he was the one who added it, but to me, the existence of that page at all seems like sufficient reason to remove it from the filter at this point. Not that anyone's apt to care what I think, especially about these things... ~ 11:10, 29 March 2011 Ok thanks. MadMax 20:59, March 29, 2011 (UTC) THIS MAY BE IMPORTANT What is mirror.uncyc.org??? its says that its Uncyclopedia, the content-free encyclopedia but it seems like it may have a virus... By Midnight89 - It's a mirror, a... replica or copy, I suppose you could say, hosted by a third party, that keeps backups imported from here with a script. It's rather useful to have around when Wikia screws things up, for instance. ~ 18:39, 4 March 2011 This is a classic example of a scum bastard using a wiki for hate speech. So... Is there any way I can see what images I have uploaded? --~ 15:55, January 14, 2011 (UTC) - look at your logs. ~ 16:41, 14 January 2011 Chromium image from commons: [4] license description not available. Please delete this image! This is a copyright violation! --89.246.45.39 12:22, January 7, 2011 (UTC) - I've followed the link you provided and it appears that this image was covered by CC 3.0 license, which was not included when the file was originally uploaded. I have added further licensing details and emailed through to the original uploaded on creative commons, and am currently waiting upon a response. 
Reading through the comments on the original upload though he does not restrict the usage of the image in a CC area like uncyclopedia, and asks only that if it appears in print that his name is included in as the originator. It's also under a GNU and fair-use clause, has not been copyrighted, and it's inclusion here is covered by the fact that it is used for parody purposes. However, if you are able to find an alternate image that is not covered by copyright law I'm happy to upload it in the place of this one, but we do prefer to use fair use images where ever possible. • Puppy's talk page • 00:40, June 5, 2009 Sunday, 14:33, Jan 9 2011 UTC I don't know how to report a problem! Oh well... what's up with mirror.uncyc.org? is it legit? The Hell...? Why was our GOAT page of our website removed by some faggot named Roman Dog Bird? I demand it back with all of it's content. - "Your" website? Perhaps you should visit VFD more often. It was removed by committee vote. You might find a copy in someone's userspace if it had any merit at all, but don't recreate the deleted page, rewrite it completely. Then get it Pee Reviewed, to avoid the article getting deleted again by a different admin. -- Simsilikesims(♀UN) Talk here. 06:02, January 10, 2012 (UTC) - You do have one thing correct, though. Roman Dog Bird is a faggot. -- TKFUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUCK 06:25, January 10, 2012 (UTC) - I was initially going to leave a lengthier response (nobody voted for this on VFD, I deleted it on sight), but all I have to say is Wikipedia doesn't think you're relevant, just because this site is a piece of shit doesn't mean we'll take it either, and yes indeed I am a fag. How long did it take for you to come to that conclusion? --Roman Dog Bird 08:14, January 10, 2012 (UTC) Th Down! Not even show error message! Absolutely can't access-able! 
--61.90.110.10 09:41, March 30, 2012 (UTC) - Same as above.Nice boat.[5]--Q7gcosmolite (talk) 13:56, April 24, 2012 (UTC) Search box broken When I put a topic into the search box, say "Cow Dung Parade", it now goes to page of lists which says at the top "There is an article named Cow Dung Parage on this wiki" instead of taking me to the article. I then have to click on the article itself to get to it. Broken. Searching for a fix. Aleister 11:19 26-4-'12 - It appears to be a Wikia issue. Try Special:Contact and nag them about it. It's it's across all Wikia though it may rectify over the next 24 hours • Puppy's talk page • 11:38 26 Apr - Even when it was working search sucked. I use google to search Uncyc, it actually does it better. MrN Fork you! 11:41, Apr 26 - Really? It always has worked fine for me, with the giving of options and all. I have my search box over on the left side and not on top, using the old skin. And I'll copy and paste my question over at the Contact, thanks. Nag nag nagging I will go. Aleister 11:57 26-4-'12 - Oh, it's changed. It did work (kinda) but now is totally broken. Probably turning on patrolled edits caused the generator to overheat and misfire or something I expect. But yea, I search Uncyc using google. Just put "Uncyclopedia whatever you want" into google and it finds a lot more useful stuff sometimes. MrN Fork you! 12:04, Apr 26 - I'm also on ye olde skin, but the search function is the same on both. It seems to be being fixed as we speak though. The [[cow thing wasn't working 20 minutes ago, but it is now. I actually have saved a bookmark which is Googke search with test site:uncyclopedia.wiki.com and just replace "test" with whatever I'm looking for. For image searches especially it's much more reliable. • Puppy's talk page • 12:11 26 Apr - Dittoes to you all. This is how I gain access to any Uncyclopedia page not on my watchlist, and it's broken. It was briefly broken last year. Is anyone still on speaking terms with Wikia? 
MrN is also correct on the separate issue that the Uncyclopedia search field has never worked as well as Google. Spıke ¬ 23:48 30-Apr-12 Not a bug but a feature Memory Alpha, where it's also broken, points users to a page where Wikia states that it has elected to make it work this way: that users typing the exact name of a page into the search field probably want to do a search beyond the exact match and not be taken quickly to the page they selected. Spıke ¬ 10:00 1-May-12 JUST GOT EVEN WORSE The format of the list of search matches has now changed. If you search for the exact name of an existing page, it no longer calls that to your attention, nor necessarily regards that page as the best match. If you deliberately type a nonexistent page, it no longer asks you if you want to create it. The only option is to hack the URL yourself. Search is now bleepin' useless! especially because, as we agree above, you would never actually use the search bar to search Uncyclopedia. Spıke ¬ 21:08 10-May-12 A fix For Mozilla, it works to manually edit the uncyclopedia-en.xml file in two places to replace the call to Special:Search with a normal URL. (I also change its name in two places.) This results in a search bar that assumes what you entered was the exact name of an Uncyclopedia page you want to see or create. Spıke ¬ 23:00 16-May-12 Reply from Dopp of Wikia I used Special:Contact to advise Wikia of this conversation and Dopp replied: - Thanks for contacting Wikia and reporting this discussion to us. Yes, we're in the process of redoing our search tool, and one component of that is to send people to results pages instead of actual articles. This is not a bug, and it's unlikely to be reverted. Other related issues, however -- such as the quality of results, what appears on a results page, and how effective the search suggestions dropdown menu is -- are under constant development, and we are improving them as we go. 
- Uncyclopedia is in a particularly unusual situation, since many of the changes we're making are for the standard Wikia skin (not monobook), and you're using a heavily modified version of monobook. - The original version of monobook actually provides two buttons: "go" and "search". I understand it would change the style of your theme to implement such a solution, but it's an option to you. (It's not an option, however, to *only* provide a "go" button. We're working to improve the quality of our search results pages, and it's important that users be able to find them.) - Regardless of your approach, you can expect to see further improvements as we go. Thanks for being part of Wikia! - Best, - Dopp Fixed? The above dialogue happened a few days before I reported it. This afternoon the search bar is back to normal. "Cåm" on Dopp's blog writes ("6 days ago"): - You can now change it back to a "go" functionality through your preferences. If Uncyclopedia still have the go enabled for all users it'll be some bit of code in their MediaWiki:Wikia.js that staff have yet to notice. Essentially, it shouldn't do that, and if you should feel the need to you can contact staff to have them look into it. And I see that Preferences, which now has an "Under the Hood" tab (groan!) has a check-box for "Enable Go-Search". Spıke ¬ 00:00 28-May-12 Left side of the screen Recent changes and other options now is sporadically not showing up on the left side of the screen. What is going on??? -- Simsilikesims(♀UN) Talk here. 21:38, May 11, 2012 (UTC) - They appear for me way down on the left, beyond the page content. Lyrithya? Answers? Spıke ¬ 22:24 11-May-12 - PS--Now see the Forum. Spıke ¬ 13:33 12-May-12 - Issue is fixed for me now (at least when using Firefox 12). -- Simsilikesims(♀UN) Talk here. 15:28, May 12, 2012 (UTC) russian absurdopedia Russian absurdopedia moved to absurdopedia.net . You have old link in left side . 
- Also, new user GermanPug has added comments in two places that we ought to link to the German "Stupidedia". I get the impression Germany has two analogs to us, as indeed there are alternative websites in the US. The structure of the "In other languages" list in our left margin, and the coding an article uses to assert an interwiki link, assumes there is exactly one Uncyclopedia equivalent per language, and this might have to be rethought. Spıke ¬ 12:40 2-Jun-12 --188.242.128.102 19:33, June 2, 2012 (UTC)if you edit it. they'll send you a message thats old cite please use absurdopedia.net May be they forgot to warn you. Sirmolenko. WALT DISNEY take it down.now.Walt disney was not a fucking sexist, and you're forgetting all the great things he ever fucking did for us. WHY THE HELL ARE YOU SAYING THAT WALT HAD SEX 2,000 TIMES YOU DUMBASSES! This guy is one of my idols. I don't care a bit about all these shit people write about him! R.I.P. Walt Disney, and thanks for all your wonderful, timeless animated movies!!! GODDAMIT THIS FUCKING SITE JUST FUCKING RUINED MY CHILDHOOD YOU MORONS!! - waltdisneyfan999 - Umm... If you're offended by the article, don't read it. Or you try nominating it for deletion. It's not very good anyway. -- Sir Xam Ralco the Mediocre 22:21, June 21, 2012 (UTC) - Also, this page is intended for technological problems. -- Sir Xam Ralco the Mediocre 22:24, June 21, 2012 (UTC) User preferences doesn't save Why come this be? I would like to be male (and not an "undisclosed gender") because it's more fun, and I get a penis.--Timthe3nchanter (talk) 05:15, April 2, 2013 (UTC) - Fucked if I can work that one out. Mine seem to save okay. It may be server lag, but if not blame Wikia. • Puppy's talk page • 08:20 02 Apr 2013 - Well, shit, I blame Wikia. Doesn't work in Chrome or Firefox so far. Thanks for the link. 
Timthe3nchanter (talk) 16:57, April 2, 2013 (UTC) - This may not be it, but there is a lot of stuff on that page regarding the need to flush the cache. Namely, your browser might show you the version of the page as of before you changed your preferences, which means that your old preferences are still in effect. If you are in Firefox, Ctrl-F5 should clear this. Nevertheless, it's also true that Wikia is continually toying with new things and breaking stuff in the process. Spıke ¬ 17:15 2-Apr-13
ncl_entsr man page

ENTSR — Called by a user to set recovery mode in NCAR Graphics.

Synopsis

CALL ENTSR(IROLD,IRNEW)

C-Binding Synopsis

#include <ncarg/ncargC.h>

void c_entsr(int *irold, int irnew)

Description

The FORTRAN statement "CALL ENTSR(IROLD,IRNEW)" is normally used to enter recovery mode and save the previous value of the internal error-recovery flag, but it can also be used to exit from recovery mode and save the previous value of the flag, or to just get the value of the flag, without changing it. If recovery mode is turned off by a call to ENTSR at a time when the internal error flag is non-zero, this is treated as a fatal error; the error message is printed, the dump routine FDUM is called, and a STOP is executed.

The arguments of ENTSR are as follows:

- IROLD (an output variable of type INTEGER)
- Receives the old value of the internal flag that indicates whether recovery mode is in effect or not. In the former case, the returned value will be a 1; in the latter case, it will be a 2. Normally, the value returned is saved for a later call to RETSR.

- IRNEW (an input expression of type INTEGER)
- Specifies the desired new value of the internal flag: the value 1 turns recovery mode on, the value 2 turns it off, and the value 0 leaves the flag unchanged (which allows one to simply retrieve its current value).

C-Binding Description

The C-binding argument descriptions are the same as the FORTRAN argument descriptions.

Examples

Use the ncargex command to see the following relevant examples: tseter, arex02.

Access

To use ENTSR or c_entsr, load the NCAR Graphics libraries ncarg, ncarg_gks, and ncarg_c, preferably in that order.

See Also

Online: eprin, errof, error_handling, fdum, icfell, icloem, nerro, retsr, semess, seter, ncarg_cbind

University Corporation for Atmospheric Research

The use of this Software is governed by a License Agreement.
ListMetrics

List the specified metrics. You can use the returned metrics with GetMetricStatistics to obtain statistical data.

Request Parameters

For information about the parameters that are common to all actions, see Common Parameters.

- Dimensions.member.N
  The dimensions to filter against.
  Type: Array of DimensionFilter objects
  Array Members: Maximum number of 10 items.
  Required: No

- MetricName
  The name of the metric to filter against.
  Type: String
  Length Constraints: Minimum length of 1. Maximum length of 255.
  Required: No

- Namespace
  The namespace to filter against.
  Type: String
  Length Constraints: Minimum length of 1. Maximum length of 255.
  Pattern: [^:].*
  Required: No

- NextToken
  The token returned by a previous call to indicate that there is more data available.
  Type: String
  Length Constraints: Minimum length of 0. Maximum length of 1024.
  Required: No
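The NextToken contract described above works by repetition: issue the call, collect the returned Metrics, and call again with the returned token until no token comes back. The sketch below is not AWS code — fake_list_metrics and the metric names are invented stand-ins for the real service call (in boto3 that call would be cloudwatch.list_metrics):

```python
# A toy model of ListMetrics paging. fake_list_metrics stands in for the
# real service call; it returns at most PAGE_SIZE metrics per request and
# includes a NextToken only while more data remains.
PAGE_SIZE = 2
METRICS = ['CPUUtilization', 'NetworkIn', 'NetworkOut',
           'DiskReadOps', 'DiskWriteOps']

def fake_list_metrics(next_token=None):
    start = int(next_token) if next_token else 0
    response = {'Metrics': METRICS[start:start + PAGE_SIZE]}
    if start + PAGE_SIZE < len(METRICS):
        response['NextToken'] = str(start + PAGE_SIZE)
    return response

def list_all_metrics():
    """Follow NextToken until the service stops returning one."""
    collected, token = [], None
    while True:
        response = fake_list_metrics(token)
        collected.extend(response['Metrics'])
        token = response.get('NextToken')
        if token is None:
            return collected

print(list_all_metrics())  # all five metric names, in order
```

The outer loop is unchanged when talking to the real API; only the inner call differs.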
Summary: To get a cron like scheduler in Python you can use one of the following methods:

Cron (also called a cron job) is a software utility that helps a user to schedule tasks in Unix-like systems. The tasks in cron are present in a text file that contains the commands to be executed for a scheduled task to be operational. The name of this file is crontab. To learn more about the cron scheduler, you can refer to this link.

In this article, we will focus on discussing how we can leverage the functions of a cron like scheduler in Python to manage scheduled jobs. So without further delay, let us jump into our mission-critical question:

Problem: Given a scheduled job; how to set a cron like scheduler for the job using Python?

Example: Given a text file (test.txt) and a Python script (test.py), how to schedule a task in Python so that the Python script can be run at scheduled intervals? The Python script is as follows:

from datetime import datetime

myFile = open('test.txt', 'a')
myFile.write('\nAccessed on ' + str(datetime.now()))
myFile.close()

Upon execution of a certain scheduled task in Python, the desired output is:

Now that we have an overview of our problem statement, let us jump into the probable solutions:

Method 1: Using the schedule API

schedule is an in-process scheduler that provides a very user friendly syntax to schedule tasks using Python. Some of its key features include:

- Compatible with Python 2.7, 3.5, and 3.6.
- Simple syntax and easy to use API.
- Lightweight.
- No external dependencies.
Since schedule is not a part of the standard Python library, you have to install it using the following command:

$ pip install schedule

Let us have a look at the following program to see how we can use the schedule module to schedule tasks:

import schedule
import time
from os import system

def job():
    system('python test.py')

# schedule the job to run at intervals of 1 min
schedule.every(1).minutes.do(job)

while True:
    schedule.run_pending()
    time.sleep(1)

Output

Method 2: Using Advanced Python Scheduler

The Advanced Python Scheduler (APScheduler) is a lightweight and powerful task scheduler which helps us to run routine jobs. The key features of the APScheduler are:

- Does not include external dependencies.
- Available and tested on CPython 2.5 – 2.7, 3.2 – 3.3, Jython 2.5.3, PyPy 2.2
- Multiple, simultaneously active job stores – RAM, file-based simple database, SQLAlchemy, MongoDB, Redis.
- Thread-safe API

It provides three basic configurable mechanisms:

- Cron-like scheduling
- Delayed scheduling of single run jobs (like the UNIX "at" command)
- Interval-based (run a job at specified time intervals)

To be able to use the APScheduler, the apscheduler module must be installed since it is not a part of the regular Python library. Use the following command to install it:

$ pip install apscheduler

The following program demonstrates how we can use the APScheduler to run cron like jobs in Python (please follow the comments in the code given below to get a better grip on the concept):

import time
import os
from apscheduler.schedulers.background import BackgroundScheduler

def job():
    os.system('python test.py')

if __name__ == '__main__':
    # creating the BackgroundScheduler object
    scheduler = BackgroundScheduler()
    # setting the scheduled task
    scheduler.add_job(job, 'interval', minutes=1)
    # starting the scheduled task using the scheduler object
    scheduler.start()
    try:
        # To simulate application activity (which keeps the main thread alive).
        while True:
            time.sleep(1)
    except (KeyboardInterrupt, SystemExit):
        # Not strictly necessary but recommended
        scheduler.shutdown()

Output

Method 3: Using the Timeloop Library

Another way of executing scheduled tasks is the timeloop library. If you are looking for something simple that can be implemented in your web or standalone application then timeloop could be a good choice. However, if you intend to work with complex operations then this library is not recommended.

Use the following command to install the timeloop library:

$ pip install timeloop

Let us have a look at the following code to understand how timeloop works:

from os import system
import time
from timeloop import Timeloop
from datetime import timedelta

tl = Timeloop()

@tl.job(interval=timedelta(seconds=10))
def train_model():
    system('python test.py')

tl.start()

while True:
    try:
        time.sleep(1)
    except KeyboardInterrupt:
        tl.stop()
        break

Output

Method 4: Using The Crontab Module

The crontab module uses a direct API for reading and writing crontab files and accessing the system cron automatically. Crontab is not a part of the standard Python library and has to be installed manually using the pip command. The following syntax can be used to install the crontab module in your system:

$ pip install python-crontab

Let us understand how the crontab module works in a step-by-step approach:

Step 1: Getting Access To Crontab

There are five ways of accessing the crontab using the cron module in Python. Among these, three methods work in Unix-based environments and require necessary permissions, while the remaining two methods will work in Windows too.
The Unix specific methods are:

- cron = CronTab()
- cron = CronTab(user=True)
- cron = CronTab(user='username')

The two other ways that work for Windows as well are:

- file_cron = CronTab(tabfile='filename.tab')
- mem_cron = CronTab(tab="""* * * * * command""")

Step 2: Creating A New Job

Creating a new job is very simple and can be done using the following command:

job = cron.new(command='/usr/bin/echo')

Step 3: Setting The Job Restrictions

The crontab module provides us with the ability to set time restrictions upon the jobs without having to use cron's syntax. Job restrictions can be set using the following commands:

# to run the job every minute
job.minute.every(1)

# to schedule hourly jobs
job.hour.every(4)

# to run jobs on certain days of week
job.dow.on('SUN', 'THU')

# to schedule tasks/jobs on specific months
job.month.during('APR', 'NOV')

Each restriction will clear the previous restriction. If you want to clear all job restrictions you can use the command:

job.clear()

Now let us have a look at the different options that we can use in the crontab module (please follow the comments to understand the significance of each command):

# enable a job:
job.enable()

# disable a job:
job.enable(False)

# to check if a task is enabled or disabled:
job.is_enabled()

# check whether a task is valid or not
job.is_valid()

# list all available cron jobs
for job in cron:
    print(job)

# finding a cron job
cron.find_command("command")   # find according to command
cron.find_comment("comment")   # find according to comment
cron.find_time(time_schedule)  # find according to time

# removing a job
cron.remove(job)

# defining environmental variables
job.env['VARIABLE_NAME'] = 'Value'

Now that we have an overview of the crontab module and its functionalities, let us have a look at the following code to understand how it works:

from crontab import CronTab

cron = CronTab(user='finxter')
job = cron.new(command='python test.py')
job.minute.every(1)
cron.write()

Conclusion

Thus in this article, we learned various methods which can be used to get a cron like scheduler in Python. These were:

- Using schedule
- Using APScheduler
- Using timeloop
- Using the crontab module

I hope you learned something from this article and it helps you in your coding journey. Please!
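As a footnote not covered in the article above: all four methods pull in a third-party package. If that is not an option, the same "run this job every N seconds" loop can be sketched with only the standard library's sched module. run_every and its arguments are names made up for this sketch, and it blocks the calling thread (an assumed trade-off here):

```python
import sched
import time

def run_every(scheduler, interval, action, repeats):
    """Run action() every `interval` seconds, `repeats` times in total."""
    results = []
    def step(remaining):
        results.append(action())
        if remaining > 1:
            # re-enter ourselves: this is how interval scheduling works
            scheduler.enter(interval, 1, step, (remaining - 1,))
    scheduler.enter(0, 1, step, (repeats,))
    scheduler.run()  # blocks until the event queue is empty
    return results

s = sched.scheduler(time.monotonic, time.sleep)
print(run_every(s, 0.01, lambda: 'ran', 3))  # ['ran', 'ran', 'ran']
```

For anything beyond a toy, the libraries discussed above (or a real crontab entry) remain the better choice.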
Hi,

I'm reading "Python Essential Reference 2nd ed" by David Beazley, and have encountered the topic of Python's nested scopes in functions, and the lack thereof in Python 2.0 or earlier. I was hoping someone could clarify this for me. The book gives this example:

def bar():
    x = 10
    def spam():
        print 'x is ', x
    while x > 0:
        spam()
        x -= 1

Beazley writes: "In this case, when the nested function spam() executes, its global namespace is the same as the global namespace for bar(), the module in which functions is defined. As a result, spam() is unable to resolve any symbols in the namespace of bar() and fails with NameError."

Is that another way of saying that Python puts all of its function declarations into the same scope in pre-2.1 Pythons?

Thanks,
Erik
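For what it's worth, the short answer to the question is yes: before Python 2.1 (which introduced nested scopes via PEP 227), name lookup consulted only the local, module-global, and built-in scopes, so spam() could not see bar()'s x. A small sketch of the modern behaviour — rewritten for Python 3, with the values collected in a list instead of printed so the result is easy to check:

```python
# In Python 2.1+ (and all of Python 3), spam() resolves x through the
# enclosing scope of bar(), so each call sees bar()'s current local x.
def bar():
    x = 3
    seen = []
    def spam():
        seen.append(x)  # found in bar()'s scope, not the module globals
    while x > 0:
        spam()
        x -= 1
    return seen

print(bar())  # [3, 2, 1]
```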
It's redundant to prefix a member name with the class name. Take, for example, a class name of Author. Instead of creating a member named AuthorName, which would then be written out as Author.AuthorName, use Name as the member name.

Summary

Writing, compiling, and executing a program written in C# is an important first step in the exploration of the language. Although in general it doesn't matter what program you use to edit your C# source files, there are benefits to using more powerful editors and environments designed for C# development. Knowing the options and switches of the C# compiler will allow you to take control of how MSIL code is generated by the compiler. You can explore that code by using tools such as ILDASM, included with the Microsoft .NET Framework SDK. The structure of C# programs provides for a number of features designed to make programs safer, easier to write, and less bug-prone. Included in this feature set are namespaces and the using directive. Finally, naming conventions, which include specific casing conventions, can make programs more readable and easier to maintain.
http://www.brainbell.com/tutors/C_Sharp/Class_Members.htm
Continuing in the same vein as my last topic, where I discussed the differences between C# const and readonly keywords (here) and their uses, today I'm going to look at C# enums and their pitfalls. I will only be discussing the basic enums today (the post got kind of long, so I'll discuss enums marked with the [Flags] attribute next post).

If you didn't use enums, you would probably set up a bunch of string or integer constants instead. String comparison is, of course, more expensive than numeric comparison (though, of course, integers are harder to interpret through their raw values). A sample set of int constants to represent an AccountType might look something like this:

```csharp
public static class AccountType
{
    public const int Checking = 0;
    public const int Savings = 1;
    public const int MoneyMarket = 2;
    public const int IndividualRetirementAccount = 3;
    public const int CertificateOfDeposit = 4;
}
```

The problem is, since the type of these consts is int, declaring a variable to hold them gives you no hints as to what possible values it could have:

```csharp
public class Account
{
    public int Type { get; set; }
}
```

Notice that if you just use integer constants, your variable holding the value is type int (or some other numeric). Looking at this class alone, we have no idea what the possible values might be! IntelliSense won't help you either, as it would only bring up members of object (since primitives are boxed as objects, you can call any object members off a primitive – unlike C++ and Java – such as: 5.ToString()). There is also no way to really verify whether the value assigned to the int is valid other than to check against all possible defined consts for that domain manually, which can be a maintenance nightmare if they change often. Thus enters the enum.

It allows you to define a series of values that the enumerated type can hold:

```csharp
public enum AccountType
{
    Checking,
    Savings,
    MoneyMarket,
    IndividualRetirementAccount,
    CertificateOfDeposit,
}
```

This definition does two valuable things for us: it assigns the numeric values automatically, and it gives us a strongly typed set of values. The first point simply means that C# will assign numeric values for the enumerated values. For example, this definition is equivalent to the one above:

```csharp
public enum AccountType
{
    Checking = 0,
    Savings = 1,
    MoneyMarket = 2,
    IndividualRetirementAccount = 3,
    CertificateOfDeposit = 4,
}
```

Note that if you don't explicitly set values, the first value will default to zero and each subsequent value will be one higher than the previous. If you wish, you can choose to only assign certain values, but this can lead to confusion:

```csharp
public enum AnotherType
{
    One = 1,         // starts enum at one
    Two,             // next value increments from previous
    Three,
    Ten = 10,        // gives this value an explicit 10
    AnotherTen = 10, // can even repeat a value if you like
    Eleven,          // next value increments from previous
}
```

In practice, though, unless you're making enum values match a set of domain values (and even then I recommend adapting between the domain values and your enum to reduce coupling) or unless you're defining bit-flags (more on this in a later post), I don't recommend manually assigning values – let C# do it for you.

The second point is important because it can save you from silly mistakes. If you have an instance of AccountType, you can't implicitly assign it a numeric or any other enumerated value:

```csharp
AccountType value;

// compile time error, can't implicitly convert int to AccountType
value = 13;

// compile time error, can't implicitly assign another enum's values to this instance
value = AnotherType.One;

// the only things that can be directly assigned are other instances of AccountType
// or the AccountType values:
value = AccountType.Checking;  // good, assigning an AccountType const value

AccountType other = value;     // good, assigning one AccountType to another
```

Also, because enums are strongly typed, you will get full IntelliSense when you are looking at the possible values. For example, if you know a parameter is of type AccountType, all you need do is type in "AccountType." and you will be presented with the list of AccountType values (Checking, Savings, etc).

So they sound great, right? Well yes, they are, but there are some things to watch out for. First of all, while I said that you can't implicitly convert an int to an enumerated value, you can explicitly request it (through a cast):

```csharp
AccountType value = (AccountType) 65327;
```

This compiles, and yes, it runs. So what is the AccountType value? Well, it is somewhat undefined. That is, it actually quite happily accepts the value 65327 you forced on it (with a cast), but since this corresponds to none of the defined values, your value is really incorrect. This is one of the other reasons I recommend avoiding casting int to enum and back. If you ever need to cast an int to an enum (or parse a string to an enum), you can check for validity by first calling Enum.IsDefined():

```csharp
AccountType accountTypeValue;

int value = int.Parse(Console.ReadLine());

if (Enum.IsDefined(typeof(AccountType), value))
{
    accountTypeValue = (AccountType)value;
}
```

The same thing is true if you're reading a string and want to parse it. Note that if you want to use a string, it can either be a string representation of a number (e.g. "4") or of the enumerated value identifier (e.g. "Checking"):

```csharp
string value = Console.ReadLine();

if (Enum.IsDefined(typeof(AccountType), value))
{
    // the third parameter is to ignore case (true) or be strict (false)
    accountTypeValue = (AccountType)Enum.Parse(typeof(AccountType), value, true);
}
```

While you can assign virtually any values to your enum, you should be careful to make sure there is always a valid zero value. For example, take a look at this:

```csharp
public enum OrderType
{
    Buy = 1,
    Sell = 2
}
```

Looks fine so far, right? But then what if you had a class like this:

```csharp
public class Order
{
    public OrderType TypeOfOrder { get; set; }
}
```

If nothing in the constructor initializes TypeOfOrder, what OrderType value is it? Is it the first value (Buy)? Remember that all struct and class fields are automatically initialized with their default values. For reference types, this is null, but for numeric types (of which enums are a specialization, of sorts), this is zero (0). So, the answer is that (int)TypeOfOrder == 0, which is undefined in our set of values. This can be very dangerous. If switch statements processing our enum don't have a default clause, we may never even see it till it really bites us! So, we should always have a good zero value so that the enum value can be defaulted correctly. But what should that value be? Consider if we would have accepted the default numbering:

```csharp
public enum OrderType
{
    Buy,  // zero since unspecified
    Sell  // 1
}
```

Ah, this is better, somewhat… but now consider that Order constructor again. This means that every time we create an Order object it will default to TypeOfOrder == OrderType.Buy. While this is better than being undefined, it may be a logically incorrect assumption. So, in those cases where you can't assume that the first enum value (zero value) is the default, you should either create an Unknown value or consider using a nullable (System.Nullable<T>) enum (more on this in a later post; essentially this lets you make value types optional). So, if we wanted to use an Unknown:

```csharp
public enum OrderType
{
    Unknown,
    Buy,
    Sell
}
```

Now, an Order in its default constructed state will have TypeOfOrder == OrderType.Unknown, which fits logically (since we haven't assigned it).

In my last post (here), I talked about the difference between const and readonly, how const is a compile-time constant, and the pitfalls of that. Well, enum value definitions are compile-time constants as well! This means that if you declare your enum publicly in a class library and then later change the values, any assemblies that use those values need to recompile as well or they will use the version of the value they were compiled with! As a quick example, let's assume a class library named Ordering.DLL is created and has this enum and a method that uses it:

```csharp
// sample enum to illustrate compile-time const-ness of values
public enum Ordering
{
    // int value == 0
    First,
    // int value == 1
    Second,
    // int value == 2
    Third,
}

public static class PotentiallyConstant
{
    // given an ordering value, print its string name and integer value
    public static string WhatOrderAmI(Ordering myOrder)
    {
        return string.Format("{0} [{1}]", myOrder, (int)myOrder);
    }
}
```

Now let's assume that another assembly named OrderProcessor.EXE consumes it:

```csharp
public static class Program
{
    public static void Main()
    {
        // grab the second value of the enum; since the enum is defined and used
        // by the other assembly, this is fine
        Console.WriteLine("This should be second place: {0} [{1}] - {2}",
            Ordering.Second, (int)Ordering.Second,
            PotentiallyConstant.WhatOrderAmI(Ordering.Second));
    }
}
```

We would expect after compiling this that we would get the following output:

This should be second place: Second [1] – Second [1].

The first value is output from Program.EXE, and the second is generated from the class library. Now let's say we go in and tweak our enum to add a new value called None and put it at the front of the enum:

```csharp
// sample enum to illustrate compile-time const-ness of values
public enum Ordering
{
    // int value == 0
    None,
    // int value == 1
    First,
    // int value == 2
    Second,
    // int value == 3
    Third,
}
```

Notice that this has bumped all the existing values of the enum up by one. Now, if we just deployed the new class library without recompiling everyone who uses it, we'd see this:

This should be second place: First [1] – First [1].

See what's happened? Even though we used Ordering.Second in our program, at the time it was compiled, Ordering.Second had the value 1. We recompiled the library with the new enum definition so that now Ordering.First has the value 1, and thus the erroneous output. You may wonder why it output the correct name for the value 1 in the program even though the program was not recompiled. This is because even though Ordering.Second was compiler-replaced with 1 in the program, interpretation of the values of the enum (parsing, converting to string) is still part of the enum itself and thus it stays correct. In other words, the enum const values (Ordering.First, Ordering.Second, etc.) when used explicitly in a program are replaced at compile time with their numeric values.

The solution? Well, there are two things really. First of all, if you want to modify an enum, you should probably try to always add to the end of the enum and not re-order the contents or change existing values. If you must change or reorder the values, make sure all assemblies that use that enum are recompiled correctly.

Enums are a powerful tool that lets you simplify development by providing a range of values that a variable can be. This both makes your code more readable and aids the user by supplying them readily with the list of values the variable can be. There are a few "gotchas" to watch out for, but in general, as long as you avoid casting int to enum (or at least check them with Enum.IsDefined()), make sure you have a good default value, and avoid re-ordering enum values (or rebuilding completely if you do), you should be fine!
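As a side-by-side comparison (this sketch is my addition, not part of the original post), Python's enum module guards against the same "undefined value" pitfall that Enum.IsDefined() addresses in C# — converting an undefined number raises an error instead of silently producing a garbage value:

```python
from enum import Enum

class AccountType(Enum):
    CHECKING = 0
    SAVINGS = 1
    MONEY_MARKET = 2

# A defined value converts cleanly...
print(AccountType(1))  # AccountType.SAVINGS

# ...but an undefined one raises ValueError instead of holding garbage
try:
    AccountType(65327)
except ValueError:
    print("65327 is not a valid AccountType")
```

The validation is built into the conversion itself, so there is no equivalent of the unchecked C# cast to guard against.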
Stay tuned next time when I dive into what exactly the [Flags] attribute does to the enum definition and, more importantly, what it doesn’t do…

Posted on Thursday, July 8, 2010 5:53 PM | Filed under: My Blog, C#, Software, .NET Fundamentals
http://blackrabbitcoder.net/archive/2010/07/08/c-fundamentals-the-joys-and-pitfalls-of-enums.aspx
Hello everyone, I am sorry for the long post but would really appreciate any help you guys can offer!

I am trying to create a custom RNN that would apply n different recurrent connections (that are in fact n biquadratic filters) to the input. Another way of thinking about it would be to have n different RNNs that work on the input and to concatenate their results afterwards; however, I believe that would lead to very poor performance (please tell me if I am wrong). For instance: I have a mini-batch of size [32, 16000] and want to apply 128 filters on it, which means that my output size is [32, 128, 16000]. What I did so far is:

- Expand and clone the input so I have a tensor of size [32, 128, 16000].
- Permute axes to get a size of [16000, 32, 128].
- Iterate on the sequence and use matrix products to compute the output, since the filters are linear.

In fact, I use this recurrence relation, which works for only one sequence of size N (except the first two samples, of course):

y_i[n] = b0_i * (x[n] - x[n-2]) - a1_i * y_i[n-1] - a2_i * y_i[n-2]

where the a_i and b_i are the learnable weights, x[n] is the n-th sample of the input, y[n] is the state at time frame n, and i is for the i-th filter (or the i-th recurrence relation if you prefer). I already tried two methods to make it work (see below). The problem is that my versions are too slow and I don't have a good enough understanding of PyTorch to optimize them. So I would really appreciate any help you can provide on these points:

- Is there a better way of implementing an RNN with n different recurrence relations?
- Do you see improvements I could make to my code (see below) that would yield good performance?
- Might computing the outputs of the different RNNs in parallel and concatenating the results (with torch.cat) yield better results?
- Might implementing it in C++ (as is the case for PyTorch's built-in recurrence) become necessary to achieve good performance?
Links for the PyTorch RNN:

- RNN.py
- RNN.cpp
- QuantizeLinear.cpp, which seems to contain the function that performs the loop on the sequence: fbgemm_linear_int8_weight_fp32_activation.

Please tell me if there is anything unclear or if you need more info. Thanks for reading this post, and thanks for any piece of advice you can provide!

Code: In each of the following versions, the loop on the sequence is the piece of code that takes the longest to execute.

Version A:

```python
def forward(self, X):
    bs = X.size()[0]
    X = X.unsqueeze(1).expand(-1, self.kernels_number, -1).clone()
    B0, A1, A2 = self.filters()
    if self.is_cuda:
        out = torch.zeros(self.points_per_sequence, bs, self.kernels_number).cuda()
    else:
        out = torch.zeros(self.points_per_sequence, bs, self.kernels_number)
    out[0] = torch.mul(X[0], B0)  # [bs,1]*[1,128] = [bs,128]
    out[1] = torch.mul(X[1], B0) - torch.mul(out[0], A1)
    for n in range(2, X.size()[0]):
        out[n] = self.f_2(out[n-1], out[n-2], X[n], X[n-2], B0, A1, A2)
    out = torch.flip(out, dims=[2]).permute(1, 2, 0)
    return out
```

(Since I am using pass-band filters, I only need the three tensors B0, A1, A2 of size [1, n_channels] each; they are computed from only two weights, but that does not matter here.)

The function self.f_2:

```python
def f_2(self, y_1, y_2, x, x_2, b0, a1, a2):
    """
    Computing y[n] from y[n-1], y[n-2], x[n], x[n-2], b0, a1, a2
    Sizes :
        x : [bs, 128]
        b0, a1, a2 : [1, 128]
        y_1, y_2 : [bs, 128]
    """
    return torch.mul(x - x_2, b0) - torch.mul(y_1, a1) - torch.mul(y_2, a2)
```

I have not tried this version on the backward pass, but the forward pass works.

Version B: For this one, I used the function lfilter from torchaudio. Since the filters are all different, I started by looping over the filters and applying lfilter, which did not work well: it took longer than the previous version and had RAM issues. Then I modified the function lfilter so it now accepts different filters. It now behaves, performance-wise, like version A.
Here is my version of the filter:

```python
def m_lfilter(
        waveform: torch.Tensor,
        a_coeffs: torch.Tensor,
        b_coeffs: torch.Tensor
) -> torch.Tensor:
    r"""Perform an IIR filter by evaluating a difference equation.

    NB: contrary to the original version, this one does not require normalized
    input and does not output normalized sequences.

    Args:
        waveform (Tensor): audio waveform of dimension `(..., number_of_filters, time)`.
        a_coeffs (Tensor): denominator coefficients of the difference equation, of dimension
            `(n_order + 1)`. Lower delay coefficients are first, e.g.
            `number_of_filters*[a0, a1, a2, ...]`.
            Must be same size as b_coeffs (pad with 0's as necessary).
        b_coeffs (Tensor): numerator coefficients of the difference equation, of dimension
            `(n_order + 1)`. Lower delay coefficients are first, e.g.
            `number_of_filters*[b0, b1, b2, ...]`.
            Must be same size as a_coeffs (pad with 0's as necessary).

    Returns:
        Tensor: Waveform with dimension `(..., number_of_filters, time)`.

    Note: The main difference with the original version is that we are no longer
    packing the batches (since we need to apply different filters).
    """
    shape = waveform.size()  # should return [batch_size, number_of_filters, size_of_the_sequence]
    assert (a_coeffs.size(0) == b_coeffs.size(0))
    assert (len(waveform.size()) == 3)
    assert (waveform.device == a_coeffs.device)
    assert (b_coeffs.device == a_coeffs.device)

    device = waveform.device
    dtype = waveform.dtype
    n_channel, n_filters, n_sample = waveform.size()
    n_order = a_coeffs.size(1)
    # number of filters to apply - for each filter k, the coefs are in a_coeffs[k] and b_coeffs[k]
    assert (a_coeffs.size(0) == n_filters)
    n_sample_padded = n_sample + n_order - 1
    assert (n_order > 0)

    # Pad the input and create output
    padded_waveform = torch.zeros(n_channel, n_filters, n_sample_padded, dtype=dtype, device=device)
    padded_waveform[:, :, (n_order - 1):] = waveform
    padded_output_waveform = torch.zeros(n_channel, n_filters, n_sample_padded, dtype=dtype, device=device)
    # padded_output_waveform = torch.zeros(n_channel, n_sample_padded, dtype=dtype, device=device)

    # Set up the coefficients matrix
    # Flip coefficients' order
    a_coeffs_flipped = a_coeffs.flip(1).unsqueeze(0)
    b_coeffs_flipped = b_coeffs.flip(1).t()

    # calculate windowed_input_signal in parallel
    # create indices of original with shape (n_channel, n_order, n_sample)
    window_idxs = torch.arange(n_sample, device=device).unsqueeze(0) + \
        torch.arange(n_order, device=device).unsqueeze(1)
    window_idxs = window_idxs.repeat(n_channel, 1, 1)
    window_idxs += (torch.arange(n_channel, device=device).unsqueeze(-1).unsqueeze(-1) * n_sample_padded)
    window_idxs = window_idxs.long()

    # (n_filters, n_order) matmul (n_channel, n_order, n_sample) -> (n_channel, n_filters, n_sample)
    A = torch.take(padded_waveform, window_idxs).permute(0, 2, 1)  # taking the input coefs
    input_signal_windows = torch.matmul(A, b_coeffs_flipped).permute(1, 0, 2)

    # input_signal_windows size : n_samples x batch_size x n_filters
    for i_sample, o0 in enumerate(input_signal_windows):
        # added clone here for back propagation
        windowed_output_signal = padded_output_waveform[:, :, i_sample:(i_sample + n_order)].clone()
        o0.sub_(torch.mul(windowed_output_signal, a_coeffs_flipped).sum(dim=2))
        o0.div_(a_coeffs[:, 0])
        padded_output_waveform[:, :, i_sample + n_order - 1] = o0

    output = padded_output_waveform[:, :, (n_order - 1):]
    return output
```

As for the forward function:

```python
def forward(self, X):
    # creating filters
    # A = [[a1_0, a2_0, a3_0], ...], B = [[b1_0, b2_0, b3_0], ...] - size : [128, 3]
    A, B = self.filters()
    # we have to expand the input to the size : [bs, n_filters, n_samples]
    X = X.unsqueeze(1).expand(-1, self.kernels_number, -1).clone()
    # applying the filters
    X = m_lfilter(X, A, B)
    return X
```

This method works for the backward pass, even if it takes ages to perform (I am working on implementing TBPTT in parallel to improve these algorithms).
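To make the recurrence itself concrete, here is a minimal pure-Python sketch (my illustration, not from the post; no PyTorch, scalar loops only) of applying n independent filters of the form y[n] = b0·(x[n] − x[n−2]) − a1·y[n−1] − a2·y[n−2], which is what the tensorized versions above compute across the filter dimension at once:

```python
def apply_filters(x, coeffs):
    """Apply each (b0, a1, a2) filter independently to the sequence x.

    Returns a list of n output sequences, one per filter - the scalar
    analogue of the [n_filters, n_samples] tensor in the post.
    """
    outputs = []
    for b0, a1, a2 in coeffs:
        y = []
        for n, xn in enumerate(x):
            # samples before the start of the sequence are treated as zero
            x2 = x[n - 2] if n >= 2 else 0.0
            y1 = y[n - 1] if n >= 1 else 0.0
            y2 = y[n - 2] if n >= 2 else 0.0
            y.append(b0 * (xn - x2) - a1 * y1 - a2 * y2)
        outputs.append(y)
    return outputs

# Two trivial filters: a comb-like FIR (a1 = a2 = 0) and a scaled copy of it
print(apply_filters([1.0, 2.0, 3.0], [(1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]))
# [[1.0, 2.0, 2.0], [2.0, 4.0, 4.0]]
```

The slow part in the real implementations is exactly the inner loop over n, which cannot be vectorized away because y[n] depends on y[n−1] and y[n−2].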
https://discuss.pytorch.org/t/how-to-create-a-rnn-that-applies-n-different-recurrence-relations-to-the-input/87709
CodePlex Project Hosting for Open Source Software

I'm trying to debug a multi-threaded script and cannot set a breakpoint in the callback function.

```python
from multiprocessing import Pool

def f(x):
    return x*x  # cannot set breakpoint here!

if __name__ == '__main__':
    pool = Pool(processes=4)
    print pool.map(f, range(10))
    exit(0)
```

Is there a way to get around this? I know I can set a breakpoint using pywin's debugger by calling brk(). Perhaps there's a similar mechanism in pytools?

The problem here is that you're spawning new subprocesses and the debugging doesn't automatically flow to those subprocesses. You can do Debug->Attach to Process to attach to the subprocesses once they're up and running. Or you can use ThreadPool instead of Pool to run everything in the same process.

I am also using multiprocessing and am attaching to the spawned processes after they start; however, after I attach, the interface locks up and I cannot reach the breakpoints I have set. In other words, the debug interface is not enabled. I'm stuck.

This feature (when added) will help for this case:

Our F5 debugger (as opposed to Attach to Process) runs a different script that connects to the debugger before running your script. The multiprocessing package doesn't know about this, so when it starts new Python processes it doesn't use our script. For now, switching to a thread pool while debugging will work best.
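The suggested workaround — swapping Pool for a thread pool so everything stays in one debuggable process — is essentially a one-line change. A sketch in modern Python 3 syntax (the original thread predates Python 3's print function):

```python
from multiprocessing.pool import ThreadPool  # drop-in replacement for Pool

def f(x):
    return x * x  # breakpoints here are hit, since no subprocess is spawned

if __name__ == '__main__':
    pool = ThreadPool(processes=4)
    print(pool.map(f, range(10)))  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
    pool.close()
```

ThreadPool exposes the same map/apply API as Pool but runs the workers as threads in the current process, so a debugger attached to that process sees all of them.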
http://pytools.codeplex.com/discussions/391294
A few months ago, I wrote an article on how to create a common object response wrapper for your ASP.NET Core and Web API applications. I also made two versions of NuGet packages for the wrapper, which can be found below. I was surprised that both NuGet packages now have hundreds of downloads, and I got a few comments and emails from developers asking for a tutorial on how to actually use them in a project. This article aims to answer those frequently asked questions; so here you go. Before we start, I'd like to thank those folks who are looking into this library and have probably tried it out. I really appreciate all your feedback on this library, and I hope it helps in your projects. Without further ado, let's see in action how we are going to use it in the application.

Click OK and it should take you to the next screen, as shown in the figure below. Select Empty and then click OK to let Visual Studio generate the default project files and dependencies for you. Here are the default generated files. Let's take a quick overview of each file generated. If you already know the significant changes in ASP.NET Core then you may skip this part, but if you are new to ASP.NET Core then I would like to highlight some of those changes. If you have worked with previous versions of ASP.NET before, you will notice that the new project structure is totally different. The project now includes these files:

Let's create a few simple Models that we are going to use for this demo. Models are nothing but plain classes that house a few properties for holding information. I'll try to make this demo as simple as possible, so I will not be using any database here because that's not really the intent of this article.
If you want to learn how to work with real data from the database, then check out my other articles on that topic. Now, let's get back to work. Assume that we have the following class that defines the mocked data. The class above is nothing but a plain class with a public static method called GetBands(). The method defines a List of type Band and adds some default records to the collection.

Let's create a new ASP.NET Core API Controller and define some endpoints that we can use for testing. Here's what my Controller class looks like. The Controller class above contains the four basic HTTP methods: GET, POST, PUT and DELETE. This is what a typical RESTful API looks like. Notice that you can't find any code implementation for POST, PUT and DELETE. That's because we're not dealing with a database or in-memory data store here. I just included them there so you can visualize what the endpoints look like. Let's build and run the application. Here are sample screenshots of a couple of tests I made from POSTMAN:

GET: /api/v1/bands
GET: /api/v1/bands/{id}

At this point, the API works, but the problem is it doesn't give the developers a meaningful response. We know that the data is a very important part of the response. However, spitting out just the data as the JSON response isn't really helpful, especially when unexpected behavior happens between requests. As a quick recap, if you are taking a RESTful approach to your API, then you will be utilizing HTTP verbs such as GET, POST, PUT and DELETE. Each of these actions may return different types depending on how your method/action is designed. Your POST, PUT and DELETE end-points may return data or nothing at all. Your GET end-point may return a string, a List<T>, an IEnumerable, a custom class or an object. On the other hand, if your API throws an error, it will return an object or, worse, an HTML string stating the cause of the error.
The differences among all of these responses make it difficult to consume the API because the consumer needs to know the type and structure of the data that is being returned in each case. Both the client code and the service code become difficult to manage. That's why I came up with a library that provides a consistent response format for both successful and error results. With just a few steps, you can make your API Controller return a meaningful response without much development effort on your part. All you have to do is:

STEP 1: Install the package via NuGet as shown in the figure above, or using the following command in the Package Manager Console:

PM> Install-Package VMD.RESTApiResponseWrapper.Core -Version 1.0.4

The latest version as of this writing is v1.0.4, which targets ASP.NET Core 2.1.

STEP 3: Note: make sure to register it "before" the MVC middleware.

That simple! Now try to build and run the application again. Based on our example, here's what the response is going to look like. You'll notice that the response object now contains a few properties such as Version, StatusCode and Message, and that the actual data is stored in the Result property. Here's another sample output when we try to point to a URL that doesn't exist. Any unexpected error that could possibly happen will be handled automatically without you doing anything on your side. You can see that the response output is dynamic. By dynamic, I mean that instead of including the Result property, we omit it and use the ResponseException property instead for error and exception information.

Enable Custom Response

Let's move on by modifying our existing API endpoints to return a message for the other HTTP verbs (GET, PUT and DELETE results are shown in the figures above). Notice that the response object is consistent for every HTTP action request. This definitely gives better and more meaningful information to your API consumers.
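As a language-neutral illustration of the envelope idea (this sketch is mine, not part of the package, and the field names are simply taken from the JSON shown above), the wrapper can be thought of as a function that folds any payload or error into one consistent shape:

```python
def wrap_response(status_code, result=None, message="Request successful.",
                  errors=None, version="1.0.0.0"):
    """Build a consistent API envelope: success data goes in Result,
    failures go in ResponseException - never both."""
    envelope = {"Version": version, "StatusCode": status_code, "Message": message}
    if errors is not None:
        envelope["ResponseException"] = errors
    else:
        envelope["Result"] = result
    return envelope

print(wrap_response(200, result=[{"Id": 1, "Name": "Beatles"}]))
print(wrap_response(404, message="Request responded with exceptions.",
                    errors={"ExceptionMessage": "Resource not found."}))
```

Because every endpoint funnels through the same shape, the consumer only ever has to check one structure, regardless of verb or outcome.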
Model validations allow you to enforce pre-defined validation rules at a class/property level. You'd normally use this validation technique to keep a clear separation of concerns, so your validation code becomes much simpler to write, maintain, and test. As you may already know, ASP.NET Core 2.1 introduced the ApiController attribute, which performs automatic model state validation for 400 bad request errors. When the Controller is decorated with the ApiController attribute, the framework automatically registers a ModelStateInvalidFilter which runs on the OnActionExecuting event. This checks the model state validity and returns the response accordingly. This is a great feature, but since we want to return a custom response object instead of the 400 bad request error, we will disable this feature in our case. To disable the automatic model state validation, just add the following code in the ConfigureServices() method in the Startup.cs file.

Data annotations are attribute classes that live under the System.ComponentModel.DataAnnotations namespace, which you can use to decorate classes or properties to enforce pre-defined validation rules. To enable Data Annotation model validation, we need to reference that namespace. Let's modify our CreateBandDTO class to implement basic model validation using Data Annotations. Here's the modified code below. Now when we run the app again and issue a POST request, it should result in something like the figure below when the Name property is left empty. Notice that the wrapper captures the validation errors and puts them inside the ValidationErrors property for easy tracing. For more information about Model Validation, see:

If for some reason you don't want to use System.ComponentModel.DataAnnotations for validating your Models and want to use FluentValidation instead, you can also do that. Let's take a look at a quick example of how we can integrate FluentValidation.
First, download and install the NuGet package as shown in the figure below. You can also use the Package Manager Console to install it by running the following command:

Install-Package FluentValidation.AspNetCore

After the installation, we can now start using the FluentValidation API. You should add the following namespace to the file where you declare your Models. Let's revert the CreateBandDTO to its original state and add a new class called CreateBandValidator. Here's the modified code below. You'll notice that we are no longer using the Required and MaxLength attributes for enforcing pre-defined validation rules on the Model. Instead, we keep the Model plain and simple. What I like about FluentValidation is that we can separate the validation logic by creating a Validator class for each Model on which we want to enforce constraints and other validation rules. The final piece to make this work is to configure FluentValidation in the Startup.cs file, as shown in the code below. For more information, see:

Here's a sample screenshot of the response when a Model validation fails. Having an informative, consistent and meaningful response like this should help developers easily consume your API and troubleshoot issues. You can use the ApiException object to return error and exception messages. For example, the following code handles and simulates an unexpected error that could happen in your code using a try-catch block. The code above tries to convert a string that contains non-numeric values into an integer type, which will cause an error at runtime. The response output is going to look like this. You can also use the ApiException to throw your own message when your custom code validation fails. For example, if your code validates user credentials and the validation fails, you could do something like this.
This is very helpful, especially when your API is public and you expect many developers to use it. To enable Swagger in your API application, go ahead and download and install the Swashbuckle package via NuGet, as shown in the figure below. Add the following code to the ConfigureServices() method of the Startup.cs file. Next, we need to enable the middleware for serving the generated JSON document and the Swagger UI. To do that, add the following code to the Configure() method of the Startup.cs file. Now run your app and append "/swagger" to the URL, and it should display the Swagger UI as shown in the figure below. And here's a sample POST request/response issued from the Swagger UI. For more information, see here.

In this article, we've learned how to incorporate the VMD.RESTApiResponseWrapper.Core package library into your ASP.NET Core 2.1 application. Feel free to try it out. Comments and suggestions are welcome, so drop a message and I'd be happy to answer any queries as best I can.
https://www.c-sharpcorner.com/article/asp-net-core-2-1-integrating-vmd-restapiresponsewrapper-core-to-your-rest-api/
Find Union of Two Arrays in C++ In this tutorial, we will learn how to find the union of two unsorted arrays in C++. Before that, let’s first understand what an array is. Array: a derived data type that contains elements of the same type. For example, an integer array stores only integer values and a float array stores only float values. Derived data type: a data type that is defined by the user. Other derived data types are Structure, Class, Union, Enumeration, and Pointers. The union of two arrays: the set of all elements that are either in A or in B. Example: Array1: { 1, 2, 3, 4, 5 } Array2: { 4, 5, 6, 7, 8, 9, 10 } The union of the given two arrays: { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 }, because all these elements are present either in Array1 or in Array2. (Note: the union should not contain the same element twice.) Program to Find Union of Two Unsorted Arrays in C++ #include<iostream> using namespace std; int main() { int n1,n2,i,j,flag; cout<<"Enter the no. of elements of the 1st array: "; cin>>n1; /* declaring arr1 of size n1 */ int arr1[n1]; cout<<"\nEnter the elements of the 1st array: "; for(i=0;i<n1;i++) { cin>>arr1[i]; } cout<<"\nEnter the no. of elements of the 2nd array: "; cin>>n2; /* declaring arr2 of size n2 */ int arr2[n2]; cout<<"\nEnter the elements of the 2nd array: "; for(i=0;i<n2;i++) { cin>>arr2[i]; } /* printing elements that are either in array1 or in array2 */ cout<<"\nUnion of the two arrays: "; /* First print all the elements of array1 */ for(i=0;i<n1;i++) { cout<<arr1[i]<<" "; } /* Then print all the elements that are in array2 but not in array1 */ for(j=0;j<n2;j++) { flag=0; for(i=0;i<n1;i++) { if(arr1[i]==arr2[j]) { flag=1; break; } } /* flag!=1 means element of array2 is not present in array1 */ if(flag!=1) { cout<<arr2[j]<<" "; } } return 0; } Input/Output: Enter the no. of elements of the 1st array: 4 Enter the elements of the 1st array: -3 0 4 7 Enter the no.
of elements of the 2nd array: 6 Enter the elements of the 2nd array: 4 1 9 7 2 8 Union of the two arrays: -3 0 4 7 1 9 2 8 Time Complexity O(n1*n2), where n1 is the no. of elements of the first array and n2 is the no. of elements of the second array.
https://www.codespeedy.com/find-union-of-two-arrays-in-cpp/
Using Machine Learning to learn how to Compress Project description You can read the introductory blog post or try it live at Features - ✓ Compress your data smartly based on Machine Learning - ✓ Takes User Requirements in the form of weights for size, write_time and read_time - ✓ Trains & caches a model based on compression methods available in the system, using packaged data - ✓ CLI for compressing and decompressing - ✓ Works with CSV, JSON and Bytes in general CLI shrynk compress myfile.json # will yield e.g. myfile.json.gz or myfile.json.bz2 shrynk decompress myfile.json.gz # will yield myfile.json shrynk compress myfile.csv --size 0 --write 1 --read 0 shrynk benchmark myfile.csv # shows benchmark results shrynk benchmark --predict myfile.csv # will also show the current prediction shrynk benchmark --save --predict myfile.csv # will add the result to the training data too Usage Installation: pip install shrynk Then in Python: import pandas as pd from shrynk import save, load # save dataframe compressed my_df = pd.DataFrame({"a": [1]}) file_path = save(my_df, "mypath.csv") # e.g. mypath.csv.bz2 # load compressed file loaded_df = load(file_path) If you just want the prediction, you can also: import pandas as pd from shrynk import infer infer(pd.DataFrame({"a": [1]})) # {"engine": "csv", "compression": "bz2"} Add your own data If you want more control you can do the following: import pandas as pd from shrynk import PandasCompressor df = pd.DataFrame({"a": [1, 2, 3]}) pdc = PandasCompressor("default") pdc.run_benchmarks(df) # adds data to the default pdc.train_model(size=3, write=1, read=1) pdc.predict(df)
https://pypi.org/project/shrynk/
< SummerOfCode | 2007 Man/info page wiki editor - Student: Ville-Pekka Vainio - Mentor: Karsten Wade Introduction This is not a Google Summer of Code project, this is under a similar Finnish event called COSS Summercode 2007. Here are some links about the event: - COSS Main Page - Summercode Finland 2007 - Summercode Finland FAQ - My project page at MoinMoin wiki - My project blog During the summer I will be extending MoinMoin with man and info page publication and editing capabilities. The idea came from FedoraBounties, "Publication of all man and info pages for each release". There is also a GSoC 2007 project based on the same idea, but with different implementation plans, see Planned features - Import man/info pages from different releases, FC6, F7, F8, etc. - Also man/info pages from released updates and Rawhide should be included - Clean URLs that can be used as a reference - Searching from those pages (this comes pretty much automatically from MoinMoin, may require some tweaking) - Comparing and taking diffs from pages between releases, released updates and Rawhide (basics come from MoinMoin but requires work) - Regular users can edit the man/info pages in the wiki - Administrators can accept or deny users' edits - Admins can take a specific set of edits that they want to send upstream - The wiki helps them to get a diff of those edits and make a Bugzilla report upstream Preliminary schedule Some basic points here, I'll add more details as the project goes on. I have reserved 13 weeks for my project, calendar weeks 23 — 35 Open questions and talking points Please see discussion on these points in the Moin wiki page of this project . I may not keep these two pages "synchronized". Instead, I suggest that MM specific points are discussed on their wiki and Fedora specific points here. Internal storage format of man/info pages? 
- Basic txt format and then generate textual diffs for Bugzilla reports - Probably the easiest to implement but requires manual work from upstream people - *roff, DocBook, TeX, etc. - These are more difficult to implement but it could be maybe possible to then generate diffs that upstream could merge more easily - During the first phase (publication) doclifter will be used to store the man/info pages as DocBook XML. Moin can parse that XML so that it's viewable as "normal" wiki pages. Where to look for these man/info files? - Repositories, CVS, etc. - The system should handle a lot of different sources and be extendable for different version management systems and package managers - Remember, this is something that we would like other distros to use eventually, too :) Diff tool enhancements - Moin's diff tool probably has to be changed somehow to support diffs between different pages, not just different revisions of the same page. - Or do we need / should we have a better versioning scheme in general, where Moin could have multiple "main" versions of a page? - Probably more points to come during the summer ;) TODO Here's a list of things to be done soon, this may not be in any particular order, just a reminder to me and the people watching this project. - As ThomasWaldmann suggested on #moin-dev, contact man/info maintainers and ask what they would want and need from this kind of system. - Linux man-pages project (I've contacted Michael Kerrisk already) - GNU Texinfo project (I should maybe get to know texinfo a bit better before discussing it on their mailing list) - Maybe fedora-devel to hopefully reach developers who maintain Fedora packages and man pages in those packages (My mail to fedora-devel-list ) Phase 1 1. Test doclifter 1. Make an importer for Moin that converts man source into DocBook XML through doclifter and saves the results in a clean namespace hierarchy 1. 
Modify Moin's diff functionality or make an action that takes the diff of two different pages 1. Extend the importer to handle info pages too. The conversion can be made through GNU makeinfo, but it needs Texinfo sources, it doesn't work on Info sources. Completed - Which MoinMoin version to base the work on? - 1.6 is the next stable, that will probably be taken into production use at Fedora a while after its release - It's possible that these changes won't make it to 1.6 upstream anymore, since that is aimed to be stable soon - 1.7 is the development version that all MoinMoin GSoC students use - It could be easier getting these changes eventually merged into 1.7 upstream - It'll take time until 1.7 is ready and released, I think the timeline is "this year" - It'll maybe take even longer before Fedora upgrades to 1.7 - 1.7 will be used, see the corresponding fedora-infrastructure-list thread . - It's decided that I'll use 1.7, so now talk to the Moin developers about maybe getting a Mercurial branch for my project on - (./) My project has a repository now, at - Introduce this project and myself on fedora-docs-list, fedora-websites-list and maybe fedora-infrastructure-list - (./) fedora-docs-list self-introduction , project introduction on fedora-websites-list Code The code will be kept in MoinMoin's Mercurial, in a separate 1.7 branch. The address of the repository is [1], see Moin's Mercurial guide on how to get and update source code from Mercurial. Fedora has packaged Mercurial, so it can be installed through yum/pirut etc. easily. About me You can find some info about me from my Wiki page, too. I'm a Computer Science student from the University of Helsinki. I have taken all the courses required for a B.Sc. degree, but haven't officially graduated yet. My B.Sc. thesis was about Generics in Java, C++ and C#. I'll continue my M.Sc. studies after the summer. I've been programming for over ten years and using Fedora since FC2. I'm one of the Finnish Fedora translators.
If you have any comments and ideas, please feel free to add them here :) This project can also be discussed on docs-list or #fedora-docs. I'm reading both of them.
https://fedoraproject.org/w/index.php?title=SummerOfCode/2007/VillePekkaVainio&oldid=30720
*Edit* Just so you know, we are on the chapter in the book introducing ArrayLists *Edit* I am having trouble with this problem. The problem is stated: Implement a class Polygon that contains an array list of Point2D.Double objects. Support methods public void add(Point2D.Double aPoint) public void draw(Graphics2D g2) Draw the polygon by joining adjacent points with a line, and then closing it up by joining the end and start points. Write a graphical application that draws a square and a pentagon using two Polygon objects. We are given two completed classes called PolygonComponent and PolygonViewer(code below) and have to create one class named Polygon import javax.swing.JFrame; public class PolygonViewer { public static void main(String[] args) { JFrame frame = new JFrame(); final int FRAME_WIDTH = 300; final int FRAME_HEIGHT = 400; frame.setSize(FRAME_WIDTH, FRAME_HEIGHT); frame.setTitle("PolygonViewer"); frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); PolygonComponent component = new PolygonComponent(); frame.add(component); frame.setVisible(true); } } import javax.swing.JComponent; import java.awt.Graphics; import java.awt.Graphics2D; import java.awt.geom.Point2D; /** Displays two polygons. */ public class PolygonComponent extends JComponent { public void paintComponent(Graphics g) { Graphics2D g2 = (Graphics2D) g; Polygon square = new Polygon(); square.add(new Point2D.Double(100, 100)); square.add(new Point2D.Double(100, 150)); square.add(new Point2D.Double(150, 150)); square.add(new Point2D.Double(150, 100)); square.draw(g2); Polygon pentagon = new Polygon(); double centerX = 200; double centerY = 200; double radius = 50; for (int i = 0; i < 5; i++) { double angle = 2 * Math.PI * i / 5; pentagon.add(new Point2D.Double( centerX + radius * Math.cos(angle), centerY + radius * Math.sin(angle))); } pentagon.draw(g2); } } The above classes are given and are not to be changed. 
Here is what I have written for my Polygon class so far import java.awt.Color; import java.awt.Graphics; import java.awt.Point; import java.awt.geom.Line2D; import java.awt.geom.Point2D; import java.util.ArrayList; import java.awt.Graphics2D; import javax.swing.JPanel; public class Polygon extends JPanel { private ArrayList<Point2D.Double> myPolygon = new ArrayList<Point2D.Double>(); private Point2D.Double point; public void add(Point2D.Double aPoint) { myPolygon.add(aPoint); // NullPointerException gets thrown here } protected void draw(Graphics2D g2) { for(int i=0; i<myPolygon.size(); i++) { g2.draw(myPolygon.getX()); } } } I have spent two evenings on this same problem, rewriting and trying different things and I have hit a wall. I cannot get this thing to draw and I could use some guidance. Obviously my draw method is not working as you see it, I have been going through the java API and cannot seem to get a grasp as to what I need to do. Thank you in advance for any help!
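For comparison, here is one way the draw method in the question could be made to compile and behave as the assignment asks (a sketch, not a definitive answer): iterate over the points, draw a Line2D from each point to the next, and wrap the index so the last point joins back to the first. The JPanel superclass and the unused point field from the question are dropped, since Polygon here is just a data holder.

```java
import java.awt.Graphics2D;
import java.awt.geom.Line2D;
import java.awt.geom.Point2D;
import java.util.ArrayList;

class Polygon {
    private final ArrayList<Point2D.Double> points = new ArrayList<>();

    public void add(Point2D.Double aPoint) {
        points.add(aPoint);
    }

    // Index of the point that follows i, wrapping around to close the polygon.
    static int next(int i, int size) {
        return (i + 1) % size;
    }

    public void draw(Graphics2D g2) {
        for (int i = 0; i < points.size(); i++) {
            Point2D.Double from = points.get(i);
            Point2D.Double to = points.get(next(i, points.size()));
            g2.draw(new Line2D.Double(from, to)); // join adjacent points with a line
        }
    }
}
```

The wrap-around index is the key detail: for a 4-point square, point 3 connects back to point 0, which is the "closing it up" the exercise asks for.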
https://www.daniweb.com/programming/software-development/threads/416695/draw-polygon-help
Sometimes you want to use a library that is only available as C or C++ code. Traditionally, this is where you give up. Well, not anymore, because now we have Emscripten and WebAssembly (or Wasm)! The toolchain I set myself the goal of working out how to compile some existing C code to Wasm. There's been some noise around LLVM's Wasm backend, so I started digging into that. While you can get simple programs to compile this way, the second you want to use C's standard library or even compile multiple files, you will probably run into problems. This led me to the major lesson I learned: While Emscripten used to be a C-to-asm.js compiler, it has since matured to target Wasm and is in the process of switching to the official LLVM backend internally. Emscripten also provides a Wasm-compatible implementation of C's standard library. Use Emscripten. It carries a lot of hidden work, emulates a file system, provides memory management, wraps OpenGL with WebGL — a lot of things that you really don't need to experience developing for yourself. While that might sound like you have to worry about bloat — I certainly worried — the Emscripten compiler removes everything that's not needed. In my experiments the resulting Wasm modules are appropriately sized for the logic that they contain and the Emscripten and WebAssembly teams are working on making them even smaller in the future. You can get Emscripten by following the instructions on their website or using Homebrew. 
If you are a fan of dockerized commands like me and don't want to install things on your system just to have a play with WebAssembly, there is a well-maintained Docker image that you can use instead: $ docker pull trzeci/emscripten $ docker run --rm -v $(pwd):/src trzeci/emscripten emcc <emcc options here> Compiling something simple Let's take the almost canonical example of writing a function in C that calculates the nth fibonacci number: #include <emscripten.h> EMSCRIPTEN_KEEPALIVE int fib(int n) { int i, t, a = 0, b = 1; for (i = 0; i < n; i++) { t = a + b; a = b; b = t; } return b; } If you know C, the function itself shouldn't be too surprising. Even if you don't know C but know JavaScript, you will hopefully be able to understand what's going on here. emscripten.h is a header file provided by Emscripten. We only need it so we have access to the EMSCRIPTEN_KEEPALIVE macro, but it provides much more functionality. This macro tells the compiler to not remove a function even if it appears unused. If we omitted that macro, the compiler would optimize the function away — nobody is using it after all. Let's save all that in a file called fib.c. To turn it into a .wasm file we need to turn to Emscripten's compiler command emcc: $ emcc -O3 -s WASM=1 -s EXTRA_EXPORTED_RUNTIME_METHODS='["cwrap"]' fib.c Let's dissect this command. emcc is Emscripten's compiler. fib.c is our C file. So far, so good. -s WASM=1 tells Emscripten to give us a Wasm file instead of an asm.js file.. -s EXTRA_EXPORTED_RUNTIME_METHODS='["cwrap"]' tells the compiler to leave the cwrap() function available in the JavaScript file — more on this function later. -O3 tells the compiler to optimize aggressively. You can choose lower numbers to decrease build time, but that will also make the resulting bundles bigger as the compiler might not remove unused code. After running the command you should end up with a JavaScript file called a.out.js and a WebAssembly file called a.out.wasm. 
The Wasm file (or "module") contains our compiled C code and should be fairly small. The JavaScript file takes care of loading and initializing our Wasm module and providing a nicer API. If needed it will also take care of setting up the stack, the heap and other functionality usually expected to be provided by the operating system when writing C code. As such the JavaScript file is a bit bigger, weighing in at 19KB (~5KB gzip'd). Running something simple The easiest way to load and run your module is to use the generated JavaScript file. Once you load that file, you will have a Module global at your disposal. Use cwrap to create a JavaScript native function that takes care of converting parameters to something C-friendly and invoking the wrapped function. cwrap takes the function name, return type and argument types as arguments, in that order: <script src="a.out.js"></script> <script> Module.onRuntimeInitialized = _ => { const fib = Module.cwrap('fib', 'number', ['number']); console.log(fib(12)); }; </script> If you run this code, you should see the "233" in the console, which is the 12th Fibonacci number. The holy grail: Compiling a C library Up until now, the C code we have written was written with Wasm in mind. A core use-case for WebAssembly, however, is to take the existing ecosystem of C libraries and allow developers to use them on the web. These libraries often rely on C's standard library, an operating system, a file system and other things. Emscripten provides most of these features, although there are some limitations. Let's go back to my original goal: compiling an encoder for WebP to Wasm. The source for the WebP codec is written in C and available on GitHub as well as some extensive API documentation. That's a pretty good starting point. 
$ git clone To start off simple, let's try to expose WebPGetEncoderVersion() from encode.h to JavaScript by writing a C file called webp.c: #include "emscripten.h" #include "src/webp/encode.h" EMSCRIPTEN_KEEPALIVE int version() { return WebPGetEncoderVersion(); } This is a good simple program to test whether we can get the source code of libwebp to compile, as we don't require any parameters or complex data structures to invoke this function. To compile this program, we need to tell the compiler where it can find libwebp's header files using the -I flag and also pass it all the C files of libwebp that it needs. I'm going to be honest: I just gave it all the C files I could find and relied on the compiler to strip out everything that was unnecessary. It seemed to work brilliantly! $ emcc -O3 -s WASM=1 -s EXTRA_EXPORTED_RUNTIME_METHODS='["cwrap"]' \ -I libwebp \ webp.c \ libwebp/src/{dec,dsp,demux,enc,mux,utils}/*.c Now we only need some HTML and JavaScript to load our shiny new module: <script src="/a.out.js"></script> <script> Module.onRuntimeInitialized = async _ => { const api = { version: Module.cwrap('version', 'number', []), }; console.log(api.version()); }; </script> And we will see the correct version number in the output: (Screenshot: the DevTools console showing the correct version number.) Get an image from JavaScript into Wasm Getting the encoder's version number is great and all, but encoding an actual image would be more impressive, right? Let's do that, then. The first question we have to answer is: How do we get the image into Wasm land? Looking at the encoding API of libwebp, it expects an array of bytes in RGB, RGBA, BGR or BGRA.
Luckily, the Canvas API has getImageData(), that gives us an Uint8ClampedArray containing the image data in RGBA: async function loadImage(src) { // Load image const imgBlob = await fetch(src).then(resp => resp.blob()); const img = await createImageBitmap(imgBlob); // Make canvas same size as image const canvas = document.createElement('canvas'); canvas.width = img.width; canvas.height = img.height; // Draw image onto canvas const ctx = canvas.getContext('2d'); ctx.drawImage(img, 0, 0); return ctx.getImageData(0, 0, img.width, img.height); } Now it's "only" a matter of copying the data from JavaScript land into Wasm land. For that, we need to expose two additional functions. One that allocates memory for the image inside Wasm land and one that frees it up again: EMSCRIPTEN_KEEPALIVE uint8_t* create_buffer(int width, int height) { return malloc(width * height * 4 * sizeof(uint8_t)); } EMSCRIPTEN_KEEPALIVE void destroy_buffer(uint8_t* p) { free(p); } create_buffer allocates a buffer for the RGBA image — hence 4 bytes per pixel. The pointer returned by malloc() is the address of the first memory cell of that buffer. When the pointer is returned to JavaScript land, it is treated as just a number. After exposing the function to JavaScript using cwrap, we can use that number to find the start of our buffer and copy the image data. const api = { version: Module.cwrap('version', 'number', []), create_buffer: Module.cwrap('create_buffer', 'number', ['number', 'number']), destroy_buffer: Module.cwrap('destroy_buffer', '', ['number']), }; const image = await loadImage('/image.jpg'); const p = api.create_buffer(image.width, image.height); Module.HEAP8.set(image.data, p); // ... call encoder ... api.destroy_buffer(p); Grand Finale: Encode the image The image is now available in Wasm land. It is time to call the WebP encoder to do its job! Looking at the WebP documentation, WebPEncodeRGBA seems like a perfect fit. 
The function takes a pointer to the input image and its dimensions, as well as a quality option between 0 and 100. It also allocates an output buffer for us, that we need to free using WebPFree() once we are done with the WebP image. The result of the encoding operation is an output buffer and its length. Because functions in C can't have arrays as return types (unless we allocate memory dynamically), I resorted to a static global array. I know, not clean C (in fact, it relies on the fact that Wasm pointers are 32bit wide), but to keep things simple I think this is a fair shortcut. int result[2]; EMSCRIPTEN_KEEPALIVE void encode(uint8_t* img_in, int width, int height, float quality) { uint8_t* img_out; size_t size; size = WebPEncodeRGBA(img_in, width, height, width * 4, quality, &img_out); result[0] = (int)img_out; result[1] = size; } EMSCRIPTEN_KEEPALIVE void free_result(uint8_t* result) { WebPFree(result); } EMSCRIPTEN_KEEPALIVE int get_result_pointer() { return result[0]; } EMSCRIPTEN_KEEPALIVE int get_result_size() { return result[1]; } Now with all of that in place, we can call the encoding function, grab the pointer and image size, put it in a JavaScript-land buffer of our own, and release all the Wasm-land buffers we have allocated in the process. api.encode(p, image.width, image.height, 100); const resultPointer = api.get_result_pointer(); const resultSize = api.get_result_size(); const resultView = new Uint8Array(Module.HEAP8.buffer, resultPointer, resultSize); const result = new Uint8Array(resultView); api.free_result(resultPointer); Depending on the size of your image, you might run into an error where Wasm can't grow the memory enough to accommodate both the input and the output image: Luckily, the solution to this problem is in the error message! We just need to add -s ALLOW_MEMORY_GROWTH=1 to our compilation command. And there you have it! We compiled a WebP encoder and transcoded a JPEG image to WebP. 
To prove that it worked, we can turn our result buffer into a blob and use it on an <img> element: const blob = new Blob([result], {type: 'image/webp'}); const blobURL = URL.createObjectURL(blob); const img = document.createElement('img'); img.src = blobURL; document.body.appendChild(img) Behold, the glory of a new WebP image! Conclusion It's not a walk in the park to get a C library to work in the browser, but once you understand the overall process and how the data flow works, it becomes easier and the results can be mind-blowing. WebAssembly opens many new possibilities on the web for processing, number crunching and gaming. Keep in mind that Wasm is not a silver bullet that should be applied to everything, but when you hit one of those bottlenecks, Wasm can be an incredibly helpful tool. Bonus content: Running something simple the hard way If you want to try and avoid the generated JavaScript file, you might be able to. Let's go back to the Fibonacci example. To load and run it ourselves, we can do the following: <!doctype html> <script> (async function() { const imports = { env: { memory: new WebAssembly.Memory({initial: 1}), STACKTOP: 0, } }; const {instance} = await WebAssembly.instantiateStreaming(fetch('/a.out.wasm'), imports); console.log(instance.exports._fib(12)); })(); </script> WebAssembly modules that have been created by Emscripten have no memory to work with unless you provide them with memory. The way you provide a Wasm module with anything is by using the imports object — the second parameter of the instantiateStreaming function. The Wasm module can access everything inside the imports object, but nothing else outside of it. By convention, modules compiled by Emscripting expect a couple of things from the loading JavaScript environment: - Firstly, there is env.memory. The Wasm module is unaware of the outside world so to speak, so it needs to get some memory to work with. Enter WebAssembly.Memory. 
It represents an (optionally growable) piece of linear memory. The sizing parameters are "in units of WebAssembly pages", meaning the code above allocates 1 page of memory, with each page having a size of 64 KiB. Without providing a maximum option, the memory is theoretically unbounded in growth (Chrome currently has a hard limit of 2GB). Most WebAssembly modules shouldn't need to set a maximum. - env.STACKTOP defines where the stack is supposed to start growing. The stack is needed to make function calls and to allocate memory for local variables. Since we don't do any dynamic memory management shenanigans in our little Fibonacci program, we can just use the entire memory as a stack, hence STACKTOP = 0.
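The page arithmetic can be checked directly in any JavaScript engine, independent of the Emscripten module; a minimal sketch:

```javascript
// "initial" and "maximum" are in units of WebAssembly pages, 64 KiB each.
const memory = new WebAssembly.Memory({ initial: 1, maximum: 4 });
console.log(memory.buffer.byteLength); // 65536 (1 page)

memory.grow(1); // request one more page; growing past "maximum" would throw
console.log(memory.buffer.byteLength); // 131072 (2 pages)
```

Note that grow() detaches the old ArrayBuffer; always re-read memory.buffer after growing instead of holding on to a stale reference.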
https://developers.google.com/web/updates/2018/03/emscripting-a-c-library?hl=ja
This approach is outdated since Xamarin.Forms 4.5.530 You can get information about the new functionality here Using Font Awesome in Xamarin.Forms All the code for this post is available at github If you want to have a consistent user interface in your Xamarin Forms application, it could be a good approach to try to use vector icons. For that, I will show you in this blog post, how you can include Font Awesome in your Xamarin Forms project to show icons like in this sample. Installation First you will need to download the icons from the Font Awesome site (in this post we are using the free web version 5.7.1). Select the Start Using Free button. And then the download button. After downloading, extract the content of your zip file. And copy the highlighted file to your Android's Assets folder and iOS's Resources folder. Change the properties in both to match the images. For iOS you need to change the info.plist file and include the new font file. Create support classes After the installation is complete, we are going to create a derived class from Label to show our text using Font Awesome. The class is called FontAwesomeIcon in our sample project. For Android that class needs a renderer (FontAwesomeIconRenderer.cs). The icon definitions are in the Icon.cs file, you could add any other icon that you need looking for the icon in this link and using the unicode representation. Using our custom label to show icons To show an icon in a view with xaml, create a tag like the three that we have in MainPage.xaml, setting the text to one of our defined icons in the Icon class. And that's all you need to start using Font Awesome!! Pro tip You could have noticed that the name of the font used in the FontAwesomeIcon.cs file depends on the platform, the android version is just the name of the ttf file, but for iOS we need the name of the font. 
In some tutorials they mention that you could check the properties of the file in Windows Explorer and take the name from there, but for the FontAwesome version that we are using for this post, that's not true. You can see the name of the font in the console of Visual Studio when you run the iOS version because we have this code in place (AppDelegate.cs). This will be very useful if you want to use another version of the font, but you don't know the name. Discussion (6) Thank you so much for putting this together! Is it possible to use a similar approach to utilize FontAwesome as text on standard XAML elements like Button and Label? I'm wanting to put a FontAwesome icon as the text of a button but am unsure of how to utilize the namespace inside of the Button element. I think this has changed since I wrote that post. Please take a look at this information from James Montemagno montemagno.com/using-font-icons-in... This is great! Thanks for the resource and quick response, Rodrigo! First thanks for this Article. you help me a lots in my project. i need to ask about something I'm starting in xamarin and i use font awesome in my project but i can't put font awesome as text inside a button can you help me in that. Hi! Did you take a look at this related link? montemagno.com/using-font-icons-in... If you still have issues, please let me know what are you trying to do, what is not working, and I will try to help Thank you so much! Extremely useful tutorial.
https://practicaldev-herokuapp-com.global.ssl.fastly.net/codingcoach/using-font-awesome-in-xamarinforms--3mh
On Thu, 2005-10-27 at 12:34 +0200, Alfred M. Szmidt wrote: > You cannot get away from race conditions, This is only partly true. It is true that you can't stop a programmer from screwing up if they are determined to do so. But I would say that systems can be *subjectively* characterized into three classes: 1. Systems where avoiding race conditions is impossible 2. Systems where avoiding race conditions is possible with extra work 3. Systems where avoiding race conditions is a natural consequence of programming in the natural way. Original UNIX was in the first category. POSIX also was in the first category. Later enhancements are slowly moving UNIX-based systems into the second category. KeyKOS is in the third category. I say that this grouping is subjective because I cannot clearly articulate the underlying design patterns that make race conditions easily avoidable, and until I can do that I don't have a testable basis for categorizing other than my own experience and the experience reported by others. > Race conditions cannot be fixed, no matter what kind of a API you > implement. I don't think this is true. Transactions go a very long way here. In any case, what *is* true is that the *absence* of certain properties in an API can make race conditions unavoidable. > *. > > This isn't a argument against what you call a unified file namespace, > it is a argument against: badly written programs and chroot's. Sorry. No. It is an argument against a bad (because it is unsatisfiable) specification of the semantics of ".." by POSIX.
As Hans found out in ReiserFS and others discovered in JFS and XFS, you also cannot do any reasonable transacted interface without significant alterations to POSIX. > *. This is not incompetence. The objection is to an API that is "fail open" instead of "fail closed". In this regard, POSIX really is not well designed, because it magnifies the likelihood of programmer error. shap
http://lists.gnu.org/archive/html/l4-hurd/2005-10/msg00688.html
CC-MAIN-2018-26
refinedweb
382
55.84
This is my code. The problem is I get an error on the last line saying:

Code:
    #include <iostream>
    #include <string>
    using namespace std;

    int wordcount(char *str){
        int i=0;
        int count=0;
        while(&str[i] != "\0"){
            if(&str[i] == "\t"){
                count++;
            }
            i++;
        }
        return count;
    }

    int main()
    {
        string stringA;
        cout<<"Insert a string"<<endl;
        cin>>stringA;
        cout<<"there are"<<wordcount(stringA)<<"words in your string";
    }

"Cannot convert 'std::string' to 'char' for argument '1' to 'int wordcount(char*)'"

I'm really stuck here, please help. Any help is appreciated. Thanks.

Edit: Oops, I posted in the wrong section. Sorry!
http://cboard.cprogramming.com/cplusplus-programming/114117-counting-words-string-cplusplus.html
CC-MAIN-2015-35
refinedweb
115
64.51
CircuitPython library for PCF8523 real time clock.

Project description

Introduction to Adafruit's PCF8523 Real Time Clock (RTC) Library.

Dependencies

This driver depends on the Register and Bus Device libraries. Please ensure they are also available on the CircuitPython filesystem. This is easily achieved by downloading a library and driver bundle.

Installing from PyPI

Basics

Of course, you must import the library to use it:

    import time
    import adafruit_pcf8523

All the Adafruit RTC libraries take an instantiated and active I2C object (from the board library) as an argument to their constructor. The way to create an I2C object depends on the board you are using. For boards with labeled SCL and SDA pins, you can:

    import board

Now, to initialize the I2C bus:

    i2c = board.I2C()

Once you have created the I2C interface object, you can use it to instantiate the RTC object:

    rtc = adafruit_pcf8523.PCF8523(i2c)

Date and time.
https://pypi.org/project/adafruit-circuitpython-pcf8523/
CC-MAIN-2021-25
refinedweb
150
65.12
Drag inside scaled item.

Hello everyone! I have a problem with dragging an item inside a scaled item. The simplest way to explain is a little QML test program:

@
import QtQuick 2.1
import QtQuick.Controls 1.0

ApplicationWindow {
    title: qsTr("Hello World")
    width: 640
    height: 480

    Rectangle {
        transformOrigin: Item.TopLeft
        height: 10; width: 10
        color: "black"
        scale: 50

        Rectangle {
            height: 1; width: 1
            color: "green"

            MouseArea {
                id: dragArea
                anchors.fill: parent
                drag.target: parent
            }
        }
    }
}
@

When I try to drag, the drag only starts after I move the mouse scale * 10 = 50 * 10 = 500 pixels. I want it to start when I move the mouse 10 pixels or less. I have a big program, based on scaling the father item, and I need to accomplish this without changing the program logic from scaling to changing width and height.
https://forum.qt.io/topic/34545/drag-inside-scaled-item
CC-MAIN-2018-51
refinedweb
128
66.74
- Deriving HTML+TIME's Active Time (10/11/2000) Do you know how to extract HTML+TIME's active time? Learn how to use the currTimeState object.
- Coloring Scrollbar Arrows (8/28/2000) Add more color to your page. Learn how to modify the color of the scrollbar arrows on the fly.
- Defining Polymorphism (6/23/2002) What are the three cornerstones of object oriented programming languages? Learn about the third foundation: polymorphism.
- Converting Dimensions for Printing (9/25/2001) Did you know that the print template's settings are in inches while the printer expects them in 1/100 of an inch? Learn how to convert these dimensions.
- Packaging a Single-Class Namespace (5/24/2002) Do you know how to package a namespace? Here is an example for packaging a simple single-class namespace.
- Boosting Performance by Avoiding Dots (5/10/2000) Want to improve the performance of your JavaScript? Avoiding dots in your code is both simple and effective.
- Initializing the style Object (10/6/2001) Did you know that the style object is not initialized by the STYLE rule? Learn how to set the zoom property upon loading.
- JScript .NET's Classes (4/24/2002) Did you know that JavaScript does not support classes? Learn about JScript .NET's class-based support.
- Delimiting the Main Code (6/21/2002) Do you know how to delimit the main code section? Learn how JScript .NET is different from other languages.
- Changing Inline Images (8/19/2000) How do you change an image when mousing over? out? Learn how to change a gif using very little programming.
- Detecting LayoutRect's Overflow (9/14/2001) Do you know how to detect page overflow in a print template? Learn how to use the LAYOUTRECT's event handlers and properties to distinguish between page completion and page overflow.
- Setting Default Visibility (12/17/2000) Do you know how to change the default visibility? Learn how to overcome differences between IE and Netscape 6.
- WMLScript's Bytecode (6/22/2000) Do you know why WMLScript is a compiled once, run everywhere language? See how a 1071-byte WML code turns into a 446-byte bytecode.
- Deleting DOM Substrings (7/6/2001) Do you know how to delete a substring from text node data? Learn how to shorten a DOM string in IE6/NS6.
- The Event Object Properties (1/5/2001) Do you know how to find information about the event that just occurred? Learn how to use the e object.
http://fundisom.com/newsreader/read/webdevelopment/JavaScript_Tip_of_the_Day
crawl-001
refinedweb
444
67.76
Make N numbers equal by incrementing N-1 numbers

Reading time: 20 minutes | Coding time: 5 minutes

Given an array, find the minimum number of operations to make all the array elements equal. An operation consists of incrementing all but one element of the array by 1, that is, incrementing N-1 elements out of N elements.

In the first example, we can get all elements to be equal in three operations in the following way:

1, 2, 3 → 2, 3, 3 → 3, 4, 3 → 4, 4, 4

This is a well-known problem. Let's solve this once and for all! Note that we need to find the minimum operations. What does that mean? Take the above case as an example; we could have done it the following way, and in many other ways besides:

1, 2, 3 → 2, 2, 4 → 3, 2, 5 → 4, 3, 5 → 5, 4, 5 → 5, 5, 6 → 6, 6, 6

What ensures the minimum number of operations? For all elements to become equal, at least all the other elements should first try to reach the highest one; if in the process some other element exceeds the highest, that element itself becomes the highest, and the others then try to reach the new highest, and so on until all elements are equal. This ensures a minimum number of operations, because for all elements to be equal, they must at least reach the current highest element in the array. We had not taken the highest as the bound in the second sequence above, and thus got 6 operations.

So the approach becomes pretty simple. Could you think of it? Take a few minutes...

Yeah, that's correct. Take the current maximum in the array, increment all the other elements by 1, and update the current maximum. Keep doing this until all elements become equal. The time complexity would be O(n^2). But this was simple. Could we do better?

Don't reason only along the lines the problem statement suggests. Try to understand what it means to increment all except one element by one.
Try to modify what you are doing right now so that instead of working on n-1 elements each time, you work on just 1 element. Give a few minutes thinking about what I said.

1, 2, 3
→ 2, 3, 3 (increment all but the highest element by 1)
≡ 1, 2, 2 (equivalently, decrement the highest element by 1)

What's our ultimate goal? To find the minimum number of operations required to make the elements equal, right? How does it matter whether you make them equal to the highest one or the lowest one?

Here's a quick analogy. You are a family of 5, and you stay at your college hostel. They want to meet you through the shortest path. Either all 4 of them come to you, or you alone go to them. The result is the same: the 5 of you have met. So incrementing all the other elements by one, keeping one the same, is equivalent to decrementing that one element by one. Okay?

So the operation becomes: decrement an element by 1. What should be the bound? For all elements to be equal, all of them should reach the minimum element.

By how much should we decrease 5 to reach the minimum (1): 4 operations (5-1)
For 3: 2 operations (3-1)
For 2: 1 operation (2-1)
And for 1, no operation, as it's already the minimum.

And thus, for every element A[i], the number of operations will be A[i] - minimumInArray, and the total minimum operations will be the sum of each element's operations.

Algorithm

- Find the minimum element in the array. This will be the target value which all the other elements will try to reach.
- Initialize a minimumOperations variable to 0.
- For each element A[i] in the array, find A[i] - min. This is the number of decrement operations done for one element. Keep adding this value to the minimumOperations variable.
- Output the result as minimumOperations.
Implementation

Following is the code in Java for this problem:

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;

    class EqualizeEveryone {
        public static void main(String[] args) throws java.lang.Exception {
            BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
            int t = Integer.parseInt(br.readLine());
            while (t-- > 0) {
                int n = Integer.parseInt(br.readLine());
                int[] arr = new int[n];
                String[] input = br.readLine().split(" ");
                int min = Integer.MAX_VALUE;
                for (int i = 0; i < n; i++) {
                    arr[i] = Integer.parseInt(input[i]);
                    min = Math.min(arr[i], min);
                }
                int minOperations = 0;
                for (int i = 0; i < n; i++) {
                    minOperations += arr[i] - min;
                }
                System.out.println(minOperations);
            }
        }
    }

One more observation: consider A1, A2, A3, A4 to be the elements of the array and min the minimum. For every element Ai, the number of operations will be Ai - min. Thus the total operations become:

(A1-min) + (A2-min) + (A3-min) + (A4-min)
= A1+A2+A3+A4 - 4 * min
= sumOfArrayElements - n * min

where n is the size of the array. So you could find the solution directly.

Complexity

Time complexity: O(n), where n is the number of elements in the array, for finding the sum and the min.
Space complexity: O(1). We are not using any data structure to store anything.
https://iq.opengenus.org/make-elements-equal/
CC-MAIN-2020-24
refinedweb
897
65.52
31 May 2011 14:34 [Source: ICIS news]

LONDON (ICIS)--NYMEX light sweet crude futures gained almost $3/bbl on Tuesday to take the front month July contract above $103/bbl on the back of a weaker dollar against the euro and improving prospects of a second bail-out package for Greece.

By 13:13 GMT, July NYMEX crude had hit a high of $103.39/bbl, a gain of $2.80/bbl from the Friday close of $100.59/bbl, before easing back to around $103.23/bbl. At the same time, July Brent crude on ICE Futures was trading around $116.67/bbl, having hit a high of $117.00/bbl, a gain of $2
http://www.icis.com/Articles/2011/05/31/9464883/nymex-crude-up-3bbl-on-weaker-dollar-greece-bail-out-hopes.html
CC-MAIN-2015-18
refinedweb
118
85.69
This appendix describes the aspects of the Compaq C language that pertain to the writing of Internet application programs for the TCP/IP Services for OpenVMS product (formerly the VMS/ULTRIX Connection). TCP/IP Services for OpenVMS is the Compaq implementation of the TCP/IP (Transmission Control Protocol/Internet Protocol) protocol. For a description of Internet details, such as protocols, protocol types, and sockets, see the TCP/IP Services for OpenVMS System Management. For more information on how to write socket programs, see the UNIX Supplementary Documents, System Manager. This section contains information that you should consider when writing Internet application programs for the TCP/IP Services for OpenVMS. These considerations will help to make your programs more portable. Calls to various Interprocess Communication (IPC) routines use a static area within which they return information. The OpenVMS environment allows an asynchronous system trap (AST) routine to interrupt an IPC routine during its execution. In addition, the ASTs of more privileged modes can interrupt ASTs of less privileged modes. Therefore, caution needs to be observed when calling an IPC routine from an AST state, while a similar IPC routine is being called from a non-AST state or a less privileged mode. The IPC routines that use a static area are: In VMS Version 5.2, sockets should not be created or destroyed within ASTs. Several IPC routines access files in order to retrieve their information. These routines should not be called from either the KERNEL or EXEC modes when ASTs are disabled. These IPC routines are: IPC routines may use event flags during their operation. The event flags are assigned by using the library routine LIB$GET_EF and are released when the routine no longer needs them. Certain parameters to the IPC routines may require type casting to suppress Compaq C compilation warnings. 
Type casting is required because of parameter prototyping, which the Compaq C header (.h) files have in order to be ANSI compliant. These header files are unlike UNIX header files, whose IPC routines are not parameter prototyped. It is acceptable to include header files on an OpenVMS system without using angle brackets (< >) or double quotes (" "). For example, #include types. This form of the #include preprocessor directive is possible on OpenVMS systems because all header files are located in a text library in SYS$LIBRARY. On UNIX systems, however, header files must be specified with angle brackets (< >) or double quotes (" ") and any subdirectories that are needed to locate a header file. For example, to include the header file socket.h, use the following form of the #include directive: #include <sys/socket.h> The socket routines make use of several Compaq C structures. Table A-1 lists these structures and the header files in which they are defined. The Internet protocol family is a collection of protocols layered on the Internet Protocol (IP) transport layer, and using the Internet address format. This section describes the Transmission Control Protocol and User Datagram Protocol. The Transmission Control Protocol (TCP) provides a reliable, connection-oriented service. The accept system call must be used after binding the socket with the bind system call. Only passive sockets may use the accept call to accept incoming connections. Only active sockets may use the connect call to initiate connections. Passive sockets may underspecify their location to match incoming connection requests from multiple networks. This technique, called wildcard addressing, allows a single server to provide service to clients on multiple networks. To create a socket that listens to all hosts on any network, the Internet address INADDR_ANY must be bound. The TCP port must be specified at this time. If the Internet address is not INADDR_ANY and the port is not specified, the system will assign a port.
TCP supports one socket option that is set with setsockopt and tested with getsockopt. Under most circumstances, TCP sends data when it is presented; when outstanding data has not yet been acknowledged, it gathers small amounts of output to be sent in a single packet once an acknowledgement is received. For a small number of clients, such as window systems that send a stream of mouse events that receive no replies, this packetization may cause significant delays. Therefore, TCP provides a Boolean option, TCP_NODELAY (from <netinet/tcp.h>), to defeat this algorithm. The option level for the setsockopt call is the protocol number for TCP, which is available from getprotobyname. The User Datagram Protocol (UDP) is a simple, unreliable datagram protocol. Once a socket's peer address has been fixed with connect, data may be transferred with send and recv (or read or write). Also, broadcast packets may be sent (assuming the underlying network supports this) by using a reserved broadcast address; this address is network interface dependent. The SO_BROADCAST option must be set on the socket and the process must have the SYSPRV or BYPASS privilege for broadcasting to succeed. errno is an external variable whose value is set whenever an error occurs during a call to any of the Compaq C RTL routines. You can use this value to obtain a more detailed description of the error. errno is not cleared on successful calls, so its value should be checked only when an error has been indicated. Most calls to the Compaq C RTL routines have one or more returned values. Any error condition is indicated by an otherwise impossible return value. This is almost always -1; the individual routine descriptions specify the details. All return codes and values from routines are of type int unless otherwise noted. An error number is also made available in the external variable errno, which is not cleared on successful calls. The errno values may be translated to a message, similar to that found in UNIX systems, by using the perror routine. vaxc$errno may also be returned as an error.
Table A-2 lists the errno values. The external integer h_errno is available only with OpenVMS Version 7.0, and is set only by a 4.4BSD TCP/IP interface. Specifically, the gethostbyname and gethostbyaddr functions require 4.4BSD semantics to set h_errno. The gethostbyname and gethostbyaddr functions indicate an error condition by returning a null pointer and setting the external integer h_errno to indicate the error return status. When gethostbyname or gethostbyaddr returns an error status, h_errno, which is very similar to errno, can be checked to determine whether the error is the result of a temporary failure or an invalid or unknown host. Use the herror routine to print the error message describing the failure. If the argument string to herror is not NULL, it is printed, followed by a colon (:) and a space. The error message is printed with a trailing new-line character. The <netdb.h> header file declares h_errno on a per-thread basis as: #define h_errno (*decc$h_errno_get_addr()) The <netdb.h> header file also symbolically defines the error code values that h_errno can accept, as follows: Like errno, the value of h_errno is zero at program startup. Checking h_errno is valid only when a failure status is returned by a Compaq C RTL routine that is defined to set it.
http://h71000.www7.hp.com/commercial/c/docs/5763p062.html
CC-MAIN-2016-30
refinedweb
1,158
55.34
This application will help you understand how the Model (M), View (V), Controller (C) architecture is implemented in JavaServer Faces. It will make use of the UI components, Validator, Navigation and Bean components available with JSF. The application will take a user's First Name and Last Name; these fields will be validated by JSF, and the output will be displayed using the controller bean and a navigation rule. This application will also introduce a UI component which is a submit button. Once you have all the prerequisites installed, follow these steps to create a project with Eclipse. Setting Eclipse for application development - Launch Eclipse and create a dynamic Web project as shown in the figure. - Give the fields for the Web Project as shown in the following figure. - Select Finish. - Right click on the SimpleJSF project and select Properties, then select Project Facets. - Check the box for JavaServerFaces and under the Version tab select 1.2 as the version. Select the Further configuration required... indicator to display the JSF Capabilities pane. - On the JSF Capabilities window check the box and select New as shown in the figure. - The next window is used to create a JSF Implementation library. Give the library name as JSFCustomLibrary and add the following jars. Select Finish once done. See the figure below: - .3\myfaces-api-1.2.3.jar - <GERONIMO_HOME>\repository\org\apache\myfaces\core\myfaces-impl\1.2.3\myfaces-impl-1.2.3.jar - Check Deploy and modify the URL pattern to *.jsf as shown in the figure. Select Finish. This finishes the setting up of the Eclipse IDE for application development. Define and Implement the application Model (M) The Model, as suggested by the MVC architecture, handles the data and logic of the application. In an enterprise application, Java Beans are used to represent a collection of data and operations on that data. In JSF we use Java Beans to define the Model.
- Under the project explorer right click on the SimpleJSF project and create a new class. - Fill the New Java Class form with jsf as the package name and FirstName as the bean class name. Select Finish once done. - Add the following code to the FirstName bean class: FirstName.java - Create a second bean class LastName and add the following code to the class: LastName.java This completes the Model definition and the implementation of the bean classes. Define and implement Model (M) objects to Controller - In a JSF application the Controller is implemented by a configuration file called WebContent/WEB-INF/faces-config.xml. Double click on the file. This will open the Faces Configuration Editor. - Select the ManagedBean tab in the editor. Select the request Managed Bean Element and select Add. - Choose the Using an existing Java class option, select Browse. Give the search element as FirstName and select OK. - Select Finish on the next window. Similarly add the other bean LastName. Now select the Source tab in the Faces Configuration Editor. It displays the bean components (i.e., the Model) in the controller. This completes the description of Model to Controller. Define and implement View (V) in application - Right click on WebContent and create a new folder with the name pages. - Right click on the pages folder and create a JSP called login.jsp. Select Finish. - Similarly create another JSP page called welcome.jsp. - Now we have to include the Tag Library Descriptors (TLD) in our application. Geronimo comes packaged with the required TLDs, which can be found in: Location of TLD <GERONIMO_HOME>\repository\org\apache\myfaces\core\myfaces-impl\1.2.3\myfaces-impl-1.2.3.jar\META-INF\myfaces-html.tld and <GERONIMO_HOME>\repository\org\apache\myfaces\core\myfaces-impl\1.2.3\myfaces-impl-1.2.3.jar\META-INF\myfaces_core.tld - To add these two TLDs to the application, in Eclipse under the Project Explorer right click on WEB-INF. Create a folder called tld.
Copy myfaces-html.tld and myfaces_core.tld to this folder. - The next step is to populate login.jsp and welcome.jsp with data: login.jsp, welcome.jsp. Let's now try to understand what each line of code represents. - The first two lines in login.jsp define two tag libraries (Code Snippet from login.jsp). These two sets of tags are defined by JSF. The first one, with the namespace "h", is used to generate HTML views. The second one, with the namespace "f", handles the core functionalities of JSF like type conversions, validations and listeners for input from the user. - The next few lines contain the usual HTML tags (Code Snippet from login.jsp). - The tag <f:view> represents the start of JSF code. - This line of code represents the input tag (Code Snippet from login.jsp). The id="firstName" and value="firstName.name" come from the Managed Bean. - Using the Faces Configuration Editor, select the firstName bean under the Managed Bean tab. The Managed Bean Name is firstName. See the figure below. This completes the implementation of View (V) in the application. The other tags <f:validateLength> and <h:commandButton> will be explained in the next section. Define the Validator Component The code <f:validateLength> defines the input text length to be a minimum of 4 characters and a maximum of 10 characters. This is the standard validation provided by the core tag libraries. Other examples of validators are the Validate Long Range tag, Validate Double Range tag, and so on. JSF also provides a Validator interface which can be implemented to create custom validators. The code <h:message> defines the error message. When the user submits input, the controller validates each of the inputs. If the inputs are invalid the controller displays the same page again with an error message for the errors. The color:red style means that the error message will be displayed in red. Define and implement the View navigation by Controller (C) This step uses the JSP page navigation in the order of user inputs and validation by the controller.
If all the inputs are valid then the controller performs the action as suggested by the HTML form. This action is submitted by the HTML form as a command button. The <h:commandButton> code in login.jsp checks whether all the inputs are valid. This is the button which submits the form to the controller if all inputs are valid. In this case the commandButton tells the controller to execute the validated action if all the inputs are valid. The page navigation in a JSF application is defined by faces-config.xml. Follow the steps below to define the page navigation. - Launch the Faces Configuration Editor by double clicking on faces-config.xml. - Select the Navigation Rule tab in the Configuration Editor. Under the Palette window select Page. This will select a PageFlow Page GUI object. - Drag the mouse over the Navigation Rule window and click on the window. This will give a Select JSP File window. Select login.jsp as shown in the figure and select OK. - Similarly add the welcome.jsp page on the Navigation Rule window. See the figure below: - Select Link from the Palette window and join the two pages as shown in the figure: - Select the link between the two pages, go to the Properties view, and set the value of the From Outcome field to validated. This is because of the <h:commandButton> tag: once all the inputs are valid, the action taken is validated. See the figure. - Once done, have a look at the Source tab in the Faces Navigation Editor. A <navigation-rule> tag has been introduced into faces-config.xml. This rule instructs the Controller that if all the inputs from a form in /pages/login.jsp are valid, and the action is validated, then go to page /pages/welcome.jsp. - Now let's add an index.jsp under WebContent as follows: index.jsp. Note the login.jsf in the forward path tag. If you look at web.xml, *.jsf is used as the URL pattern to indicate that the forwarded page is handled by the JavaServer Faces Servlet.
This completes the application development process. The next step is to deploy and test the application.

Deploy and Test the application

Right click on the project SimpleJSF and select Run As -> Run On Server. This will deploy the sample on the Apache Geronimo server and a login page will be launched. Let's give some sample inputs:

Sample Input #1:
First Name: Mickey
Last Name: Mouse

Both the First Name and the Last Name fulfill the validation rules, so this form will be submitted to the controller and, according to the navigation rule, the controller will launch the welcome.jsp page.

Sample Input #2:
First Name: Mic
Last Name: Mouse

The First Name should be a minimum of length 4, but in this case it is of length 3. Validation will fail and an error message will be generated by the controller for the First Name field.

Sample Input #3:
First Name: Mickey
Last Name: Mo

The Last Name should be a minimum of length 3, but in this case it is of length 2. Validation will fail and an error message will be generated by the controller for the Last Name field.
https://cwiki.apache.org/confluence/display/GMOxDOC22/Developing+a+Simple+JavaServer+Faces+application
CC-MAIN-2017-04
refinedweb
1,515
59.6
std::nextafter, std::nexttoward

Returns the next representable value of from in the direction of to. If from equals to, to is returned. For std::nexttoward, if from equals to, to is returned, converted from long double to the return type of the function without loss of range or precision. The return type is long double only if Promoted is also long double; otherwise the return type is always double. Additional overloads accept a from argument of any integral type; they are equivalent to (6) (the argument is cast to double).

Parameters

from, to - floating point values

Return value

If no errors occur, the next representable value of from in the direction of to is returned. If from equals to, then to is returned.

IEC 60559 recommends that from is returned whenever from==to. These functions return to instead, which makes the behavior around zero consistent: std::nextafter(-0.0, +0.0) returns +0.0 and std::nextafter(+0.0, -0.0) returns -0.0.

Example

#include <cmath>
#include <iomanip>
#include <iostream>
#include <cfloat>
#include <cfenv>

int main()
{
    float from1 = 0, to1 = std::nextafter(from1, 1.f);
    std::cout << "The next representable float after " << std::setprecision(20)
              << from1 << " is " << to1
              << std::hexfloat << " (" << to1 << ")\n" << std::defaultfloat;

    float from2 = 1, to2 = std::nextafter(from2, 2.f);
    std::cout << "The next representable float after " << from2 << " is " << to2
              << std::hexfloat << " (" << to2 << ")\n" << std::defaultfloat;

    double from3 = std::nextafter(0.1, 0), to3 = 0.1;
    std::cout << "The number 0.1 lies between two valid doubles:\n"
              << std::setprecision(56) << "    " << from3
              << std::hexfloat << " (" << from3 << ')' << std::defaultfloat
              << "\nand " << to3 << std::hexfloat << " (" << to3 << ")\n"
              << std::defaultfloat << std::setprecision(20);

    // difference between nextafter and nexttoward:
    long double dir = std::nextafter(from1, 1.0L); // first subnormal long double
    float x = nextafter(from1, dir); // first converts dir to float, giving 0
    std::cout << "With nextafter, next float after " << from1 << " is " << x << '\n';
    x = std::nexttoward(from1, dir);
    std::cout << "With nexttoward, next float after " << from1 << " is " << x << '\n';

    // special values
    {
        #pragma STDC FENV_ACCESS ON
        std::feclearexcept(FE_ALL_EXCEPT);
        double from4 = DBL_MAX, to4 = std::nextafter(from4, INFINITY);
        std::cout << "The next representable double after " << std::setprecision(6)
                  << from4 << std::hexfloat << " (" << from4 << ')'
                  << std::defaultfloat << " is " << to4
                  << std::hexfloat << " (" << to4 << ")\n" << std::defaultfloat;
        if (std::fetestexcept(FE_OVERFLOW))
            std::cout << "    raised FE_OVERFLOW\n";
        if (std::fetestexcept(FE_INEXACT))
            std::cout << "    raised FE_INEXACT\n";
    } // end FENV_ACCESS block

    float from5 = 0.0, to5 = std::nextafter(from5, -0.0);
    std::cout << "std::nextafter(+0.0, -0.0) gives " << std::fixed << to5 << '\n';
}

Output:

The next representable float after 0 is 1.4012984643248170709e-45 (0x1p-149)
The next representable float after 1 is 1.0000001192092895508 (0x1.000002p+0)
With nextafter, next float after 0 is 0
With nexttoward, next float after 0 is 1.4012984643248170709e-45
The next representable double after 1.79769e+308 (0x1.fffffffffffffp+1023) is inf (inf)
    raised FE_OVERFLOW
    raised FE_INEXACT
std::nextafter(+0.0, -0.0) gives -0.000000
http://en.cppreference.com/w/cpp/numeric/math/nextafter
CC-MAIN-2016-50
refinedweb
456
58.38
# Shake Manual

_See also: [Shake links](); [Why choose Shake](Why.md#readme); [Function documentation]()_

Shake is a Haskell library for writing build systems - designed as a replacement for `make`. This document describes how to get started with Shake, assuming no prior Haskell knowledge. First, let's take a look at a Shake build system:

    import Development.Shake
    import Development.Shake.Command
    import Development.Shake.FilePath
    import Development.Shake.Util

    main :: IO ()
    main = shakeArgs shakeOptions{shakeFiles="_build"} $ do
        want ["_build/run" <.> exe]

        phony "clean" $ do
            putNormal "Cleaning files in _build"
            removeFilesAfter "_build" ["//*"]

        "_build/run" <.> exe %> \out -> do
            cs <- getDirectoryFiles "" ["//*.c"]
            let os = ["_build" </> c -<.> "o" | c <- cs]
            need os
            cmd "gcc -o" [out] os

        "_build//*.o" %> \out -> do
            let c = dropDirectory1 $ out -<.> "c"
            let m = out -<.> "m"
            () <- cmd "gcc -c" [c] "-o" [out] "-MMD -MF" [m]
            needMakefileDependencies m

This build system builds the executable `_build/run` from all C source files in the current directory. It will rebuild if you add/remove any C files to the directory, if the C files themselves change, or if any headers used by the C files change. All generated files are placed in `_build`, and a `clean` command is provided that will wipe all the generated files. In the rest of this manual we'll explain how the above code works and how to extend it.

#### Running this example

To run the example above:

1. Install the [Haskell Platform](), which provides a Haskell compiler and standard libraries.
2. Type `cabal update`, to download information about the latest versions of all Haskell packages.
3. Type `cabal install shake`, to build and install Shake and all its dependencies.
4. Type `shake --demo`, which will create a directory containing a sample project, the above Shake script (named `Build.hs`), and execute it (which can be done by `runhaskell Build.hs`). For more details see a [trace of `shake --demo`](Demo.md).

## Basic syntax

This section explains enough syntax to write a basic Shake build script.
#### Boilerplate

The build system above starts with the following boilerplate:

    import Development.Shake
    import Development.Shake.Command
    import Development.Shake.FilePath
    import Development.Shake.Util

    main :: IO ()
    main = shakeArgs shakeOptions{shakeFiles="_build"} $ do
        build rules

All the interesting build-specific code is placed under _build rules_. Many build systems will be able to reuse that boilerplate unmodified.

#### Defining targets

A target is a file we want the build system to produce (typically executable files). For example, if we want to produce the file `manual/examples.txt` we can write:

    want ["manual/examples.txt"]

The `want` function takes a list of strings. In Shake lists are written `[item1,item2,item3]` and strings are written `"contents of a string"`. Special characters in strings can be escaped using `\` (e.g. `"\n"` for newline) and directory separators are always written `/`, even on Windows.

Most files have the same name on all platforms, but executable files on Windows usually have the `.exe` extension, while on POSIX they have no extension. When writing cross-platform build systems (like the initial example), we can write:

    want ["_build/run" <.> exe]

The `<.>` function adds an extension to a file path, and the built-in `exe` variable evaluates to `"exe"` on Windows and `""` otherwise.

#### Defining rules

A rule describes the steps required to build a file. A rule has two components, a _pattern_ and some _actions_:

    pattern %> \out -> do
        actions

The pattern is a string saying which files this rule can build. It may be a specific file (e.g. `"manual/examples.txt" %> ...`) or may use wildcards:

* The `*` wildcard matches anything apart from a directory separator. For example `"manual/*.txt"` would define a rule for any `.txt` file in the `manual` directory, including `manual/examples.txt`, but would not match `manual/examples.zip`, `examples.txt` or `manual/docs/examples.txt`.
* The `//` wildcard matches any number of complete path components.
For example `//*.txt` would define a rule for any `.txt` file, including `manual/examples.txt`. As another example, `manual//examples.txt` would match any file named `examples.txt` inside `manual`, including both `manual/examples.txt` and `manual/docs/examples.txt`.

It is an error for multiple patterns to match a file being built, so you should keep patterns minimal. Looking at the two rules in the initial example:

    "_build/run" <.> exe %> ...
    "_build//*.o" %> ...

The first matches only the `run` executable, using `<.> exe` to ensure the executable is correctly named on all platforms. The second matches any `.o` file anywhere under `_build`. As examples, `_build/main.o` and `_build/foo/bar.o` both match while `main.o` and `_build/main.txt` do not.

Lots of compilers produce `.o` files, so if you are combining two different languages, say C and Haskell, use the extensions `.c.o` and `.hs.o` to avoid overlapping rules.

The actions are a list of steps to perform and are listed one per line, indented beneath the rule. Actions both express dependencies (say what this rule uses) and run commands (actually generate the file). During the action the `out` variable is bound to the file that is being produced.

#### A simple rule

Let's look at a simple example of a rule:

    "*.rot13" %> \out -> do
        let src = out -<.> "txt"
        need [src]
        cmd "rot13" src "-o" out

This rule can build any `.rot13` file. Imagine we are building `"file.rot13"`, it proceeds by:

* Using `let` to define a local variable `src`, using the `-<.>` extension replacement method, which removes the extension from a file and adds a new extension. When `out` is `"file.rot13"` the variable `src` will become `file.txt`.
* Using `need` to introduce a dependency on the `src` file, ensuring that if `src` changes then `out` will be rebuilt and that `src` will be up-to-date before any further commands are run.
* Using `cmd` to run the command line `rot13 file.txt -o file.rot13`, which should read `file.txt` and write out `file.rot13` being the ROT13 encoding of the file.

Many rules follow this pattern - calculate some local variables, `need` some dependencies, then use `cmd` to perform some actions. We now discuss each of the three statements.

#### Local variables

Local variables can be defined as:

    let variable = expression

Where variable is a name consisting of letters, numbers and underscores (a-z, A-Z, 0-9 and \_). All variables _must_ start with a lower-case letter. An expression is any combination of variables and function calls, for example `out -<.> "txt"`. A list of some common functions is discussed later.

Variables are evaluated by substituting the expression everywhere the variable is used. In the simple example we could have equivalently written:

    "*.rot13" %> \out -> do
        need [out -<.> "txt"]
        cmd "rot13" (out -<.> "txt") "-o" out

Variables are local to the rule they are defined in, cannot be modified, and should not be defined multiple times within a single rule.

#### File dependencies

You can express a dependency on a file with:

    need ["file.src"]

To depend on multiple files you can write:

    need ["file.1","file.2"]

Or alternatively:

    need ["file.1"]
    need ["file.2"]

It is preferable to use fewer calls to `need`, if possible, as multiple files required by a `need` can be built in parallel.

#### Running external commands

The `cmd` function allows you to call system commands, e.g. `gcc`. Taking the initial example, we see:

    cmd "gcc -o" [out] os

After substituting `out` (a string variable) and `os` (a list of strings variable) we might get:

    cmd "gcc -o" ["_make/run"] ["_build/main.o","_build/constants.o"]

The `cmd` function takes any number of space-separated expressions. Each expression can be either a string (which is treated as a space-separated list of arguments) or a list of strings (which is treated as a direct list of arguments).
Therefore the above command line is equivalent to either of:

    cmd "gcc -o _make/run _build/main.o _build/constants.o"
    cmd ["gcc","-o","_make/run","_build/main.o","_build/constants.o"]

To properly handle unknown string variables it is recommended to enclose them in a list, e.g. `[out]`, so that even if `out` contains a space it will be treated as a single argument.

The `cmd` function as presented here will fail if the system command returns a non-zero exit code, but see later for how to treat failing commands differently. As a wart, if the `cmd` call is _not_ the last line of a rule, you must precede it with `() <- cmd ...`.

#### Filepath manipulation functions

Shake provides a complete library of filepath manipulation functions (see the manual docs for `Development.Shake.FilePath`), but the most common are:

* `str1 </> str2` - add the path components together with a slash, e.g. `"_build" </> "main.o"` equals `"_build/main.o"`.
* `str1 <.> str2` - add an extension, e.g. `"main" <.> "o"` equals `"main.o"`.
* `str1 ++ str2` - append two strings together, e.g. `"hello" ++ "world"` equals `"helloworld"`.
* `str1 -<.> str2` - replace an extension, e.g. `"main.c" -<.> "o"` equals `"main.o"`.
* `dropExtension str` - drop the final extension of a filepath if it has one, e.g. `dropExtension "main.o"` equals `"main"`, while `dropExtension "main"` equals `"main"`.
* `takeFileName str` - drop the path component, e.g. `takeFileName "_build/src/main.o"` equals `"main.o"`.
* `dropDirectory1 str` - drop the first path component, e.g. `dropDirectory1 "_build/src/main.o"` equals `"src/main.o"`.

## Advanced Syntax

The following section covers more advanced operations that are necessary for moderately complex build systems, but not simple ones.
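For readers more familiar with Python than Haskell, the filepath helpers above map onto the standard `posixpath` module (an illustrative sketch only — `posixpath` is the closer analogue than `os.path` because Shake always uses `/` separators):

```python
import posixpath

# Rough Python analogues of Shake's filepath helpers (illustrative only)
assert posixpath.join("_build", "main.o") == "_build/main.o"      # "_build" </> "main.o"
assert "main" + "." + "o" == "main.o"                             # "main" <.> "o"
assert posixpath.splitext("main.c")[0] + ".o" == "main.o"         # "main.c" -<.> "o"
assert posixpath.splitext("main.o")[0] == "main"                  # dropExtension "main.o"
assert posixpath.splitext("main")[0] == "main"                    # dropExtension "main"
assert posixpath.basename("_build/src/main.o") == "main.o"        # takeFileName
assert "_build/src/main.o".split("/", 1)[1] == "src/main.o"       # dropDirectory1
```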
#### Directory listing dependencies

The function `getDirectoryFiles` can retrieve a list of files within a directory:

    files <- getDirectoryFiles "" ["//*.c"]

After this operation `files` will be a variable containing all the files matching the pattern `"//*.c"` (those with the extension `.c`) starting at the directory `""` (the current directory). To obtain all `.c` and `.cpp` files in the src directory we can write:

    files <- getDirectoryFiles "src" ["//*.c","//*.cpp"]

The `getDirectoryFiles` operation is tracked by the build system, so if the files in a directory change the rule will rebuild in the next run. You should only use `getDirectoryFiles` on source files, not files that are generated by the build system, otherwise the results will change while you are running the build and the build may be inconsistent.

#### List manipulations

Many functions work with lists of values. The simplest operation on lists is to join two lists together, which we do with `++`. For example, `["main.c"] ++ ["constants.c"]` equals `["main.c","constants.c"]`.

Using a _list comprehension_ we can produce new lists, applying functions to the elements and filtering them. As an example:

    ["_build" </> x -<.> "o" | x <- inputs]

This expression grabs each element from `inputs` and names it `x` (the `x <- inputs`, pronounced "`x` is drawn from `inputs`"), then applies the expression `"_build" </> x -<.> "o"` to each element. If we start with the list `["main.c","constants.c"]`, we would end up with `["_build/main.o","_build/constants.o"]`.

List expressions also allow us to filter the list; for example, we might know that the file `"evil.c"` is in the directory, but should not be compiled. We can extend that to:

    ["_build" </> x -<.> "o" | x <- inputs, x /= "evil.c"]

The `/=` operator checks for inequality, and any predicate after the drawn-from is used to first restrict which elements of the list are available.
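Haskell's list comprehensions map almost directly onto Python's, which may help if you know Python. Here is the same transformation and filter, sketched with the file names from the example above:

```python
inputs = ["main.c", "constants.c", "evil.c"]

# Python equivalent of: ["_build" </> x -<.> "o" | x <- inputs, x /= "evil.c"]
objects = ["_build/" + x[:-len(".c")] + ".o" for x in inputs if x != "evil.c"]

assert objects == ["_build/main.o", "_build/constants.o"]
```

The `for x in inputs` clause plays the role of `x <- inputs`, and the trailing `if` clause plays the role of the `/=` predicate.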
#### Using `gcc` to collect headers

One common problem when building `.c` files is tracking down which headers they transitively import, and thus must be added as a dependency. We can solve this problem by asking `gcc` to create a file while building that contains a list of all the imports. If we run:

    gcc -c main.c -o main.o -MMD -MF main.m

That will compile `main.c` to `main.o`, and also produce a file `main.m` containing the dependencies. To add these dependencies as dependencies of this rule we can call:

    needMakefileDependencies "main.m"

Now, if either `main.c` or any headers transitively imported by `main.c` change, the file will be rebuilt. In the initial example the complete rule is:

    "_build//*.o" %> \out -> do
        let c = dropDirectory1 $ out -<.> "c"
        let m = out -<.> "m"
        () <- cmd "gcc -c" [c] "-o" [out] "-MMD -MF" [m]
        needMakefileDependencies m

We first compute the source file `c` (e.g. `"main.c"`) that is associated with the `out` file (e.g. `"_build/main.o"`). We then compute a temporary file `m` to write the dependencies to (e.g. `"_build/main.m"`). We then call `gcc` using the `-MMD -MF` flags and then finally call `needMakefileDependencies`.

#### Top-level variables

Variables local to a rule are defined using `let`, but you can also define top-level variables. Top-level variables are defined before the `main` call, for example:

    buildDir = "_build"

We can now write:

    buildDir </> "run" <.> exe %> \out -> do
        ...

All top-level variables and functions can be thought of as being expanded wherever they are used, although in practice may have their evaluation shared.

#### A clean command

A standard clean command is defined as:

    phony "clean" $ do
        putNormal "Cleaning files in _build"
        removeFilesAfter "_build" ["//*"]

Running the build system with the `clean` argument, e.g. `runhaskell Build.hs clean`, will remove all files under the `_build` directory. This clean command is formed from two separate pieces.
Firstly, we can define `phony` commands as:

    phony "name" $ do
        actions

Where name is the name used on the command line to invoke the actions, and actions are the list of things to do in response. These names are not dependency tracked and are simply run afresh each time they are requested.

The actions can be any standard build actions, although for a `clean` rule, `removeFilesAfter` is typical. This function waits until after any files have finished building (which will be none, if you do `runhaskell Build.hs clean`) then deletes all files matching `//*` in the `_build` directory. The `putNormal` function writes out a message to the console, as long as `--quiet` was not passed.

## Running

This section covers how to run the build system you have written.

#### Compiling the build system

As shown before, we can use `runhaskell Build.hs` to execute our build system, but doing so causes the build script to be compiled afresh each time. A more common approach is to add a shell script that compiles the build system and runs it. In the example directory you will find `build.sh` (Linux) and `build.bat` (Windows), both of which execute the same interesting commands. Looking at `build.sh`:

    #!/bin/sh
    mkdir -p _shake
    ghc --make Build.hs -rtsopts -with-rtsopts=-I0 -outputdir=_shake -o _shake/build && _shake/build "$@"

This script creates a folder named `_shake` for the build system objects to live in, then runs `ghc --make Build.hs` to produce `_shake/build`, then executes `_shake/build` with all arguments it was given.

The `-with-rtsopts` flag can be treated as magic - it instructs the Haskell compiler to turn off features that would otherwise steal CPU from the commands you are running.

Now you can run a build by simply typing `./build.sh` on Linux, or `build` on Windows. On Linux you may want to alias `build` to `./build.sh`. For the rest of this document we will assume `build` runs the build system.
_Warning:_ You should not use the `-threaded` flag for GHC 7.6 or below because of a [GHC bug](). If you do turn on `-threaded`, you should include `-qg -qb` in `-with-rtsopts`.

#### Command line flags

The initial example build system supports a number of command line flags, including:

* `build` will compile all files required by `want`.
* `build _build/main.o` will compile enough to create `_build/main.o`, ignoring all `want` requirements.
* `build clean` will delete the contents of `_build`, because of our `phony` command.
* `build --help` will list out all flags supported by the build system, currently 36 flags. Most flags supported by `make` are also supported by Shake based build systems.
* `build -j8` will compile up to 8 rules simultaneously, by default Shake uses 1 processor.

Most flags can also be set within the program by modifying the `shakeOptions` value. As an example, `build --metadata=_metadata` causes all Shake metadata files to be stored with names such as `_metadata/.shake.database`. Alternatively we can write `shakeOptions{shakeFiles="_metadata"}`.

For defining non-overlapping rules it is sometimes useful to use a more advanced predicate. For example, to define a rule that only builds results which have a numeric extension, we can use the `?>` rule definition function:

    (\x -> all isDigit $ drop 1 $ takeExtension x) ?> \out -> do
        ...

We first get the extension with `takeExtension`, then use `drop 1` to remove the leading `.` that `takeExtension` includes, then test that all the characters are numeric. The standard `%>` operator is actually defined as:

    pattern %> actions = (pattern ?==) ?> actions

Where `?==` is a function for matching file patterns.

#### Haskell Actions

You can run any Haskell `IO` action by using `liftIO`. As an example:

    liftIO $ launchMissiles True

Most common IO operations to run as actions are already wrapped and available in the Shake library, including `readFile'`, `writeFile'` and `copyFile'`.
Other useful functions can be found in `System.Directory`.

* The `*.c.dep` and `*.h.dep` rule uses `|%>`, which defines a single action that matches multiple patterns. The file `foo.h.dep` contains a list of headers directly included by `foo.h`, using `usedHeaders` from the previous section.
* The `*.deps` rule takes the transitive closure of dependencies, so `foo.h.deps` contains `foo.h` and all headers that `foo.h` pulls in. The rule takes the target file, and all the `.deps` for anything in the `.dep` file, and combines them. More abstractly, the rule calculates the transitive closure of _a_, namely _a*_, by taking the dependencies of _a_ (say _b_ and _c_) and computing _a\* = union(a, b\*, c\*)_.
* The `*.o` rule reads the associated `.deps` file (ensuring it is up to date) and then depends on its contents.

The pattern of `*.deps` files occurs frequently, for example when linking Haskell files.
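The transitive-closure step described above is language-independent; here is a minimal Python sketch of the _a\* = union(a, b\*, c\*)_ calculation, using hypothetical header names:

```python
def transitive_closure(target, direct_deps):
    """Return target plus everything reachable through direct_deps."""
    seen = set()
    stack = [target]
    while stack:
        item = stack.pop()
        if item not in seen:
            seen.add(item)
            # the direct dependencies of item feed back into the worklist
            stack.extend(direct_deps.get(item, []))
    return seen

# foo.h includes bar.h, which in turn includes baz.h
deps = {"foo.h": ["bar.h"], "bar.h": ["baz.h"]}
assert transitive_closure("foo.h", deps) == {"foo.h", "bar.h", "baz.h"}
```

The `seen` set plays the role of the `.deps` files: each item's closure is computed once and reused, and cycles cannot cause infinite loops.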
http://hackage.haskell.org/package/shake-0.15.1/src/docs/Manual.md
# Multiplayer Game Programming for Teens with Python: Part 1

Learn how to make a multiplayer game with Python!

This is a post by Tutorial Team Member Julian Meyer, a 13-year-old Python developer. You can find him on Google+ and Twitter.

I'm sure that once in a while, you and your friends go online to play a multiplayer game. Have you ever wondered about the inside of that game and how everything works? In this tutorial, you will learn about multiplayer game programming by creating a sample game. Along the way you will also learn about object-oriented programming.

For this tutorial, you will be using Python and the PyGame modules. If you are new to Python or PyGame, you should first look at this earlier tutorial on beginning game programming, which explains some of the basics of PyGame.

## Getting Started

The first step is to make sure that you have PyGame installed. You can download a Mac installer for PyGame here. Make sure you download the Lion installer if you have Mac OSX 10.7 or up. Otherwise, download the Snow Leopard installer.

You can also download and install PyGame in these ways:

- With MacPorts using: `sudo port install python2.7 py27-game`
- With Fink using: `sudo fink install python27 pygame-py27`
- With Homebrew and pip using the command found here.

If you are running Windows, then you can find your installer here.

Note: If you had trouble in the last tutorial, make sure you have the 32-bit version of Python on your system. If you have a 64-bit system, then you need to run `python2.7-32` to run Python.

Lastly, download the resources for this project, which include some images and sounds that you'll need for this game.

## The Rules of the Game

The game you're going to make in this tutorial is called "Boxes". You may be familiar with playing this game on paper with some friends while you were in school!
In case you're not familiar with the game, here are the rules:

- The board consists of a 7×7 grid of points (which makes a 6×6 grid of boxes if you were to connect the dots).
- On each player's turn, the player fills in the horizontal or vertical line segment connecting two neighboring points.
- If filling in a line segment completes a box on the grid, the player becomes the owner of that square and gets a point. The player also gets to place another line segment on the same turn.
- The player with the most squares/points at the end of the game wins!

Although these rules are very simple, it's a fun game to play, especially if you're bored. But wouldn't it be great if you could play this online?

## Object-Oriented Programming: A Quick Introduction

Before we begin, let's discuss something called Object-Oriented Programming which you're going to use in this tutorial.

Object-oriented programming, also known as OOP, is a type of programming based on objects. Objects are bundles of data and associated logic. For example, you might have a "dog" object that consists of some data (the dog's name or favorite treat) and associated logic (for example, instructions on how to bark).

Objects are made from templates called classes that define what kinds of data the object can hold and what kinds of things the object can do. These are known as the object's properties and methods, respectively.

Methods are functions that represent something you can ask the object to do. For example, the statement `car.drive()` can be interpreted as telling the object in the "car" variable to "drive". Properties are variables that belong to an object. Continuing the example, your car object might have a property called gas, and the statement `car.gas = 100` would set the car's gas to 100.

These two statements manipulate a car object that already exists. Recall that the car's class is the template that defines how to make a car object and what a car is by defining its properties and methods.
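The car example can be written out as a small Python class (a sketch using the hypothetical `gas` property and `drive` method from the text):

```python
class Car:
    def __init__(self):
        # a property: data belonging to this object
        self.gas = 100

    def drive(self):
        # a method: logic the object performs, using self to
        # manipulate the car "from the inside"
        self.gas -= 1

car = Car()       # the class is the template; this creates an object
car.gas = 100     # setting a property "from the outside"
car.drive()       # asking the object to do something
assert car.gas == 99
```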
Within the definitions of those methods, you will find the code that manipulates the car from the inside. For instance, instead of `car.gas = 100`, you might find `self.gas = 100`, which is a car object telling itself – self, get it? – to set its own gas to 100.

OOP is a large topic but the basics above are all you need to get started. Your code will describe the Boxes game as the interaction of various objects. Those objects all have properties and methods, which you will define in the object's class. And when you write a piece of code, you should remember whether you're writing the class code that defines what an object can do from the "inside" of the object, or code that manipulates an object from the "outside" of that object.

## Setting Up a Basic Object-Oriented Game

There are a couple of ways to use an object-oriented framework for your game. Your Boxes game will take a simple approach in which there is one class for the client and one for the server. For now, let's just create the main client class that will run when the user starts the game.

At the start of making every game, I like to make a folder for the game. When you unzipped the resources for this project, it should have created a folder for you called boxes. This is where you will put your source code for the game – right here alongside all the images.

Create a file in this directory called boxes.py using your favorite text editor (if you don't have one, you can use TextEdit on the Mac, or Notepad in Windows). Then add this import to the file:

    import pygame

This imports the PyGame module for you to use. Before you go any further, you should test that at least this much is working. To do this, open Terminal and switch to your boxes directory using the cd command. Then enter `python boxes.py`. For example, here's what it looks like on my machine:

    cd /Users/jmeyer/Downloads/boxes
    python boxes.py

If you get no errors after running this, that means you have PyGame installed correctly, and you are good to go.
Note: If running the code above gives you an ImportError saying there is "No module named pygame", then you have not installed PyGame or else you have installed PyGame into a copy of Python different from the one you are running. If instead you see an error like this:

    ImportError: /Library/Frameworks/SDL.framework/Versions/A/SDL: no appropriate 64-bit architecture (see "man python" for running in 32-bit mode)

That means you need to run Python in 32-bit mode, like this:

    python2.7-32

Next add the class definition, as well as one thing every class should have:

    class BoxesGame():
        def __init__(self):
            pass #put something here that will run when you init the class.

The first line of this code tells the compiler that you are creating a new class called BoxesGame. The second line defines a method called `__init__`. The surrounding double underscores are a hint that this is a special method name. In fact, this name identifies the method as the class's `__init__` method, the method that you run whenever you want to create or instantiate an object of the class.

Now you'll fill in the body of the init function to do some PyGame initialization. Add this to the code you wrote above, in place of the comment beginning with `#put something here...`:

    #1
    pygame.init()
    width, height = 389, 489

    #2
    #initialize the screen
    self.screen = pygame.display.set_mode((width, height))
    pygame.display.set_caption("Boxes")

    #3
    #initialize pygame clock
    self.clock = pygame.time.Clock()

Make sure you indent it correctly, so that everything lines up to the left margin of where the `#put something here...` comment was. You can read more about the matter here: Python Indentation.

Let's look at the code you just added, one chunk at a time:

- First you initialize PyGame and two variables that you'll use to set up the screen, `width` and `height`.
- Then you initialize the screen using those two variables. You also set the title of the screen.
- Finally, you initialize the PyGame clock, which you'll need for tracking time in the game.
Next let's add the `update()` loop, which runs periodically to update the game, draw the graphics and receive user input. Do this by simply adding the following after the `__init__` method (the left margin should be equal to the left margin of `__init__`):

    def update(self):
        #sleep to make the game 60 fps
        self.clock.tick(60)

        #clear the screen
        self.screen.fill(0)

        for event in pygame.event.get():
            #quit if the quit button was pressed
            if event.type == pygame.QUIT:
                exit()

        #update the screen
        pygame.display.flip()

This is a basic update loop that clears the screen and checks to see if the user wants to quit the game. You'll be adding more to this later.

Running the Python file now won't do anything yet, as all you've done is defined the class BoxesGame. You still need to create an object of this class and start the game! Now that you have the update loop ready, let's add the code that will run the main game class. After that, you'll set up some of the basic graphics in the game, such as drawing the board. Add this code to the end of the file to start the game (the left margin should be equal to the left margin of the file):

    bg = BoxesGame() #__init__ is called right here
    while 1:
        bg.update()

This is the nice thing about object-oriented programming: The code that actually makes things happen is only three lines long! At this point, the entire file should look like this:

    import pygame

    class BoxesGame():
        def __init__(self):
            #1
            pygame.init()
            width, height = 389, 489

            #2
            #initialize the screen
            self.screen = pygame.display.set_mode((width, height))
            pygame.display.set_caption("Boxes")

            #3
            #initialize pygame clock
            self.clock = pygame.time.Clock()

        def update(self):
            #sleep to make the game 60 fps
            self.clock.tick(60)

            #clear the screen
            self.screen.fill(0)

            for event in pygame.event.get():
                #quit if the quit button was pressed
                if event.type == pygame.QUIT:
                    exit()

            #update the screen
            pygame.display.flip()

    bg = BoxesGame() #__init__ is called right here
    while 1:
        bg.update()

That's it.
Now wasn't that easy? This is a good time to run the game. As you can see, running the game results in a very impressive black screen! Yay!

You may not understand this now, but game writing is a strategic process. Think of it as being an architect. You have just built a strong base for your building. Large buildings must have very good bases and so you must think your plan through before you start. Let's add another method. If you don't remember what this means, reread the section of the tutorial called, "Object-Oriented Programming: A Quick Introduction."

## Drawing the Board and Lines on the Screen

In PyGame, the upper left of the window is coordinate (0, 0). So let's define a coordinate system for the points in the Boxes grid that is similar, with (0,0) representing the upper left point and (6,6) representing the bottom right point.

Somehow, you need a way to represent the potential line segments in the game. Well, there are two different types of line segments: horizontal and vertical lines. Let's imagine you make a list of all the potential horizontal and vertical line combinations.

In programming terms, a list is also known as an array. And when you have a list of lists, like the horizontal and vertical line combinations here, that's called a 2D array. For example, to represent the horizontal line from point (0, 0) to the point to its right, that would be row 0, column 0 in the "horizontal lines" list. Note that the horizontal lines list has 7 rows and 6 columns, and the vertical lines list has 6 rows and 7 columns.

Add these two lines to `__init__` to define these two arrays:

    self.boardh = [[False for x in range(6)] for y in range(7)]
    self.boardv = [[False for x in range(7)] for y in range(6)]

A quick way to create an array is to do this: `[valuePerItem for x in y]`. In this case, you fill an array with arrays filled with Falses. False stands for an empty space. Now that you have the board representation, let's get to the code of drawing the board.
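You can check the shape of these two grids in a standalone Python snippet (this mirrors the definitions above, just outside the class):

```python
boardh = [[False for x in range(6)] for y in range(7)]  # horizontal segments
boardv = [[False for x in range(7)] for y in range(6)]  # vertical segments

# 7 rows of 6 horizontal segments; 6 rows of 7 vertical segments
assert len(boardh) == 7 and all(len(row) == 6 for row in boardh)
assert len(boardv) == 6 and all(len(row) == 7 for row in boardv)

# marking one segment, as in the test later in this tutorial
boardh[5][3] = True
assert boardh[5][3] and not boardh[3][5]
```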
First of all, create a new method called `initGraphics()`. This method will be something you call from `__init__` but, to keep your code organized, you're creating a separate method just for the purpose of loading the graphics. Add this right before the `__init__` function:

    def initGraphics(self):
        self.normallinev = pygame.image.load("normalline.png")
        self.normallineh = pygame.transform.rotate(pygame.image.load("normalline.png"), -90)
        self.bar_donev = pygame.image.load("bar_done.png")
        self.bar_doneh = pygame.transform.rotate(pygame.image.load("bar_done.png"), -90)
        self.hoverlinev = pygame.image.load("hoverline.png")
        self.hoverlineh = pygame.transform.rotate(pygame.image.load("hoverline.png"), -90)

As you can see, you have three main sprites: a normal (empty) line, a done (occupied) line and a hover line. You rotate each of these lines by 90 degrees to draw the horizontal versions of them. These files came with the resources you downloaded earlier and should be in the same directory as your Python file.

You have a method to load all of the graphics, but you have yet to call it. Try to guess where to add the call! (As mentioned above, it belongs in `__init__`.)

Next you should add the code that actually draws the board. To loop through every x and y in a grid, you must add a for loop inside of a for loop. (For all of you Inception fans, a for-loop-ception.) You need a loop that loops through the x- and y-values. Add this right after the `__init__` method:

    def drawBoard(self):
        for x in range(6):
            for y in range(7):
                if not self.boardh[y][x]:
                    self.screen.blit(self.normallineh, [(x)*64+5, (y)*64])
                else:
                    self.screen.blit(self.bar_doneh, [(x)*64+5, (y)*64])
        for x in range(7):
            for y in range(6):
                if not self.boardv[y][x]:
                    self.screen.blit(self.normallinev, [(x)*64, (y)*64+5])
                else:
                    self.screen.blit(self.bar_donev, [(x)*64, (y)*64+5])

This code simply loops through the grid and checks whether or not that part on the grid has been clicked.
The code does this for both the horizontal and vertical lines. `self.boardv[y][x]` and `self.boardh[y][x]` return either true or false, depending on whether the appropriate line segment has been filled in yet.

Running the program now still won't do anything. All you've done is defined what the game should do if it ever gets that method call. Now let's add the method call to the update function. Add this after you clear the screen with `screen.fill(0)`:

    #draw the board
    self.drawBoard()

And of course, as a good programmer, you remember to add a comment to explain the code. Run your code now. When you do, you should see the grid drawn on the screen.

Every time I write map drawing code, I like to test it out, both because it's fun and because it's a good way to find bugs. Add this after you initialize the boards by defining `self.boardh` and `self.boardv`:

    self.boardh[5][3] = True

Run the code and as you can see, one horizontal line is lit up – the line from (5, 3) to (5, 4). Pretty cool, huh? Delete the line of test code you just added.

Good job. You've finished drawing your map, which is one of the most difficult things to do in game programming.

## Adding Other Types of Lines

Next you need to find the line to which the mouse is closest so that you can draw a hover line at that spot.
First, at the top of the file, add this line to import the math library, which you'll need soon:

    import math

Then, before `pygame.display.flip()`, add this big chunk of code:

    #1
    mouse = pygame.mouse.get_pos()
    #2
    xpos = int(math.ceil((mouse[0]-32)/64.0))
    ypos = int(math.ceil((mouse[1]-32)/64.0))
    #3
    is_horizontal = abs(mouse[1] - ypos*64) < abs(mouse[0] - xpos*64)
    #4
    ypos = ypos - 1 if mouse[1] - ypos*64 < 0 and not is_horizontal else ypos
    xpos = xpos - 1 if mouse[0] - xpos*64 < 0 and is_horizontal else xpos
    #5
    board = self.boardh if is_horizontal else self.boardv
    isoutofbounds = False
    #6
    try:
        if not board[ypos][xpos]:
            self.screen.blit(self.hoverlineh if is_horizontal else self.hoverlinev,
                             [xpos*64+5 if is_horizontal else xpos*64,
                              ypos*64 if is_horizontal else ypos*64+5])
    except:
        isoutofbounds = True
        pass
    if not isoutofbounds:
        alreadyplaced = board[ypos][xpos]
    else:
        alreadyplaced = False

Wow! That's a lot of code. Let's go over the sections one-by-one:

- First you get the mouse position with PyGame's built-in function.
- Next you get the position of the mouse on the grid, using the fact that each square is 64x64 pixels.
- You check if the mouse is closer to the top and bottom or the left and right, in order to determine whether the user is hovering over a horizontal or vertical line.
- You get the new position on the grid based on the `is_horizontal` variable.
- You initialize the variable `board` as either `boardh` or `boardv`, whichever is correct.
- Finally, you try drawing the hover line to the screen, taking into consideration whether it is horizontal or vertical and on the top, bottom, left or right. You also check if the line is out of bounds. If it is, or if the line has already been drawn, you don't draw the hover line.

Run the program and you get... drum roll, please... a map where a line lights up as your mouse moves over it! If you're like me, you probably have your mouse whizzing across the board by now. Take some time to enjoy your results.
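Because steps 2–4 are pure arithmetic, you can check them without PyGame. A standalone sketch using a sample mouse position of (100, 68) (a hypothetical value chosen for illustration, not from the tutorial):

```python
import math

mouse = (100, 68)  # sample pixel position, just below the y=64 grid line

# the same snapping math as in the rule above
xpos = int(math.ceil((mouse[0] - 32) / 64.0))
ypos = int(math.ceil((mouse[1] - 32) / 64.0))
is_horizontal = abs(mouse[1] - ypos * 64) < abs(mouse[0] - xpos * 64)
ypos = ypos - 1 if mouse[1] - ypos * 64 < 0 and not is_horizontal else ypos
xpos = xpos - 1 if mouse[0] - xpos * 64 < 0 and is_horizontal else xpos

# the mouse snaps to the horizontal segment at row 1, column 1
assert is_horizontal and (ypos, xpos) == (1, 1)
```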
OK, now you have a grid that lights up when the player's mouse moves over a line. But this isn't a game where you just have to move your mouse around a bunch. You need to add the click-to-lay-down-line functionality.

To do this, you're going to use PyGame's built-in mouse function, which is simply pygame.mouse.get_pressed()[0]. The function returns either 1 or 0, depending on whether the mouse button is currently pressed down. Before I tell you how to implement this in your game, try figuring it out yourself. Remember how you used if statements before and how to create a piece on the board.

Run the program now and voilà! If you click, you place a line just where you were hovering. As you can see, the code you added checks if the mouse is pressed and if the line should be horizontal or vertical, and places the line accordingly.

One problem, though, is that if you click at the bottom of the screen (below where the boxes are drawn), the game crashes. Let's see why this is. When something crashes, usually it gives you an error report in the Terminal. In this case, the report looks like this:

Traceback (most recent call last):
  File "/Users/school/Desktop/Dropbox/boxes/WIPBoxes.py", line 103, in <module>
    bg.update()
  File "/Users/school/Desktop/Dropbox/boxes/WIPBoxes.py", line 69, in update
    self.boardh[ypos][xpos]=True
IndexError: list index out of range

This error is saying that the array boardh that you tried to access doesn't go as far as where you clicked. Remember that variable called isoutofbounds? That will come in handy here. Simply change this:

if pygame.mouse.get_pressed()[0] and not alreadyplaced:

#-----------to-------------

if pygame.mouse.get_pressed()[0] and not alreadyplaced and not isoutofbounds:

Now if you try clicking outside of the board, the game doesn't crash. Good job – you have just demonstrated the word debugging!

Before you begin implementing the game logic on the server side, let's first add some finishing touches to the client side.
Finishing Touches

One thing that really bugs me is the spaces at the junctions of the lines. Fortunately, you can fix this quite easily using a 7x7 grid of square dots to fill in those spaces. Of course, you do need the image file, so let's load that right now and at the same time add all of the other images you will be using in this section. Add this to the end of initGraphics():

self.separators=pygame.image.load("separators.png")
self.redindicator=pygame.image.load("redindicator.png")
self.greenindicator=pygame.image.load("greenindicator.png")
self.greenplayer=pygame.image.load("greenplayer.png")
self.blueplayer=pygame.image.load("blueplayer.png")
self.winningscreen=pygame.image.load("youwin.png")
self.gameover=pygame.image.load("gameover.png")
self.score_panel=pygame.image.load("score_panel.png")

Now that your image is loaded, let's draw each of the 49 dots onto the screen. Add this to the end of drawBoard():

#draw separators
for x in range(7):
    for y in range(7):
        self.screen.blit(self.separators, [x*64, y*64])

All right, enough code! It's time for a test run. Run the game, and you should get a better-looking grid.

Next, let's put a heads-up display, or HUD, at the bottom of the screen. First, you need to create the drawHUD() method. Add this code after drawBoard():

def drawHUD(self):
    #draw the background for the bottom:
    self.screen.blit(self.score_panel, [0, 389])

This code also draws the background of the score panel. Let me go over the way PyGame handles fonts. There are three steps:

- First you define a font with a set size.
- Next you call font.render("your text here") to create a surface for those letters in that font.
- Then you draw the surface just as you would an image.

Now that you know that, you can use this information to draw the next part of the HUD: the "Your Turn" indicator.
Add this code at the bottom of drawHUD():

#create font
myfont = pygame.font.SysFont(None, 32)
#create text surface
label = myfont.render("Your Turn:", 1, (255,255,255))
#draw surface
self.screen.blit(label, (10, 400))

Also add this after the call to pygame.init():

pygame.font.init()

This code creates the font, renders it in white and then draws it onto the screen. Before you try running the game, add this after the call to self.drawBoard():

self.drawHUD()

Run the program and you should get some text that says "Your Turn" at the bottom of the screen. If you look closely, you can also see the nicely textured background.

This is great, but you still need to add the indicator after the "Your Turn" text to let the player know it's their turn. Before you do, though, you want the game to know whose turn it is. Make sure it knows by adding this to the end of __init__:

self.turn = True

Now for that indicator. Add this to the end of drawHUD():

self.screen.blit(self.greenindicator, (130, 395))

Run the game and you will see the green score indicator. You can check that off of your list of things to do.

Next let's add the text for each player's score. Initialize the variables for the two scores by tacking this onto the end of __init__:

self.me=0
self.otherplayer=0
self.didiwin=False

Here you also add another variable that you will use later in this step. Remember how to add text? You're going to do the same type of thing you did before, but with differently sized fonts.
Add this to the end of drawHUD():

#same thing here
myfont64 = pygame.font.SysFont(None, 64)
myfont20 = pygame.font.SysFont(None, 20)
scoreme = myfont64.render(str(self.me), 1, (255,255,255))
scoreother = myfont64.render(str(self.otherplayer), 1, (255,255,255))
scoretextme = myfont20.render("You", 1, (255,255,255))
scoretextother = myfont20.render("Other Player", 1, (255,255,255))
self.screen.blit(scoretextme, (10, 425))
self.screen.blit(scoreme, (10, 435))
self.screen.blit(scoretextother, (280, 425))
self.screen.blit(scoreother, (340, 435))

Run the game to check out your work. You are now officially done with the HUD. There are just a couple more things to do on the client side, so bear with me.

Next, let's add a very simple owner grid that contains values representing a player. These values will let you keep track of who owns which squares. You need this to color the squares properly, and to keep track of the score. Remember, the person who controls the most squares wins!

First initialize another array by adding this at the end of __init__:

self.owner = [[0 for x in range(6)] for y in range(6)]

Now draw the owner grid onto the screen using the same kind of 2D-array loop that you used to loop through the lines arrays. Add this to the bottom of the class:

def drawOwnermap(self):
    for x in range(6):
        for y in range(6):
            if self.owner[x][y]!=0:
                if self.owner[x][y]=="win":
                    self.screen.blit(self.marker, (x*64+5, y*64+5))
                if self.owner[x][y]=="lose":
                    self.screen.blit(self.othermarker, (x*64+5, y*64+5))

This method checks if it needs to draw in a given square and if it does, it draws the correct color (each player will have his or her own color). Right now this code won't work because you need the server to tell the client which color to draw, which you will do in the next part of the tutorial. For now, you just won't call this method.

You have one more thing to add to the user interface: winning and losing screens.
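Incidentally, once the server starts filling in the owner grid, computing each player's score is just a single pass over that 6x6 array. This sketch is mine, not tutorial code (the helper name count_squares is my own), but it uses the same "win"/"lose" markers as drawOwnermap:

```python
def count_squares(owner):
    """Tally squares per owner marker in a 6x6 owner grid.
    Empty squares hold 0; claimed squares hold "win" or "lose"."""
    me = other = 0
    for row in owner:
        for square in row:
            if square == "win":
                me += 1
            elif square == "lose":
                other += 1
    return me, other

owner = [[0] * 6 for _ in range(6)]
owner[0][0] = "win"
owner[2][3] = "win"
owner[5][5] = "lose"
print(count_squares(owner))  # → (2, 1)
```

A pass like this is cheap enough to run every frame, so the score display can always reflect the current grid.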
Define this last method and add it to the bottom of the class:

def finished(self):
    self.screen.blit(self.gameover if not self.didiwin else self.winningscreen, (0,0))
    while 1:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                exit()
        pygame.display.flip()

Of course, there is no way yet to trigger these screens in the game. That, too, you'll take care of in the next part of the tutorial, when you implement the server side of the game. Remember that by adding all of these game elements now, you are making sure that the server will be able to manipulate the client however it wants. From here on out, you won't need to make many changes to the client other than a little glue between the client and the server.

But just to make sure it works, try calling the finished() method in the last part of __init__. You should get a game over screen.

Where to Go from Here?

Here is the source code from the tutorial so far. Congratulations! You have finished the client side of a very organized and good-looking game. This, of course, is not the end since you haven't implemented any game logic, but excellent job on the client side!

Now you should go look at Part 2 of this tutorial, which is all about the server side - and you'll finally start making this game truly multiplayer!
https://www.raywenderlich.com/38732/multiplayer-game-programming-for-teens-with-python
Recursion

In mathematics and computer science, recursion is a way of specifying something (usually a mathematical object or part of a computer program) by reference to itself. More precisely (and to dispel the appearance of circularity in the definition), "complicated" instances are defined in terms of "simpler" instances, and the "simplest" instances are given explicitly.

One interesting "application" of recursion is the definition of the set of natural numbers. We can define a natural number recursively as follows:

- 0 is natural
- If n is natural, then n + 1 is also natural

Fibonacci numbers

A canonical example of recursion is the computation of Fibonacci numbers: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597, 2584, 4181, 6765, 10946 ...

This sequence can be defined recursively as follows (F(n) is the nth Fibonacci number):

- if n = 0, F(n) = 0
- if n = 1, F(n) = 1
- otherwise, F(n) = F(n - 1) + F(n - 2)

How would we generate this sequence in code? It's quite simple - recursive definitions are easily reflected in recursive function calls. Here is the C++ code that returns any Fibonacci number (constrained by execution time and the limitations of the C++ long type):

long fib_rec(long index)
{
    if (index < 2)
        return index;
    else
        return fib_rec(index - 1) + fib_rec(index - 2);
}

Note how gracefully the mathematical definition is translated to the code of fib_rec. This is the beauty and elegance of recursion. Unfortunately, everything has its price. While often being the most natural way to express algorithms, recursion can suffer from performance problems. For example, finding the 40th Fibonacci number (which is, by the way, 102,334,155) using this routine takes about 4 seconds on my machine. The 42nd number (267,914,296) takes 11 seconds to compute, and the time grows very quickly (the 45th, which is 1,134,903,170, takes 47 seconds, etc.). One reason for this is the cost of function calls (of which there are many in the recursive solution).
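You can quantify how much work the naive recursion does by simply counting invocations. The sketch below is mine, in Python rather than the article's C++, with a global counter bolted on:

```python
calls = 0

def fib_rec(index):
    """Naive recursive Fibonacci, counting every invocation."""
    global calls
    calls += 1
    if index < 2:
        return index
    return fib_rec(index - 1) + fib_rec(index - 2)

result = fib_rec(10)
print(result, calls)  # → 55 177
```

Even for index 10 the function runs 177 times to produce a single small number, and the call count roughly doubles with each increment of the index - the exponential blowup behind those 47-second timings.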
When a function is called, there is always a certain amount of overhead. For small functions, this overhead can be comparable to the time required to execute the function itself. This results in a performance hit. However, this is not the main reason for recursion being slow for the computation of Fibonacci numbers. The principal cause, in this case, is the vast amount of repetition involved. To demonstrate this, let's try to trace a sample execution of the recursive computation fib_rec, taking a call with index set to 5 as an example:

fib_rec(5)
|
|---fib_rec(4)
|   |
|   |---fib_rec(3)
|   |   |
|   |   |---fib_rec(2)
|   |   |   |
|   |   |   |---fib_rec(1)
|   |   |   |---fib_rec(0)
|   |   |
|   |   |---fib_rec(1)
|   |
|   |---fib_rec(2)
|       |
|       |---fib_rec(1)
|       |---fib_rec(0)
|
|---fib_rec(3)
    |
    |---fib_rec(2)
    |   |
    |   |---fib_rec(1)
    |   |---fib_rec(0)
    |
    |---fib_rec(1)

When fib_rec(5) is called, it calls fib_rec with 4 and fib_rec with 3. Each of those makes the appropriate calls, et cetera. What you see above is the complete call tree that results from fib_rec(5). You can generate it yourself by inserting a tracing printout in the beginning of the function.

Now, do you notice anything funny about this call tree? It shouldn't be hard to spot the scandalous number of times the same calls are made. For instance, the call fib_rec(1) is made 5 times. The result of fib_rec(1) surely doesn't change between calls (the first Fibonacci number is, by definition, 1), so why is there a need for so much repetition? This, in fact, is the reason for the unfortunate inefficiency of recursive algorithms for many computations.

So can we really write nice recursive algorithms and not be daunted by their performance problems? The answer to this question is fortunately positive!

Memoized Fibonacci

Memoization literally means "putting into memory". An alternative name for it is caching.
Caching is familiar from the hardware world, where a cache is that small amount of fast but expensive memory where the CPU keeps recent data from the RAM (which is considerably slower than cache), thus avoiding some costly RAM accesses and saving execution time.

In programming, memoization plays a role similar to a hardware cache. It is a technique used to speed up computer programs by saving intermediary answers for later use, rather than recomputing them. If you look at the call tree for fib_rec(5), you can see that many (most!) of the calls may be avoided by saving their results in earlier calls. In fact, there's no real need to compute the Fibonacci number at any index more than once, so five fib_rec calls would do for fib_rec(5), and not 15 as it currently is.

So what should we do in order to memoize the Fibonacci computation? First, we should set up some data structure to serve as a cache of computations. Then, when being asked to compute a Fibonacci number we should first consult the cache. If the result is in the cache, it can be returned without any further computations. If it isn't in the cache - it means we haven't computed it yet, so we compute it and add it to the cache. Let's see how this is translated to code:

long fib_memoized_aux(vector<long>& cache, long index)
{
    if (cache[index] >= 0)
        return cache[index];

    cache[index] = fib_memoized_aux(cache, index - 1) + fib_memoized_aux(cache, index - 2);
    return cache[index];
}

long fib_memoized(long index)
{
    vector<long> cache(index + 1, -1);
    cache[0] = 0;
    cache[1] = 1;

    return fib_memoized_aux(cache, index);
}

Here, fib_memoized acts exactly as the simple fib_rec - it takes an index as an argument and returns the Fibonacci number at this index. Internally, it first sets up a cache (for such a simple task a vector is enough - the ith vector cell holds the computed ith Fibonacci number, with -1 meaning a yet-uncomputed result), and uses fib_memoized_aux as a helper function for the computations.
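The same cache-first pattern reads almost line for line in Python. This is my sketch, not the article's code; it uses a dict as the cache instead of a vector, which also sidesteps having to pre-size the cache:

```python
def fib_memoized(index, cache=None):
    """Memoized Fibonacci: consult the cache first; compute and store on a miss."""
    if cache is None:
        cache = {0: 0, 1: 1}  # base cases seeded up front
    if index not in cache:
        cache[index] = (fib_memoized(index - 1, cache)
                        + fib_memoized(index - 2, cache))
    return cache[index]

# the 47th number, instant here, takes ~47 seconds with the naive recursion
print(fib_memoized(47))  # → 2971215073
```

Passing the cache down through the recursion keeps each index computed exactly once, so the whole run is linear in the index.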
In fib_memoized_aux, you can see memoization in action, just as described above. What about performance, then? While up to about 30, fib_rec and fib_rec_memoized are comparable in execution time, afterwards the difference is staggering. For example, computing the 47th Fibonacci number takes ~47 seconds with fib_rec. With fib_rec_memoized it takes 0 (below resolution of system clock). There's no doubt that the difference gets bigger and bigger after that. There's another major speed-improvement here, which may not be immediately obvious. Imagine that during the runtime of our program, we need to calculate Fibonacci numbers not just once, but many times. While using the plain method we'd go through the computations again and again, using memoization we can just reuse the cache from call to call. Chances are that most of the computations will be answered in constant time - because the result will already be in the cache! The assiduous reader may implement a simple class for Fibonacci number calculation. This class will have the cache as a member, which will be initialized only once. The Fibonacci calculation method will use this cache and update it at times (when yet un-calculated results are requested). Alternative Fibonacci implementations Personally, I don't feel good about the Fibonacci calculation example. Though it's educationally sound, I find it somewhat contrived. This is because there are fast implementations for Fibonacci calculations that don't require memoization. For example, it's very easy to come up with a simple iterative solution. 
Since a Fibonacci number is simply a sum of the previous two, we can use a loop and keep track of just two numbers to generate any Fibonacci:

long fib_iter(long index)
{
    if (index < 2)
        return index;

    long cur = 1;
    long one_back = 0;

    for (long i = 2; i <= index; ++i)
    {
        long temp = cur;
        cur = cur + one_back;
        one_back = temp;
    }

    return cur;
}

This will calculate Fibonacci numbers as fast as the memoized implementation - in linear time. An even faster solution can utilize Binet's Fibonacci number formula:

long fib_binet(long index)
{
    double sqrt_5 = 2.2360679774997896964091736687313;

    return (long) floor((pow(1 + sqrt_5, index) - pow(1 - sqrt_5, index)) /
                        (pow(2, index) * sqrt_5));
}

Don't just sit there and gasp in horror :-) Calculation of Fibonacci numbers is a fascinating topic and you can learn a lot by browsing the web a little - start by Googling for "Fibonacci numbers". I must note, just to be fair, that these fast non-recursive implementations lack the caching-between-calls property of memoization. That is, if we use memoization to save results between function calls (discussed in the last paragraph of the previous section), we can get most results at the cost of a trivial array lookup - faster than the non-recursive implementations. But to be even more fair, huge Fibonacci numbers are rarely needed in practice, and even when they are, the iterative or the formula implementations can provide us with as big numbers as we'll ever need in negligible time. So let's examine another problem, where there is no simple alternative to the recursive solution.

Counting change

As we saw, it isn't hard to come up with a simple iterative Fibonacci algorithm (the same goes for the factorial function, another common example of recursion in programming books and tutorials). In contrast, consider the following problem: How many different ways can we make change of $1.00, given half-dollars, quarters ($0.25), dimes ($0.10), nickels ($0.05), and cents ($0.01)?
More generally, can we design an algorithm to compute the number of ways to change any given amount of money? While at first sight an innocuous problem that might interest supermarket cashiers, this is a close relative of an important algorithm - the subset sum problem (once again, Google can be your guide to enlightenment).

Let's start with an example, to make sure that the problem is understood. In how many ways can we make change from 10 cents? One is ten cents. Two is one nickel and five cents. Three is two nickels. Four is a dime. So, there are four ways.

In fact, this problem has a simple recursive solution. Pick some ordering of the coin kinds. Then the number of ways to change an amount a using n kinds of coins equals the number of ways to change a using all but the first kind of coin, plus the number of ways to change the smaller amount a - d using all n kinds of coins, where d is the denomination of the first kind of coin. Thus, we've found a way to solve the problem by reducing it to two smaller problems (in the first the number of kinds of coins is smaller, and in the second the sum is smaller). This is just what recursion is about - reducing problems to simpler problems. What we're lacking is an explicit solution for the "simplest" problems:

- If the amount a is 0, there's only one way to make change (no coins)
- If the amount a is negative, there is no way to make change
- If n is 0, there is no way to make change (we've run out of kinds of coins)

To take care of the coins ordering, we'll define a helper function:

long first_denomination(long n_kinds_of_coins)
{
    switch (n_kinds_of_coins)
    {
        case 5: return 50;
        case 4: return 25;
        case 3: return 10;
        case 2: return 5;
        case 1: return 1;
        default: assert(0);
    }
}

Given how many coins we can use, this function returns the denomination of the first coin. It sets up the following ordering of the coins - 50, then 25, then 10, then 5, then 1. Now we're ready to implement the change counting procedure itself.
As a true recursive algorithm, it translates into code very naturally:

long count_change_aux(long amount, long n_kinds_of_coins)
{
    if (amount == 0)
        return 1;
    else if (amount < 0 || n_kinds_of_coins == 0)
        return 0;
    else
    {
        return count_change_aux(amount, n_kinds_of_coins - 1) +
               count_change_aux(amount - first_denomination(n_kinds_of_coins),
                                n_kinds_of_coins);
    }
}

long count_change(long amount)
{
    return count_change_aux(amount, 5);
}

count_change is the procedure that is to be called to get an answer, and it uses count_change_aux as a helper function. If you understood the algorithm and the boundary cases, there's really not much left to explain, since the code is just the algorithm "paraphrased" (to be exact, written in another language - C++ instead of English).

On to some benchmarking: count_change answers our original question (how many ways are there to make change of a dollar) in no time - below resolution of system clock (the answer is 292, by the way). However, when we start raising the stakes the runtime grows quickly. It takes 5 seconds for 1000 cents, 2.5 minutes for 2000 and the time soars rapidly on and on. Care to throw a guess at the cause of this slowness? Right - it's the same problem we had with fib_rec - multiple repetitions of the same calculations. To get some intuition of the problem, suppose that we run count_change on 2000 cents. Consider an intermediate sum of 1000 cents. How many ways are there to reach 1000 cents from 2000 cents? Quite a lot... But each time we reach 1000 cents we go on and compute the ways to change 1000 cents, and we saw that it takes 5 seconds each time - so it's not surprising that the runtime grows so quickly.

Contrary to the Fibonacci problem, here we don't have any simple way to formulate a swift iterative algorithm that will complete the same task (if you find one, let me know!). But we'd still like to compute change for large sums in a reasonable time. The solution is memoization.
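Before optimizing, it's worth cross-checking the small answers by brute force. This sketch is mine and is independent of the recursive code: it enumerates every combination of coin counts that could possibly sum to the amount, so it only works for small inputs, but it confirms the "four ways for 10 cents" example.

```python
from itertools import product

def brute_force_ways(amount, coins=(1, 5, 10, 25, 50)):
    """Count coin multisets summing to `amount` by exhaustive enumeration.
    Exponential in the number of coin kinds - for sanity checks only."""
    ways = 0
    ranges = [range(amount // c + 1) for c in coins]
    for counts in product(*ranges):
        if sum(c * n for c, n in zip(coins, counts)) == amount:
            ways += 1
    return ways

print(brute_force_ways(10))  # → 4
```

Having an independent oracle like this makes it safe to refactor the recursive version later, memoized or not.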
Memoized change counting

We will proceed in a manner similar to the memoization of fib_rec: we'd like to keep the results of count_change_aux computations in some cache and return immediately with a cached result when requested to do some computation for the second time. The only slightly problematic point is that we can't just use a simple array for the cache as we did in fib_memoized, since count_change_aux takes two arguments. However, as this code demonstrates, there's really no problem expanding the memoization technique to use multiple arguments.

long count_change_memoized_aux(map<pair<long, long>, long>& cache,
                               long amount, long n_kinds_of_coins)
{
    pair<long, long> entry = make_pair(amount, n_kinds_of_coins);

    if (cache.find(entry) != cache.end())
        return cache[entry];

    if (amount == 0)
        cache[entry] = 1;
    else if (amount < 0 || n_kinds_of_coins == 0)
        cache[entry] = 0;
    else
    {
        cache[entry] = count_change_memoized_aux(cache, amount, n_kinds_of_coins - 1) +
                       count_change_memoized_aux(cache,
                                                 amount - first_denomination(n_kinds_of_coins),
                                                 n_kinds_of_coins);
    }

    return cache[entry];
}

long count_change_memoized(long amount)
{
    map<pair<long, long>, long> cache;
    return count_change_memoized_aux(cache, amount, 5);
}

Note that first_denomination remains the same as in the simple recursive version, so I didn't reproduce it here. Here I use a map as the cache. It maps argument pairs to results: for a pair of (amount, n kinds of coins) the cache holds the number of ways to change this amount with this number of kinds of coins. Except for the different data structure used as a cache, the changes are very similar to the ones in fib_memoized - first of all the cache is consulted and if the desired result is already there it's simply returned. Then, the real calculation is performed and added to the cache. The next time the function runs with the same arguments, the computation will be immediate - simply a fetch from the cache.
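In Python you can get the same multi-argument memoization almost for free with functools.lru_cache, which keys the cache on the full argument tuple. This is my sketch, not the article's code; the denomination table replaces first_denomination (indexed by n_kinds_of_coins - 1, preserving the 50, 25, 10, 5, 1 ordering):

```python
from functools import lru_cache

# denomination of the first coin when n kinds remain, indexed by n - 1
DENOMINATIONS = (1, 5, 10, 25, 50)

@lru_cache(maxsize=None)
def count_change_aux(amount, n_kinds_of_coins):
    """Ways to change `amount` using the first `n_kinds_of_coins` denominations."""
    if amount == 0:
        return 1
    if amount < 0 or n_kinds_of_coins == 0:
        return 0
    return (count_change_aux(amount, n_kinds_of_coins - 1)
            + count_change_aux(amount - DENOMINATIONS[n_kinds_of_coins - 1],
                               n_kinds_of_coins))

def count_change(amount):
    return count_change_aux(amount, 5)

print(count_change(100))  # → 292
```

The decorator does exactly what the hand-rolled map does in the C++ version: check the cache keyed on (amount, n_kinds_of_coins), and only fall through to the real computation on a miss.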
And indeed, benchmarking shows considerable speed improvement. Change from 2000 cents now takes only 0.02 seconds to compute (vs. 2.5 minutes for the non-memoized version). Change from 5000 takes 0.06 seconds (I gave up waiting for the non-memoized version to finish this calculation). The runtime of the memoized version increases linearly with the size of the problem - as expected.

Wrapping up

In this article you've been introduced to memoization - an important programming technique used to speed up computations. In particular, memoization often allows one to considerably improve the runtime of a crawling recursive algorithm that may be just the right solution for a problem but is too slow to use. You learned about the inherent slowness in some recursive computations due to repetitions, and saw how to use memoization to eliminate this problem.

You probably noticed the similarity between memoizing the Fibonacci implementation and memoizing the change counting algorithm. Indeed, memoization is quite simple to apply automatically, if the programming language allows it. For example, in Lisp, where functions are data just like any other data, memoizing an arbitrary function is trivial. In Perl it is a little bit trickier, but there exists an excellent module called Memoize that will automatically memoize any function for you. As far as I know there is no simple way to achieve this in C++.
http://www.gamedev.net/page/resources/_/technical/general-programming/algorithmic-forays-part-7-r2198?forceDownload=1&_k=880ea6a14ea49e853634fbdc5015a024
Saving with Serverless

Side projects are especially awesome when they help you solve a real life problem. During a recent problem-solving expedition I encountered a road block that I'd bet prevents a lot of us developers from finishing our side projects: just because a solution is possible, doesn't mean it's affordable. If this is sounding familiar, you probably know that infrastructure can be pretty cost prohibitive, both in time and money. Here's a look at how I used the serverless framework to ship my side project without any additional cost.

The Problem

When I first moved to the San Francisco bay area I ran into an interesting issue: people here actually go to baseball games (I'm from Tampa, Florida, where people only attend baseball games when the home team plays a better team). On game days, my commute between San Francisco and Oakland was nuts. Things were even crazier when the Oakland Athletics and the San Francisco Giants played against each other in the Battle of the Bay series. During those weeks, public transportation newbs rode the train system all day acting confused while taking up as much space as possible. I realized I could make my commute easier by answering the question, 'When is baseball?'

So, when is baseball? I built an app called Hustlin that knows when there is a home game and sends notifications at the start and anticipated end times. I generated the project really quickly using Ruby on Rails and deployed to production using Heroku. This was amazing until I realized I had to pay $7 a month to keep the thing up and running during the 7-month long season. I give the app some love every new baseball season, so I eventually rebuilt it to use the JAMstack, separating the API from the markup. The frontend was easily hosted on Netlify for free as a React application, but I wanted to find something just as free to host my API. The solutions I came across were going to either cost more in money or time to set up and maintain.
The API was costing too much just to optimize my commute. I hosted David Wells from the Serverless team on an episode of JAMstack Radio and discovered everything I did could be done with Serverless and hosted for free on AWS. Plus, AWS's Lambda gives you 1 million invocations of functions for free. If you are not familiar with serverless, or Functions-as-a-Service (FaaS), these are functions that execute on demand in a matter of milliseconds. Their use can vary from small automated tasks to replacing large processes in a devops pipeline, with very few limitations.

The switch to Serverless

AWS is one of the providers the Serverless Framework works with out of the box, and the CLI made it easy to try out. My simple JSON for home baseball games fits nicely in an AWS-provided DynamoDB table. To get started I used the CLI to deploy the node templates.

# Create a new Serverless Service/Project
$ serverless create --template aws-nodejs --path serverless-hustl

# Change into the newly created directory
$ cd serverless-hustl

After creating the boilerplate I created a seed function to move my existing JSON to a DynamoDB table. This was significantly less code than my previous version of Hustlin: 102 lines of code, to be exact.

// function that seeds the DynamoDB table
module.exports.seed = (event, context, callback) => {
  baseballs.forEach((data) => {
    const {name, start_time, end_time, started, standard_start_time} = data;
    const item = {
      id: `${data.id}`,
      name,
      start_time,
      end_time,
      started: `${started}`,
      standard_start_time
    };

    dynamodb.put({TableName: 'slshustl', Item: item}, (err) => {
      if (err) {
        callback(err);
      }

      const response = {
        statusCode: 201,
        headers,
      }
      callback(null, response);
    });
  })
}

I created a second function, called today, that returns all the games happening today with start time and location information.
// function that returns all games happening today
module.exports.today = (event, context, callback) => {
  const params = {
    TableName: 'slshustl',
  }

  dynamodb.scan(params, (err, data) => {
    if (err) {
      callback(err);
    }

    const todaysGames = data.Items.filter(isToday)

    const response = {
      statusCode: 200,
      headers,
      body: JSON.stringify({
        count: todaysGames.length,
        data: todaysGames
      })
    };
    callback(null, response);
  });

  function isToday(game) {
    const today = new Date();
    const gameTime = new Date(game.start_time);
    return (today.toDateString() == gameTime.toDateString());
  }
}

After creating my seed and today functions I exposed them as endpoints in the serverless.yml. The original version cost me $7/month for the convenience of simple deployments; that same simplicity is why I never attempted to host my project elsewhere until now. The Serverless framework handles all the complication of setting up API Gateway, updating DynamoDB, and deploying my Lambda functions.

# serverless.yml
service: slshustl

provider:
  name: aws
  runtime: nodejs6.10
  iamRoleStatements:
    - Effect: "Allow"
      Action:
        - "dynamodb:*"
      Resource: "arn:aws:dynamodb:*:*:table/slshustl"

functions:
  seed:
    handler: handler.seed
    description: seed dynamodb table with baseball games
    events:
      - http:
          path: seed
          method: post
  today:
    handler: handler.today
    description: return just baseball games today
    events:
      - http:
          path: today
          method: get
          cors: true

resources:
  Resources:
    DynamoDbTable:
      Type: 'AWS::DynamoDB::Table'
      Properties:
        AttributeDefinitions:
          - AttributeName: id
            AttributeType: S
        KeySchema:
          - AttributeName: id
            KeyType: HASH
        ProvisionedThroughput:
          ReadCapacityUnits: 1
          WriteCapacityUnits: 1
        TableName: slshustl

If you are interested in taking a closer look at the code check out bdougie/serverless-hustl. I have completely switched to DynamoDB to store my baseball game JSON data. I also used cron jobs to send notifications.
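The filter inside today boils down to one question: does a game's start_time fall on the current calendar date? JavaScript's toDateString comparison is equivalent to comparing dates with the time stripped off. Here's the same check sketched in Python (the function name, the sample game records, and the ISO timestamp format are my own, not from the Hustlin codebase):

```python
from datetime import datetime

def is_today(game, now=None):
    """True if the game's start_time falls on the same calendar date as `now`."""
    now = now or datetime.now()
    start = datetime.fromisoformat(game["start_time"])
    return start.date() == now.date()

games = [
    {"name": "A's vs Giants", "start_time": "2018-04-03T19:05:00"},
    {"name": "A's vs Rangers", "start_time": "2018-04-05T12:35:00"},
]
now = datetime(2018, 4, 3, 9, 0)
todays = [g["name"] for g in games if is_today(g, now)]
print(todays)  # → ["A's vs Giants"]
```

Passing `now` in explicitly keeps the function testable; in a real handler you'd omit it and let it default to the current time.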
I leveraged the aws-node-scheduled-cron example repo to trigger my notifications, which is live at bdougie/scheduled-hustlin-notifications. After reading through the Serverless documentation, as well as some heavy copy and pasting, I was able to mirror what I was getting from my expensive Postgres database with simple JSON in a DynamoDB table.

Profit

This switch has saved me 100% of the $84 a year I was paying previously. Now that I am saving on time and money, I can start working on making this project provide real time notifications during baseball games and really see if that 1 million invocations can be achieved. If you have interest in this project, please keep an eye open for notifications at hustlin.netlify.com for the 2018 season.
https://www.netlify.com/blog/2017/12/20/saving-with-serverless/
import "github.com/dropbox/godropbox/cinterop/lib"

This software package is designed to help interop between legacy C programs and Go programs. If a primarily C program needs to call a utility function that is only available in Go, this tool can assist with interop. In essence this package lets you call Go functions from C if the Go functions take in simple byte arrays as input. See the docs in the toplevel for more detailed information.

Updated 2016-11-30. This is an inactive package (no imports and no commits in at least two years).
https://godoc.org/github.com/dropbox/godropbox/cinterop/lib
# __name__=='pymol.helping':

import string
import thread
import cmd
from cmd import DEFAULT_ERROR, DEFAULT_SUCCESS, _raising, is_ok, is_error

def show_help(cmmd, _self=cmd):  # INTERNAL
    print "PyMOL>help %s" % cmmd
    help(cmmd)
    if _self.get_setting_legacy("internal_feedback") > 0.1:
        print "(Hit ESC to hide)"

def python_help(*arg):
    r'''
DESCRIPTION

    You have asked for help on a Python keyword which is available
    from within the PyMOL command language. Please consult the
    official Python documentation at for detailed information on
    Python keywords.

    You may include Python blocks in your PyMOL command scripts, but
    do note that multi-line blocks of Python in PyMOL command files
    will require explicit continuation syntax in order to execute
    properly (see below).

    Generally, if you want to write a Python block which spans
    multiple lines, you will want to use a ".py" file, and then use
    "extend" in order to expose your new code to the PyMOL command
    language. This will give you better error checking and more
    predictable results.

EXAMPLES

    a=1
    while a<10: \
        print a \
        a=a+1

SEE ALSO

    extend, run, @
    '''
    return None

def help(command="commands", _self=cmd):
    '''
DESCRIPTION

    "help" prints out the online help for a given command.

USAGE

    help command
    '''
    r = DEFAULT_SUCCESS
    # if cmd.get_setting_legacy("internal_feedback") > 0.1:
    #     cmd.set("text", "1", quiet=1)
    cmmd = _self.help_sc.auto_err(command, 'topic')
    if _self.keyword.has_key(cmmd):
        doc = _self.keyword[cmmd][0].__doc__
        if doc:
            print "\n", string.strip(doc), "\n"
        else:
            print "Error: sorry no help available on that command."
    elif _self.help_only.has_key(cmmd):
        doc = _self.help_only[cmmd][0].__doc__
        if doc:
            print "\n", string.strip(doc), "\n"
        else:
            print "Error: sorry no help available on that command."
    else:
        print "Error: unrecognized command"
    return r

def commands(_self=cmd):
    '''", "@".
    '''
    _self.help('commands')

def editing(_self=cmd):
    '''
SUMMARY

    PyMOL has a rudimentary, but quite functional, molecular
    structure editing capability.
    However, you will need to use an external minimizer to "clean up"
    your structures after editing. Furthermore, if you are going to
    modify molecules other than proteins, then you will also need a
    way of assigning atom types on the fly.

    To edit a conformation or structure, you first need to enter
    editing mode (see Mouse Menu). Then you can pick an atom
    (CTRL-Middle click) or a bond (CTRL-Right click). Next, you can
    use the other CTRL-key/click combinations listed on the right
    hand side of the screen to adjust the attached fragments. For
    example, CTRL-left click will move fragments about the selected
    torsion.

    Editing structures is done through a series of CTRL key actions
    applied to the currently selected atom or bonds. See "help
    edit_keys" for the exact combinations.

    To build structures, you usually just replace hydrogens with
    methyl groups, etc., and then repeat. There are no short-cuts
    currently available for building common groups, but that is
    planned for later versions.

NOTE

    Only "lines" and "sticks" representations can be picked using the
    mouse; however, other representations will not interfere with
    picking so long as one of these representations is present
    underneath.
    '''
    _self.help('editing')

def release(_self=cmd):
    '''
RELEASE NOTES

    PyMOL is a free, open, and expandable molecular graphics system
    written by computational scientists to enable molecular modeling
    from directly within Python. It will be of most benefit to hybrid
    scientist/developers in the fields of structural biology,
    computational chemistry, and informatics who seek an open and
    unrestricted visualization tool for interfacing with their own
    programs. PyMOL will also be of benefit to advanced
    non-developers familiar with similar programs such as Midas, O,
    Grasp, X-PLOR and CNS.

    PyMOL currently includes a diverse command language, a powerful
    application programming interface (API), and a variety of mouse-
    and keyboard-driven functionality for viewing, animation,
    rendering, and molecular editing.
    A partial manual is now available on the web.

    Two external GUI development options are supported for PyMOL:
    "Tkinter" and "wxPython". Developers can take their pick. Note
    that only Tkinter is supported under Windows with the default
    PyMOL and Python distributions, so for maximum ease of
    installation under Windows, stick with Tkinter (Tcl/Tk). For this
    reason, the Tkinter-based GUI is going to be the default GUI for
    standard PyMOL despite its drawbacks.

    Warren L. DeLano (5/1/2001), warren@delanoscientific.com
    Jason Vertrees (3/7/2011), jason.vertrees@schrodinger.com (update)
    '''
    _self.help('release')

def edit_keys(_self=cmd):
    '''
EDITING KEYS

    These are defaults, which can be redefined. Note that while
    entering text on the command line, some of these control keys
    take on text editing functions instead (CTRL-A, E, and K, and
    DELETE), so you should clear the command line before trying to
    edit atoms.

ATOM REPLACEMENT

    CTRL-C    Replace picked atom with carbon   (C)
    CTRL-N    Replace picked atom with nitrogen (N)
    CTRL-O    Replace picked atom with oxygen   (O)
    CTRL-S    Replace picked atom with sulfur   (S)
    CTRL-G    Replace picked atom with hydrogen (H)
    CTRL-F    Replace picked atom with fluorine (F)
    CTRL-L    Replace picked atom with chlorine (Cl)
    CTRL-B    Replace picked atom with bromine  (Br)
    CTRL-I    Replace picked atom with iodine   (I)

ATOM MODIFICATION

    CTRL-J    Set charge on picked atom to -1
    CTRL-K    Set charge on picked atom to +1
    CTRL-D    Remove atom or bond (DELETE works too).
    CTRL-Y    Add a hydrogen to the current atom
    CTRL-R    Adjust hydrogens on atom/bond to match valence.
    CTRL-E    Inverts the picked stereo center, but you must first
              indicate the constant portions with the (lb) and (rb)
              selections.
    CTRL-T    Connect atoms in the (lb) and (rb) selections.
    CTRL-W    Cycle the bond valence on the picked bond.

UNDO and REDO of conformational changes (not atom changes!)

    CTRL-Z    Undo the previous conformational change.
              (You cannot currently undo atom modifications.)
    CTRL-A    Redo the previous conformational change.
    '''
    _self.help('edit_keys')

def at_sign(_self=cmd):
    '''
DESCRIPTION

    "@" sources a PyMOL command script as if all of the commands in
    the file were typed into the PyMOL command line.

USAGE

    @ <script-file>

PYMOL API

    Not directly available. Instead, use cmd.do("@...").
    '''
    _self.help(at_sign)

def run(_self=cmd):
    '''
DESCRIPTION

    "run" executes an external Python script in a local name space,
    the main Python namespace, the global PyMOL namespace, or in its
    own namespace (as a module).

USAGE

    run file [, namespace ]

ARGUMENTS

    file = string: a Python program, typically ending in .py or .pym.

    namespace = local, global, module, main, or private

PYMOL API

    Not directly available. Instead, use cmd.do("run ...").

NOTES

    The default mode for run is "global".

    Due to an idiosyncrasy in Pickle, you cannot pickle objects
    directly created at the main level in a script run as "module"
    (because the pickled object becomes dependent on that module).
    Workaround: delegate construction to an imported module.
    '''
    _self.help(run)

def spawn(_self=cmd):
    '''
DESCRIPTION

    "spawn" launches a Python script in a new thread which will run
    concurrently with the PyMOL interpreter. It can be run in its own
    namespace (like a Python module, default), a local name space, or
    in the global namespace.

USAGE

    run python-script [, ( local | global | module | main | private )]

PYMOL API

    Not directly available. Instead, use cmd.do("spawn ...").

NOTES

    The default mode for spawn is "module".

    Due to an idiosyncrasy in Pickle, you cannot pickle objects
    directly created at the main level in a script run as "module"
    (because the pickled object becomes dependent on that module).
    Workaround: delegate construction to an imported module.

    The best way to spawn processes at startup is to use the -l
    option (see "help launching").
    '''
    _self.help(spawn)

def api(_self=cmd):
    '''
DESCRIPTION

    The PyMOL Python Application Programming Interface (API) should
    be accessed exclusively through the "cmd" module (never "_cmd"!).
    Nearly all command-line functions have a corresponding API
    method.

USAGE

    from pymol import cmd
    result = cmd.<command-name>( argument , ... )

NOTES

    Although the PyMOL core is not multi-threaded, the API is
    thread-safe and can be called asynchronously by external Python
    programs. PyMOL handles the necessary locking to ensure that
    internal states do not get corrupted. This makes it very easy to
    build complicated systems which involve direct realtime
    visualization.
    '''
    _self.help('api')

def keyboard(_self=cmd):
    '''
KEYBOARD COMMANDS and MODIFIERS

    ESC            Toggle onscreen text.
    INSERT         Toggle rocking.

    LEFT ARROW, RIGHT ARROW    Go backward or forward one frame, or,
                   when editing, go forward or back one character.
    HOME, END      Go to the beginning or end of a movie.

Command Entry Field in the Internal GUI (black window)

    TAB            Complete command or filename (like in tcsh or bash).
    CTRL-A         Go to the beginning of the line.
    CTRL-E         Go to the end of the line.
    CTRL-K         Delete through to the end of the line.

Command Entry Field on the External GUI (gray window)

    CTRL-C         These operating system-provided cut and paste
    CTRL-V         functions will only work in the external GUI
                   command line.

EDITING

    Type "help edit_keys" for keyboard shortcuts used in editing.
    '''
    _self.help('keyboard')

def transparency(_self=cmd):
    '''
TRANSPARENCY

    As of version 0.68, transparent surfaces are supported in the
    realtime (OpenGL) rendering mode as well as in ray-traced images.

    Transparency is currently managed by setting either the global
    transparency variable or one attached to an individual molecule
    object. It isn't yet possible to control transparency on a
    per-atom basis.
EXAMPLES

    set transparency=0.5        # makes all surfaces 50% transparent
    set transparency=0.5, mol3  # makes only mol3's surface transparent
    '''
    cmd.help('transparency')

def mouse(_self=cmd):
    '''
MOUSE CONTROLS

    The configuration can be changed using the "Mouse" menu. The
    current configuration is described on screen with a small matrix
    on the lower right hand corner, using the following
    abbreviations:

Buttons (Horizontal Axis)

    L = left mouse click
    M = middle mouse click
    R = right mouse click

Modifiers (Vertical axis on the matrix)

    None = no keys held down while clicking
    Shft = hold SHIFT down while clicking
    Ctrl = hold CTRL down while clicking
    CtSh = hold both SHIFT and CTRL down while clicking

Visualization Functions

    Rota = Rotates camera about X, Y, and Z axes
    RotZ = Rotates camera about the Z axis
    Move = Translates along the X and Y axes
    MovZ = Translates along Z axis
    Clip = Y motion moves the near clipping plane while
    PkAt = Pick an atom
    PkBd = Pick a bond
    Orig = Move origin to selected atom
    +lb  = Add an atom into the (lb) selection
    lb   = Define the (lb) selection with the indicated atom.
    rb   = Define the (rb) selection with the indicated atom.

Editing Functions

    RotF = Rotate fragment
    MovF = Move fragment
    TorF = Torsion fragment
    '''
    _self.help('mouse')

def examples(_self=cmd):
    '''
EXAMPLE ATOM SELECTIONS

    select bk = ( name ca or name c or name n )
       * can be abbreviated as *
    sel bk = (n;ca,c,n)

    select hev = ( not hydro )
       * can be abbreviated as *
    sel hev = (!h;)

    select site = ( byres ( resi 45:52 expand 5 ))
       * can be abbreviated as *
    sel site = (b;(i;45:52 x;5))

    select combi = ( hev and not site )
       * can be abbreviated as *
    sel combi = (hev&!site)
    '''
    _self.help('examples')

def launching(_self=cmd):
    '''
PyMOL COMMAND LINE OPTIONS

    -c   Command line mode, no GUI. For batch operations.
    -i   Disable the internal OpenGL GUI (object list, menus, etc.)
    -x   Disable the external GUI module.
    -t   Use Tcl/Tk based external GUI module (pmg_tk).
    -q   Quiet launch. Suppress splash screen & other chatter.
    -p   Listen for commands on standard input.
    -e   Start in full-screen mode.
    -2   Start in two-button mouse mode.
    -o   Disable security protections for session files.
    -R   Launch Greg Landrum's XMLRPC listener.
    -B   Enable blue-line stereo signal (for Mac stereo)
    -G   Start in Game mode.
    -S   Force and launch in stereo, if possible.
    -M   Force mono even when hardware stereo is present.
    -X <int> -Y <int> -W <int> -H <int> -V <int>
         Adjust window geometry.
    -f <# line>    Controls display of commands and feedback in
                   OpenGL (0=off).
    -r <file.py>   Run a Python program (in __main__) on startup.
    -l <file.py>   Spawn a Python program in a new thread.
    -d <string>    Run pymol command string upon startup.
    -u <script>    Load and append to this PyMOL script or program file.
    -s <script>    Save commands to this PyMOL script or program file.
    -g <file.png>  Write a PNG file (after evaluating previous
                   arguments)

    <file> can have one of the following extensions, and all files
    provided will be loaded or run after PyMOL starts.

    .pml    PyMOL command script to be run on startup
    .py, .pym, .pyc    Python program to be run on startup
    .pdb    Protein Data Bank format file to be loaded on startup
    .mmod   Macromodel format to be loaded on startup
    .mol    MDL MOL file to be loaded on startup
    .sdf    MDL SD file to be parsed and loaded on startup
    .xplor  X-PLOR Map file (ASCII) to be loaded on startup
    .ccp4   CCP4 map file (BINARY) to be loaded on startup
    .cc1, .cc2    ChemDraw 3D cartesian coordinate file
    .pkl    Pickled ChemPy Model (class "chempy.model.Indexed")
    .r3d    Raster3D file
    .cex    CEX file (Metaphorics)
    .top    AMBER topology file
    .crd    AMBER coordinate file
    .rst    AMBER restart file
    .trj    AMBER trajectory
    .pse    PyMOL session file
    .phi    Delphi/Grasp Electrostatic Potential Map
    '''
    _self.help('launching')

def movies(_self=cmd):
    '''
MOVIES

    To create a movie, simply load multiple coordinate files into the
    same object. This can be accomplished at the command line, using
    script files, or by writing PyMOL API-based programs.
    The commands:

        load frame001.pdb, mov
        load frame002.pdb, mov

    will create a two-frame movie. So will the following program:

        from pymol import cmd
        for a in ( "frame001.pdb", "frame002.pdb" ):
            cmd.load(a, "mov")

    which can be executed at the command line using the "run"
    command. Python's built-in glob module can be useful for loading
    movies:

        from pymol import cmd
        import glob
        for a in ( glob.glob("frame*.pdb") ):
            cmd.load(a, "mov")

NOTE

    Because PyMOL stores all movie frames in memory, there is a
    practical limit to the number of atoms in all coordinate files.
    160 MB of free RAM enables 500,000 atoms with line
    representations. Complex representations require significantly
    more memory.
    '''
    _self.help('movies')

### -------------------------------------------------------------------

def selections(_self=cmd):
    '''
DESCRIPTION

    Selections are enclosed in parentheses and contain predicates,
    logical operations, object names, selection names and nested
    parentheses: ( [... [(...) ... ]] )

    name <atom names>              n. <atom names>
    resn <residue names>           r. <residue names>
    resi <residue identifiers>     i. <residue identifiers>
    chain <chain ID>               c. <chain identifiers>
    segi <segment identifiers>     s. <segment identifiers>
    elem <element symbol>          e. <element symbols>
    flag <number>                  f. <number>
    alt <code>
    numeric_type <numeric type>    nt. <numeric type>
    text_type <text type>          tt. <text type>
    b <operator> <value>
    q <operator> <value>
    formal_charge <op> <value>     fc. <operator> <value>
    partial_charge <op> <value>    pc. <operator> <value>
    id <original-index>
    hydrogen                       h.
    all                            *
    visible                        v.
    hetatm

    <selection> and <selection>    <selection> & <selection>
    <selection> or <selection>     <selection> | <selection>
    not <selection>                ! <selection>
    byres <selection>              br. <selection>
    byobj <selection>              bo. <selection>
    around <distance>              a. <distance>
    expand <distance>              e. <distance>
    gap <distance>
    in <selection>
    like <selection>               l. <selection>
    <selection> within <distance> of <selection>
                                   <selection> w.
    <distance> of <selection>
    '''
    _self.help('selections')

def povray(_self=cmd):
    '''
DESCRIPTION

    PovRay: Persistence of Vision Support Information

    The built-in ray-tracer (technically, a ray-caster) is as fast or
    faster than PovRay for many figures (provided that hash_max is
    tuned appropriately for your content). However, PovRay blows
    PyMOL away when it comes to rendering images without using lots
    of RAM, and with PovRay you get the ability to use perspective,
    textures, reflections, infinite objects, and a superior lighting
    model.

    Assuming that PovRay is built and in your path...

    ray renderer=1   # will use PovRay instead of the built-in engine

    set ray_default_renderer=1   # changes the default renderer to PovRay
    ray                          # will now use PovRay by default

    cmd.get_povray()   # will give you a tuple of PovRay input strings
                       # which you can manipulate from Python
    '''
    _self.help('povray')

def stereochemistry(_self=cmd):
    """
PYMOL STEREOCHEMISTRY

    PyMOL can label chiral centers; however, due to the recursive and
    dependent nature of the determination, PyMOL will refuse to label
    structures with alternate coordinates.

    To determine stereochemistry, PyMOL labels chiral centers using
    the IUPAC symbols 'R' for rectus, 'S' for sinister, 'r' for
    pseudoasymmetric rectus and 's' for pseudoasymmetric sinister.

SEE ALSO

    label, select
    """
    help('stereochemistry')

def text_type(_self=cmd):
    """
PYMOL ATOM TYPING

    PyMOL can label atom types with the formats mol2/sybyl or
    macromodel/mmd. The global setting atom_type_format is used to
    determine which type is labelled. Due to the recursive and
    dependent nature of the determination, PyMOL will refuse to label
    structures with alternate coordinates.

SEE ALSO

    label, select
    """
    help('text_type')

def faster(_self=cmd):
    '''
RAY TRACING OPTIMIZATION

    1. Reduce object complexity to a minimum acceptable level. For
       example, try lowering "cartoon_sampling", "ribbon_sampling",
       and "surface_quality", as appropriate.

    2.
       Increase "hash_max" so as to obtain voxel dimensions of
       0.3-0.6. Proper tuning of "hash_max" can speed up rendering by
       a factor of 2-5X for non-trivial scenes.

       WARNING: memory usage depends on hash_max^3, so avoid pushing
       into virtual memory. Roughly speaking:

          hash_max = 80  --> ~9 MB hash + data
          hash_max = 160 --> ~72 MB hash + data
          hash_max = 240 --> ~243 MB hash + data

       Avoid utilizing virtual memory for the voxel hash; it will
       slow things way down.

    3. Recompiling with optimizations on usually gives a 25-33%
       performance boost for ray tracing.
    '''
    help('faster')

def abort(_self=cmd):
    '''
DESCRIPTION

    "abort" abruptly terminates execution of the PyMOL command script
    without executing any additional commands.

SEE ALSO

    embed, skip, python
    '''
    return None

def skip(_self=cmd):
    '''
DESCRIPTION

    "skip" delimits a block of commands that are skipped instead of
    being executed.

EXAMPLE

    skip
        # the following command will not be executed
        color blue, all
    skip end

NOTES

    If the "skip" command is commented out, the subsequent "skip end"
    can be left in place, and it will have no effect upon execution
    of subsequent commands.

SEE ALSO

    abort, embed, python
    '''
    return None

def python(_self=cmd):
    '''
DESCRIPTION

    "python" delimits a block of literal Python code embedded in a
    PyMOL command script.

EXAMPLE

    python
    for a in range(1,10):
        b = 10 - a
        print a, b
    python end

NOTES

    Literal Python blocks avoid the annoying requirement of having to
    use explicit line continuation markers for multi-line Python
    commands embedded within PyMOL command files.

SEE ALSO

    abort, embed, skip
    '''
    return None

def embed(_self=cmd):
    '''
DESCRIPTION

    "embed" delimits a block of data embedded in a PyMOL command
    script.
USAGE

    embed key [, type [, sentinel ]]

ARGUMENTS

    key = string: unique identifier for the data

    type = pdb, mol, mol2, sdf, xplor

    sentinel = string: a unique string signalling the end of the data
    {default: embed end}

EXAMPLE

    embed wats, pdb
    HETATM    1  O   WAT     1       2.573  -1.034  -1.721
    HETATM    2  H1  WAT     1       2.493  -1.949  -1.992
    HETATM    3  H2  WAT     1       2.160  -0.537  -2.427
    HETATM    4  O   WAT     2       0.705   0.744   0.160
    HETATM    5  H1  WAT     2      -0.071   0.264   0.450
    HETATM    6  H2  WAT     2       1.356   0.064  -0.014
    embed end

NOTES

    Only text data formats can be used with embed.

SEE ALSO

    abort, skip, python
    '''
    return None
http://pymol.sourcearchive.com/documentation/1.4.1/helping_8py_source.html
04 December 2007 17:20 [Source: ICIS news]

By Nigel Davis

LONDON (ICIS news)--In a year in which Dow Chemical strategically has promised so much and delivered so little, the company has fallen back on another series of cutbacks to help deliver greater competitiveness.

The measures announced on Tuesday represent a mixed bag. Implemented at any other time, in a piecemeal fashion, they would make little external impact. But Dow seems to have decided that, in the absence of any other concrete strategic progress, they will help draw attention to the fact that the world's second largest chemicals maker means business.

“Our focus on financial discipline and low cost to serve remains as sharp as ever,” chief executive Andrew Liveris said in a press statement.

“We will continue to seek ways to refine our organisational structure, asset base and business portfolio to ensure Dow’s competitiveness on the world stage,” he added.

Liveris has promised so much this year in terms of the company’s so-called ‘asset light’ strategy and its desire to create more market-focused businesses that the restructuring package does not look as significant as it otherwise might.

Dow has been looking to create a foothold in the important

The progress of plans for the Middle East and

Indeed, the real excitement surrounding Dow kicked off in April with the dismissal of two senior executives over allegations that they were in discussions with third parties about the possible break-up of the company.

The M&A (merger and acquisition) focus has drifted since the global credit crunch hit in mid-year. Yet analysts still see a range of companies that might interest Dow itself which, at the peak of the chemicals cycle, has been generating large amounts of cash. The company said earlier this year it was looking at 60 possible acquisitions, joint ventures or divestments.
But it postponed an early November investor meeting until early 2008, according to a company spokesman, to make better use of everyone’s time. Analysts don’t think a deal is imminent, but they are keeping a close watch on Dow.

Joint venture partners for the commodity business could include Petrochemical Industries Company (PIC) in

A good-fit ‘specialty’ company might be Celanese, Cytec, Valspar or Ciba Specialty Chemicals, Citigroup adds. Dow has been linked rather obliquely to the industrial gases players Air Products and Air Liquide.

Senior management knows how difficult it is proving to be to strike the right deal or deals to put the chemical giant’s asset-intensive businesses into the sort of joint ventures that will benefit shareholders. Dow now has a handful of market-facing business platforms but needs more.

A Dow spokesman said that although advancement of Dow’s asset light strategy may not be as fast as some would like, progress is being made and 2007 will be looked on as a good year. "We are still actively moving ahead on a number of fronts, including joint ventures and acquisitions,” he said.

The current range of measures illustrates more of the sort of work in progress that the company needs to undertake as the centre of gravity of key markets and businesses moves towards the Middle East and Asia. In some businesses Dow needs to cut back. In other functions it needs to consolidate and, by doing so, move. Dow’s asset base will follow this trend, although the speed at which it does so is reliant on a number of factors.

Dow would like the world to think it is making
http://www.icis.com/Articles/2007/12/04/9084127/insight-dow-cuts-further-but-deal-remains-elusive.html
In Chapter 5 of his book, Flexible Rails, Peter A. leads his readers through an exercise to override the behavior of to_xml() in ActiveRecord objects so that :dasherize would be false by default. I found his solution inelegant. I came up with a more elegant one, and it seems to work. All I did was add the following to the very end of myapp/config/environment.rb:

class ActiveRecord::Base
  def to_xml(options = nil, &block)
    super(options == nil ? { :dasherize => false } : options.merge(:dasherize => false), &block)
  end
end

But I don’t know why it works. Specifically, how come I can invoke “super” even though I am not subclassing ActiveRecord::Base, and ActiveRecord::Base’s parent class (which I believe is Object) doesn’t implement to_xml()? Can someone who is more enlightened about Ruby/Rails help me out? Thanks!
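The short answer is that `super` does not only look at the superclass: it walks the full ancestor chain, which includes mixed-in modules. Rails defines `to_xml` in a module included into `ActiveRecord::Base`, so a method defined directly on the reopened class shadows the module's version, and `super` dispatches up to it. A minimal, self-contained sketch of that mechanism (the module and class names here are stand-ins, not the actual Rails internals):

```ruby
# Stand-in for the Rails module that provides to_xml via `include`.
module Serialization
  def to_xml(options = {})
    "serialized with #{options.inspect}"
  end
end

class Base
  include Serialization   # module sits above Base's own methods in the chain
end

# Reopening the class (as environment.rb does) defines to_xml directly on
# Base, shadowing the module's version; `super` dispatches up to the module.
class Base
  def to_xml(options = {}, &block)
    super(options.merge(:dasherize => false), &block)
  end
end

puts Base.new.to_xml
```

`Base.ancestors` would show `[Base, Serialization, Object, ...]`: the reopened method lives on `Base` itself, one step below `Serialization`, which is why `super` has somewhere to go even though `Object` defines no `to_xml`.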
https://www.ruby-forum.com/t/overriding-to-xml-why-does-this-work/135494
An unofficial python wrapper for the SBB api.

Project description

pySBB

This is an unofficial python wrapper for the SBB API. SBB stands for "Schweizerische Bundesbahnen" (the Swiss Federal Railways).

Installation

pip3 install pySBB

Usage

This package lets you access the SBB api easily. Here is how to use it:

Get Connections

It is very simple to get connections between two stations:

import pySBB

connections = pySBB.get_connections("Zürich", "Bern")
for c in connections:
    print(c)

Example Output:

Zürich HB (18:32, Plat. 32) -> Bern (19:28, Plat. 32) | 56min
Zürich HB (19:02, Plat. 31) -> Bern (19:58, Plat. 31) | 56min
Zürich HB (19:32, Plat. 32) -> Bern (20:28, Plat. 32) | 56min
Zürich HB (20:02, Plat. 31) -> Bern (20:58, Plat. 31) | 56min

Further parameters (see connections for more info):
- via: Specifies up to five via locations.
- date: Date of the connection, in the format YYYY-MM-DD
- time: Time of the connection, in the format hh:mm
- isArrivalTime: Defaults to False; if set to True the passed date and time is the arrival time
- transportations: Transportation means; one or more of train, tram, ship, bus, cableway
- limit: 1 - 16. Specifies the number of connections to return. If several connections depart at the same time they are counted as 1.
- page: 0 - 3. Allows pagination of connections. Zero-based, so first page is 0, second is 1, third is 2 and so on.
- direct: defaults to False; if set to True only direct connections are allowed
- sleeper: defaults to False; if set to True only night trains containing beds are allowed, implies direct=True
- couchette: defaults to False; if set to True only night trains containing couchettes are allowed, implies direct=True
- bike: defaults to False; if set to True only trains allowing the transport of bicycles are allowed
- accessibility: Possible values are independent_boarding, assisted_boarding, and advanced_notice

Get Locations

The api allows you to find locations such as train stations, addresses and other points of interest (eg.
Clock Tower or China Garden):

import pySBB

locations = pySBB.get_locations(query="Lidostrasse 5 Luzern")
for l in locations:
    print(l)

Example Output:

Luzern, Lidostr. 5
Verkehrshaus der Schweiz, Luzern, Lidostr. 5
Restaurant Piccard im Verkehrshaus der Schweiz, Luzern, Lidostr. 5
...

Further parameters (see locations for more info):
- query: Specifies the location name to search for
- x: Latitude
- y: Longitude
- type: Only with query parameter. Specifies the location type; possible types are:
  - all (default): Looks up all types of locations
  - station: Looks up stations (eg. train station, bus station)
  - poi: Looks up points of interest (eg. Clock tower, China garden)
  - address: Looks up an address (eg. Zurich Bahnhofstrasse 33)

Get Stationboards

Stationboards are the big blue boards that can be seen at train stations. These are also available via the api:

import pySBB

entries = pySBB.get_stationboard("Lugano")
for e in entries:
    print(e)

Example Output:

Lugano (18:51, Plat. 2) -> Chiasso
Lugano (18:55, Plat. 4) -> Bellinzona
Lugano (19:05, Plat. 2) -> Chiasso
Lugano (19:22, Plat. 2) -> Monza
Lugano (19:25, Plat. 4) -> Bellinzona
...

Further parameters (see stationboard for more info):
- id: The id of the station whose stationboard should be returned. Overwrites the station parameter.
- limit: Number of departing connections to return.
- transportations: Transportation means; one or more of train, tram, ship, bus, cableway
- date: Date of departing connections, in the format YYYY-MM-DD
- time: Time of departing connections, in the format hh:mm
- type: departure (default) or arrival

Objects

The objects are the same as the ones used by the API, which are documented here. The only difference is that any strings containing times or durations have been converted to datetime objects.

Sometimes it can also help to look at the unprocessed data returned by the API, in order to figure out how the classes are structured.
The unprocessed data is stored for every object in the _data parameter and can be accessed like this (the json module is used here to format the dictionary nicely with indentations):

import pySBB
import json

entry = pySBB.get_stationboard("Lugano", limit=1)[0]
print(json.dumps(entry._data, indent=1))

Further Examples

Get all transfer stations

The following code lets you see all transfer stations for a given connection:

import pySBB

connection = pySBB.get_connections("Mauraz", "Amriswil", limit=1)[0]
print(connection)
for section in connection.sections:
    print(" {}".format(section))

Mauraz (11:48) -> Amriswil (16:05, Plat. 33) | 4h 17min
 Mauraz (11:48) -> Pampigny-Sévery (12:04)
 Pampigny-Sévery (12:04) -> L'Isle (12:13)
 L'Isle (12:13) -> L'Isle, gare (12:15)
 L'Isle, gare (12:15) -> Cossonay-Penthalaz, gare (12:35)
 Cossonay-Penthalaz, gare (12:35) -> Cossonay-Penthalaz (12:37)
 Cossonay-Penthalaz (12:37, Plat. 1) -> Yverdon-les-Bains (13:00, Plat. 1)
 Yverdon-les-Bains (13:00, Plat. 1) -> Zürich HB (14:56, Plat. 13)
 Zürich HB (14:56, Plat. 33) -> Amriswil (16:05, Plat. 2)

Get passed stations with coordinates

The following code prints all station names that are passed, together with their coordinates:

import pySBB

connection = pySBB.get_connections("Brugg", "Basel", limit=1)[0]
print(connection)
for section in connection.sections:
    for passList in section.journey.passList:
        station = passList.station
        print(" {} {}".format(station.name, station.coordinate))

Brugg AG (11:41, Plat. 2) -> Basel SBB (12:24, Plat.
2) | 43min
 Brugg AG (47.48085, 8.208829)
 Frick (47.507341, 8.01309)
 Rheinfelden (47.551208, 7.792162)
 Basel SBB (47.547403, 7.589577)

Get all following stations for the first stationboard entry

The following code prints all stations of the first ship departing from "Luzern Bahnhofquai" at a given date:

import pySBB

entry = pySBB.get_stationboard("Luzern Bahnhofquai", transportations="ship", datetime="2019-10-10 12:00", limit=1)[0]
print(entry)
for passList in entry.passList:
    print(" {}".format(passList))

Luzern Bahnhofquai (12:00, Plat. 1) -> Vitznau
 Verkehrshaus-Lido (12:10)
 Hertenstein (See) (12:30)
 Weggis (12:40)
 Vitznau (12:54)

Project details

Release history

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/pySBB/
Hey Coders! If you are a React developer, then you might have already heard about the latest version of React: React 18 Alpha. The team is still working on the update and there is still a lot to come, so in this article let's see what's happening in this version and break it down into simple terms.

The first thing that comes to our mind with every version update is whether the latest set of changes will break anything in your current setup, or whether you will have to learn new, completely unrelated concepts. The answer is no: we will be able to adopt React 18 without rewrites and try the new features at our own pace.

React 18 – what can we expect?

1. Out-of-the-box improvements (including automatic batching)
2. A new streaming server renderer with built-in support for React.lazy
3. Other concurrent features such as startTransition and useDeferredValue
4. A new root API

This release is more focused on user experience and internal architecture changes, including adaptation to concurrent features. However, the most important new addition in React 18 seems to be concurrent rendering and the related concurrent mode.

1. Automatic batching

React 18 adds out-of-the-box performance improvements by doing more batching by default, removing the need to manually batch updates in application or library code.

But, what is batching? Batching is when React groups multiple state updates into a single re-render for better performance. In simple words, batching (grouping) means multiple state updates are combined into a single render.

Whenever you are using setState to change a variable inside any function, instead of making a render at each setState, React instead collects all setStates and then executes them together. This is known as batching. This is great for performance because it avoids unnecessary re-renders. However, React didn't use to be consistent about when it performed batching.
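The click-handler code that originally illustrated the batched case appears to have been lost from this passage. As a stand-in, here is a tiny self-contained simulation (plain JavaScript, deliberately not React's actual implementation) of the event-handler case React has always batched: the updates are queued, then applied with a single re-render.

```javascript
// Simulated component state and render counter.
let renderCount = 0;
let pending = [];
let state = { count: 0, flag: false };

function render() { renderCount += 1; }

// Queue an update instead of rendering immediately.
function setState(patch) { pending.push(patch); }

// Flush once per event, the way React batches inside a click handler.
function flush() {
  state = Object.assign({}, state, ...pending);
  pending = [];
  render();                    // one re-render for the whole batch
}

function handleClick() {
  setState({ count: state.count + 1 });  // would be setCount(c => c + 1)
  setState({ flag: !state.flag });       // would be setFlag(f => !f)
}

handleClick();
flush();
console.log(renderCount);                // a single render
console.log(state.count, state.flag);    // both updates applied
```

Two state updates, one render: that is the whole idea, and React 18 extends this behavior to updates fired outside event handlers as well.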
This was because React used to only batch updates during browser events (like a click), but here we're updating the state after the event has already been handled (in a fetch callback):

function App() {
  const [count, setCount] = useState(0);
  const [flag, setFlag] = useState(false);

  function handleClick() {
    fetchSomething().then(() => {
      // React 17 and earlier does NOT batch these because
      // they run *after* the event in a callback, not *during* it
      setCount(c => c + 1); // Causes a re-render
      setFlag(f => !f);     // Causes a re-render
    });
  }

  return (
    <div>
      <button onClick={handleClick}>Next</button>
      <h1 style={{ color: flag ? "blue" : "black" }}>{count}</h1>
    </div>
  );
}

What if I don't want to batch?

Usually, batching is safe, but some code may depend on reading something from the DOM immediately after a state change. For those use cases, you can use ReactDOM.flushSync() to opt out of batching:

import { flushSync } from 'react-dom'; // Note: react-dom, not react

function handleClick() {
  flushSync(() => {
    setCounter(c => c + 1);
  });
  // React has updated the DOM by now
  flushSync(() => {
    setFlag(f => !f);
  });
  // React has updated the DOM by now
}

2. Server-Side Rendering

Server-side rendering is a way of rendering the JS data to HTML on the server to save computation on the frontend. This results in a faster initial page load in most cases.

React performs server-side rendering in 4 sequential steps:
- On the server, data is fetched for each component.
- On the server, the entire app is rendered to HTML and sent to the client.
- On the client, the JavaScript code for the entire app is fetched.
- On the client, the JavaScript connects React to the server-generated HTML, which is known as Hydration.

In the trivial version (until React 17), SSR had to load the entire page before it could start hydrating it. This changes in React 18: now we can break React components into smaller chunks using <Suspense>.
Streaming HTML

<Suspense fallback={<Spinner />}>
  {children}
</Suspense>

By wrapping the component in <Suspense>, we tell React that it doesn't need to wait for comments to start streaming the HTML for the rest of the page. Instead, React will send the placeholder (a spinner) instead. When the data for the comments is ready on the server, React will send additional HTML into the same stream, as well as a minimal inline script tag to put that HTML in the "right place".

Selective Hydration

Before React 18, hydration couldn't start until the complete JavaScript code for the app had loaded. For larger apps, this process can take a while. <Suspense> lets you hydrate the app before the child components have loaded in.

By wrapping components in <Suspense>, you can tell React that they shouldn't block the rest of the page from streaming, or even from hydrating. This means that you no longer have to wait for all the code to load in order to start hydrating. React can hydrate parts as they're being loaded.

3. startTransition

One important use case for startTransition could be when a user starts typing in a search box. The input value has to be updated immediately, while the search results can wait a few milliseconds (as expected by the user). This API provides a way to differentiate between quick updates and delayed updates. A delayed update (i.e. a transition from one UI view to another) is termed a transition update.

For urgent updates like typing, hovering, or clicking, we call props/functions as usual:

setText(input)

For non-urgent or heavy UI updates, we can wrap the call in the startTransition API:

startTransition(() => {
  setText(input);
});

4. The New Root API

We usually create a root-level DOM node like this and append the React app to it.
This has now been deprecated and is now called the "Legacy Root API":

import React from 'react';
import ReactDOM from 'react-dom';

const container = document.getElementById('root');
ReactDOM.render(<App />, container);

Instead, a new Root API is introduced in React 18, which looks like this:

import React from 'react';
import ReactDOM from 'react-dom';
import App from 'App';

const container = document.getElementById('root');
const root = ReactDOM.createRoot(container);
root.render(<App />);

React 18 will ship with both the Legacy Root API and the New Root API to maintain a smooth transition of React 17 (or older) apps to React 18.

Wrapping Up

So to summarize, the features that React 18 brings are:
- Concurrency control with the Transition API,
- Automatic batching of function calls and events to improve in-app performance, and
- Much faster page loads for SSR with Suspense.

React 18 docs
React 18 discussions

Thank you so much for reading this article! I hope this was useful to you in some way. Happy Coding💜

Discussion (3)

If correct, please change:
from
1.out-of-the-box improvements (including automatic bathing),
to
1.out-of-the-box improvements (including automatic batching),

You're right. Thank you for catching that💜

Simple yet effective explaining. Thank you very much for putting this together :)
https://practicaldev-herokuapp-com.global.ssl.fastly.net/codewithtee/are-you-ready-for-react-18-4ip1
React Router and rrtr have reconciled. Explore React Router alternatives and learn how to use them in your apps.

The JavaScript ecosystem, for better or worse, is in a constant state of change and disarray: from the NodeJS fork into io.js and their later reconciliation, to the npm package-gate which broke many packages and ruined a lot of people's days. The constant in all of this turbulence is that the JavaScript community has been quick to react and resolve each issue for the better.

The latest discord comes from the popular and heavily depended-upon React Router library, which provides a routing framework for applications built with React. React Router is a community project with no direct affiliation to Facebook or React, but it is a major dependency for many developers building React apps. React Router was forked into rrtr by Jimmy Jia, a longtime contributor to the project, last week after complaints that React Router had fallen into a slow release cycle, was missing critical features, and more. A few days later, the rrtr library was itself deprecated and users were told to switch back to React Router. Jimmy was made an owner of the React Router project so that he could further his contributions to it.

React Router Alternatives

React Router is the de-facto routing library for React. In our brief post today, we'll take a look at some React Router alternatives.

React Router Component

React Router Component is a declarative router component for React. Routes in this library are declared directly as part of your component hierarchy. Having routes defined as part of your component hierarchy allows for dynamically reconfiguring routing based on application state.
An example of the React Router Component in action:

var App = React.createClass({
  render: function() {
    return (
      <Locations>
        <Location path="/" handler={MainPage} />
        /* Check if user is logged in, redirect to login page if not */
        <Location path="/account/:username"
                  logged_in={this.state.logged_in}
                  handler={this.state.logged_in ? AccountPage : createRedirect("/login")} />
        <Location path={/\/friends\/(\d+)\/(photos|wall)/}
                  logged_in={this.state.logged_in}
                  handler={FriendsPage}
                  matchKeys={['id', 'pageName']} />
      </Locations>
    )
  }
})

React Mini Router

The React Mini Router is a minimal routing library for React apps. It has few external dependencies and comes in at a tiny 4kb when gzipped. This routing library works by declaring the routes at the root level of the React app. This may be a good alternative for simple React apps. The React Mini Router library does not have pre or post hooks for routes, so any logic for checking if a user is authenticated should be handled within the route itself.

var React = require('react'),
    RouterMixin = require('react-mini-router').RouterMixin;

var App = React.createClass({
  mixins: [RouterMixin],

  routes: {
    '/': 'home',
  },

  render: function() {
    return this.renderCurrentRoute();
  },

  home: function() {
    return <div>Hello World</div>;
  },

  notFound: function(path) {
    return <div class="not-found">Page Not Found: {path}</div>;
  }
});

module.exports = App;

Universal Router

Universal Router provides a simple routing solution for JavaScript-built apps, including React. The benefit of Universal Router is that it uses the same middleware approach as Express, which makes it very easy to pick up, learn and extend.
An example of Universal Router in action:

const authorize = (state, next) => {
  // Check if user is logged in
  if (!state.isAuthenticated) {
    state.redirect = '/login';
    next();
  }
}

const router = new Router(on => {
  on('*', async (state, next) => {
    const component = await next();
    return component && <App context={state.context}>{component}</App>;
  });

  on('/admin', async (state, next) => {
    // Ensure user is logged in
    authorize(state, next);
    return (
      <AdminPage />
    )
  });
})

router5

router5 is a framework-agnostic routing solution that is not limited to React. It treats routing like any other data or state and handles both route and data updates. router5 was designed for component trees, which makes it a great fit for React-based applications. Here is an example of router5 with React in action:

import Router5, { loggerPlugin } from 'router5';

const router = new Router5()
  .setOption('useHash', true)
  // .setOption('hashPrefix', '!')
  .setOption('defaultRoute', 'home')
  // Routes
  .addNode('home', '/home')
  .addNode('account', '/account', canActivate : isLoggedIn)
  .addNode('messages', '/messages', canActivate: isLoggedIn)
  // Plugins
  .usePlugin(loggerPlugin())

export default router;

For additional resources on router5, check out their Github repo and the helper library for React.

Build Your Own React Router

If you are feeling adventurous and up for a challenge, James K Nelson has written a great tutorial on building your own routing solution with React. His tutorial covers a lot and is a great starting point for learning and understanding how state-based routing works.

Aside: Auth0 Makes it Easy to Protect Routes

Whether you are using React Router, router5, or building your own, Auth0 can help with authentication. Sign up for your free account to get started. You can follow the in-depth documentation for adding authentication to a React app, but we'll still give you a sneak peek below.
We'll show a quick example of how you can use the jwt-decode library to ensure that a user's token is valid. We'll assume that the user already has a token, and we'll check to see if this token is expired. Our code looks like:

export function tokenIsExpired() {
  let jwt = localStorage.getItem('id_token')
  if(jwt) {
    let jwtExp = jwt_decode(jwt).exp;
    let expiryDate = new Date(0);
    expiryDate.setUTCSeconds(jwtExp);

    if(new Date() < expiryDate) {
      return false;
    }
  }
  return true;
}

If we were to use the Universal Router code example from above, we could easily integrate our tokenIsExpired() method to check if a token is valid. Let's do this. We'll enhance our authorize method and check for the token and its expiration. If the token is not present, or is present but expired, we'll route the user to the login page; otherwise we'll send them to their intended route.

...
const authorize = (state, next) => {
  // Check if user has active JWT
  if(tokenIsExpired()) {
    state.redirect = '/login';
    next();
  } else {
    next();
  }
}
...

With just a few lines of code we were able to protect our routes. The jwt-decode library works great for React applications and can be used with any of the discussed routers as well as React Router.

Conclusion

The JavaScript community is constantly changing. Frameworks, libraries and conflicts come and go. React Router is, and will likely remain, the go-to routing library for React, but that's not to say that there aren't great alternatives worth checking out. The co-maintainers of the React Router library have pledged to take better steps in terms of communication, release schedule and merging of pull requests, and I'm excited to see those changes implemented.
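For readers outside the JavaScript ecosystem, the expiry check boils down to base64-decoding the JWT's payload segment and comparing its exp claim to the clock. A hypothetical language-neutral sketch in Python using only the standard library (not the jwt-decode API; note it reads the payload without verifying the signature):

```python
import base64
import json
import time

def token_is_expired(jwt, now=None):
    """Decode the payload of a JWT (header.payload.signature) and
    compare its 'exp' claim (seconds since epoch) to the clock.
    This does NOT verify the signature."""
    if not jwt:
        return True
    payload_b64 = jwt.split(".")[1]
    # Restore the base64 padding that JWT encoding strips
    payload_b64 += "=" * (-len(payload_b64) % 4)
    payload = json.loads(base64.urlsafe_b64decode(payload_b64))
    now = time.time() if now is None else now
    return payload.get("exp", 0) <= now


def make_token(exp):
    # Build an unsigned demo token just for exercising the check
    def seg(obj):
        raw = json.dumps(obj).encode()
        return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()
    return "{}.{}.sig".format(seg({"alg": "none"}), seg({"exp": exp}))


assert token_is_expired(make_token(exp=1000), now=2000) is True
assert token_is_expired(make_token(exp=2000), now=1000) is False
assert token_is_expired("") is True
```

This mirrors the tokenIsExpired() logic above: missing token means expired, and an exp in the past means expired.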
https://auth0.com/blog/react-router-alternatives/
Closed Bug 1311324 Opened 4 years ago Closed 3 years ago

Update Service Worker/Extendable Message Event interfaces

Categories (Core :: DOM: Service Workers, defect, P3)
Tracking () mozilla55
People (Reporter: catalinb, Assigned: bkelly)
References (Blocks 1 open bug)
Details (Keywords: dev-doc-complete)
Attachments (4 files, 9 obsolete files)

Some minor changes were made to the webidl. See

Priority: P4 → P3
Assignee: nobody → bkelly
Status: NEW → ASSIGNED

This patch updates our MessageEvent webidl and implementation. Main differences are:
1) origin attribute should be USVString
2) source attribute should permit ServiceWorker values

Attachment #8845466 - Attachment is obsolete: true

Replace our use of ServiceWorkerMessageEvent with MessageEvent. This matches the current spec in step 8.1 of:

Remove ServiceWorkerMessageEvent now that it's not used any more. Update WPT test expectations now that we pass these two tests. Note, there is a reference to ServiceWorkerMessageEvent in a devtools debugger.js, but they tell me I can ignore that.

There is some spec debate about Window vs WindowProxy here. I spoke with bz and he is ok with using WindowProxy if we convert to storing the outer window. I think this will also fix crashes in the last try build.

Attachment #8845514 - Attachment is obsolete: true

Try is looking pretty green. I think this is ready for review. Boris, the patches here do the following:

P1: Updates MessageEvent.webidl to spec. As we discussed, this means using WindowProxy for the source and storing the outer window.
P2: Replace usage of ServiceWorkerMessageEvent with MessageEvent. The corresponding spec is step 8 here:
P3: Remove ServiceWorkerMessageEvent as it's no longer used in gecko and does not exist in the spec any more.
P4: Update WPT test expectations now that we pass more tests.

Thanks for your help.
Flags: needinfo?(bzbarsky)

Comment on attachment 8845589 [details] [diff] [review]
P1 Update the MessageEvent webidl and implementation class.
r=bz

> MessageEvent::InitMessageEvent(JSContext* aCx, const nsAString& aType,

You need to null out mServiceWorker where we null out mWindowSource/mPortSource. Worth adding a test for all three, perhaps.

>+ RefPtr<mozilla::dom::workers::ServiceWorker> mServiceWorker;

This is all in namespace mozilla::dom already. Just RefPtr<workers::ServiceWorker>, please.

r=me with those fixed.
Attachment #8845589 - Flags: review+

Comment on attachment 8845542 [details] [diff] [review]
P2 Replace usage of ServiceWorkerMessageEvent with MessageEvent.

r=bz

>+ if (NS_WARN_IF(rv.Failed())) {
>+ xpc::Throw(aCx, rv.StealNSResult());
>+ return NS_ERROR_FAILURE;

In practice, MessageEvent::Constructor never fails. How about we change MessageEvent's Constructor that takes an EventTarget to not take an ErrorResult, to make this clearer. Then you won't need this code here.

r=me with that.
Attachment #8845542 - Flags: review+

Comment on attachment 8845543 [details] [diff] [review]
P3 Remove ServiceWorkerMessageEvent interface.

r=bz
r=me
Attachment #8845543 - Flags: review+

Comment on attachment 8845544 [details] [diff] [review]
P4 Update WPT test expectations.

r=bz

Presumably this should actually be merged into part 2 or so? Either way, I guess; I just don't like known-failing stuff in bisect if it can be easily avoided.

r=me
Flags: needinfo?(bzbarsky)
Attachment #8845544 - Flags: review+

(In reply to Boris Zbarsky [:bz] (still a bit busy) (if a patch has no decent message, automatic r-) from comment #18)
> Presumably this should actually be merged into part 2 or so? Either way, I
> guess; I just don't like known-failing stuff in bisect if it can be easily
> avoided.

I didn't think that was an issue if it all landed in the same push. I thought we only built artifacts on push, not revisions within the push.

Updated:
1) Rename mServiceWorker to mServiceWorkerSource to match other source members.
2) Null out mServiceWorkerSource where other sources are cleared.
3) Cycle collect mServiceWorkerSource

Attachment #8845589 - Attachment is obsolete: true
Attachment #8845615 - Flags: review+

Updated to not require ErrorResult in MessageEvent::Constructor().

Attachment #8845542 - Attachment is obsolete: true
Attachment #8845617 - Flags: review+

> I thought we only built artifacts on push, not revisions within the push.

For our CI, yes. For someone doing a local bisect later, this will have a changeset with random test failures, which may or may not affect tests they are caring about during their bisect....

Backout by bkelly@mozilla.com: Backout 3cc235b8f878 to 2269c901720f for build bustage r=me

The bustage appears to be because this landed in m-c directly, but not m-i: And my patch shifted stuff around in the binding directory such that it blew up there. I verified locally that the patches build in m-i after c40ca7a1bdd9 was merged. The tree is closed for taskcluster issues, though, so flagging as checkin-needed.

Status: ASSIGNED → RESOLVED
Closed: 3 years ago
status-firefox55: --- → fixed
Resolution: --- → FIXED
Target Milestone: --- → mozilla55

I have documented this change. First of all, I have completed the documentation for MessageEvent, along with including details about the origin and source changes (most of it was missing):

I have also updated all (I think) relevant SW pages with details about the deprecation of ServiceWorkerMessageEvent: (and child pages)

Finally, I've added two separate notes to the Fx55 rel notes to explain the change:

Can I get a tech review please? Thanks!

Keywords: dev-doc-needed → dev-doc-complete
https://bugzilla.mozilla.org/show_bug.cgi?id=1311324
Dan Diephouse wrote:
> But I *want* people to to be able to override the method down stream.
> Why is that so evil?

An ex-colleague of mine once spent three days trying to track down a networking issue; the symptom he was faced with was an intermittent closing of his socket connection. The code he was looking at was similar to the following (from memory and not compiled):

public abstract class A {
    private OutputStream os;

    A() {
        os = getOutputStream();
    }

    protected abstract OutputStream getOutputStream();
}

public class B extends A {
    private Socket socket = null;

    protected OutputStream getOutputStream() {
        socket = ....;
        return socket.getOutputStream();
    }
}

(B has to extend A for this to arise; the trap is that A's constructor calls the overridden getOutputStream() before B's field initializers run, so B's "socket = null" then silently wipes out the Socket reference the override had just stored.)

It took me 5 minutes to realise what was happening, and most of that was listening to his description of the problem :-)

It may be that you are aware of the implications of the design, as am I, but there are plenty of people around who are not. I have come across this type of 'fault' in numerous engagements as a contractor, often made by people who claimed to be experienced. The simplest way of handling it is always, IMHO, not to let it happen in the first place. If the rope isn't there then they can't hang themselves :-)

But that's just my opinion :-)

Kev
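The same trap is not unique to Java: in any language where a base-class initializer can invoke an overridden method before the subclass has finished initializing, the subclass's own initialization can clobber what the override stored. A hypothetical Python sketch (invented names) of the failure mode:

```python
class Base:
    def __init__(self):
        # Calls the subclass override while the subclass's own
        # __init__ has not finished running yet.
        self.stream = self.open_stream()

    def open_stream(self):
        raise NotImplementedError


class Derived(Base):
    def __init__(self):
        super().__init__()   # Base.__init__ calls open_stream()...
        self.socket = None   # ...then this clobbers the socket
                             # reference open_stream() just stored.

    def open_stream(self):
        self.socket = "connected-socket"   # stand-in for a real Socket
        return "stream-for-" + self.socket


d = Derived()
assert d.stream == "stream-for-connected-socket"  # the stream was built...
assert d.socket is None  # ...but the socket reference was wiped out
```

Later code that reaches for d.socket finds nothing, which is exactly the kind of intermittent, hard-to-trace breakage described above.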
http://mail-archives.apache.org/mod_mbox/cxf-dev/200610.mbox/%3C45338EFA.5000505@jboss.com%3E
On Sun, 25 Apr 2010, Linus Torvalds wrote:
>
> Iirc, some _really_ old code used 'short' for pid_t, and we wanted to be
> really safe when we raised the limits.

.. I dug into the history, and this is from August 2002..

We used to limit it to sixteen bits, but that was too tight even then for some people, so first we did this:

Author: Linus Torvalds <torvalds@home.transmeta.com>
Date: Thu Aug 8 03:57:42 2002 -0700

    Make pid allocation use 30 of the 32 bits, instead of 15.

diff --git a/include/linux/threads.h b/include/linux/threads.h
index 880b990..6804ee7 100644
--- a/include/linux/threads.h
+++ b/include/linux/threads.h
@@ -19,6 +19,7 @@
 /*
  * This controls the maximum pid allocated to a process
  */
-#define PID_MAX 0x8000
+#define PID_MASK 0x3fffffff
+#define PID_MAX (PID_MASK+1)
 #endif
diff --git a/kernel/fork.c b/kernel/fork.c
index d40d246..017740d 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -142,7 +142,7 @@ static int get_pid(unsigned long flags)
 		return 0;
 	spin_lock(&lastpid_lock);
-	if((++last_pid) & 0xffff8000) {
+	if((++last_pid) & ~PID_MASK) {
 		last_pid = 300;	/* Skip daemons etc. */
 		goto inside;
 	}
@@ -157,7 +157,7 @@ inside:
 	    p->tgid == last_pid || p->session == last_pid) {
 		if(++last_pid >= next_safe) {
-			if(last_pid & 0xffff8000)
+			if(last_pid & ~PID_MASK)
 				last_pid = 300;
 			next_safe = PID_MAX;
 		}

which just upped the limits. That, in turn, _did_ end up breaking some silly old binaries, so then a month later Ingo did a "pid-max" patch that made the maximum dynamic, with a default of the old 15-bit limit, and a sysctl to raise it.

And then a couple of weeks later, Ingo did another patch to fix the scalability problems we had with lots of pids (avoiding the whole "for_each_task()" crud to figure out which pids were ok, and using a 'struct pid' instead).

So the whole worry about > 15-bit pids goes back to 2002. I think we're pretty safe now.

Linus
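The mask-and-reset step in get_pid() above is easy to simulate; here is a rough Python sketch of just that wraparound behaviour (ignoring the locking and the next_safe scan in the real code):

```python
PID_MASK = 0x3fffffff      # 30 usable bits, as in the 2002 patch
RESERVED_FLOOR = 300       # wrap back here to skip daemons etc.

def next_pid(last_pid):
    """Increment last_pid; on overflow past PID_MASK, wrap to 300."""
    last_pid += 1
    if last_pid & ~PID_MASK:
        last_pid = RESERVED_FLOOR
    return last_pid


assert next_pid(1000) == 1001
assert next_pid(PID_MASK) == RESERVED_FLOOR   # 0x3fffffff + 1 wraps
assert next_pid(0x7fff) == 0x8000  # fine now; the old 0xffff8000 mask wrapped here
```

Swapping PID_MASK for 0x7fff reproduces the pre-2002 behaviour of wrapping at pid 32768.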
http://lkml.org/lkml/2010/4/25/115
using oracleClient in WebService
Discussion in 'ASP .Net Web Services' started by ptcgh, Jun 5, 2004.
http://www.thecodingforums.com/threads/using-oracleclient-in-webservice.783225/
Fedora Community/PluginDesign

MyFedora Plugin Design Document

Abstract

This document describes how MyFedora plugins are put together. The intent is to be a starting point to describe how each individual piece fits so that it is possible to develop extensions to MyFedora. Note that this document and the implementation may change as we refine the design.

Concepts

MyFedora is a tool for bringing together the Fedora infrastructure under one roof in an interface that is designed for usability. There are two central parts of the MyFedora design, the widget system portal and the toolbox. Plugins concern themselves with the toolbox. This is where systems like koji and bodhi are tied together for efficient access to the Fedora infrastructure.

The plugin system consists of three main concepts that work together to form a framework for building the toolbox. They are:

- Resources
- Tools
- Data IDs

MyFedora is a TurboGears application which uses both the standard controller model and the routes model for plugins.

Resources

This is the starting point for MyFedora plugins. A resource is any abstract grouping such as "packages", "people" and "projects" which contains tools for viewing and manipulating data within the resource's context. Resources are a self-contained directory structure placed within a resource/ directory containing an object which inherits from the myfedora.plugins.Resource object.
Things that are defined by the resource:
- a master template, inherited by the tools, for showing an integrated interface no matter what tool is being used
- template global variables for injecting variables used by the master template as well as useful variables which the tool can use
- tool routes for creating urls for each of the registered tools

Example

myfedora/resources/testresource/testresource.py

#!python
from myfedora.plugin import Resource

class TestResource(Resource):
    """Test is an example resource"""
    def __init__(self):
        Resource.__init__(self,
                          'Test',           # display name
                          'test',           # resource id
                          'Test resource',  # short description
                          '''Test resource for testing out the loaders
                             and acting as an example''')
        self.set_master_template('master')

    def get_template_globals(self, *args, **kwargs):
        result = Resource.get_template_globals(self, *args, **kwargs)
        data = kwargs.get('data', '')
        tool_list = self.get_tool_list()
        tool_urls = []
        for tool in tool_list:
            tool_urls.append((self.get_tool_url(tool.get_id(), data),
                              tool.get_display_name()))
        result.update({'tool_urls': tool_urls,
                       'resource_name': self.get_id()})
        return result

    def set_tool_route(self, route_map, tool):
        tool_id = tool.get_id()
        resource_id = self.get_id()
        controller_id = self._namespace_id(tool_id)
        if tool.is_default():
            route_map.connect(self.url(resource_id),
                              controller = controller_id,
                              resource = self)
        r = self._route_cat(self.url(resource_id), ':data', tool_id)
        route_map.connect(r, controller = controller_id, data = '')
        return controller_id

myfedora/resources/testresource/templates/master.html

<html xmlns="http://www.w3.org/1999/xhtml" xmlns:py="http://genshi.edgewall.org/">
  <body>
    <h1>Test Master Template</h1>
    <ul>
      <li py:for="turl in tool_urls">
        <a href="${turl[0]}">${turl[1]}</a>
      </li>
    </ul>
  </body>
</html>

Tools

A tool is a web app for viewing or manipulating data. For example, Builds would be a tool for the package resource. Tools are implemented as self-contained TurboGears controllers.
Tools have a self-contained directory structure placed within a tools/ directory containing this structure:

<toolname>/
    __init__.py
    <classfiles>.py
    templates/
        __init__.py
        <genshi template>.html
    static/
        images/
        js/
        css/

Tools inherit from myfedora.plugin.Tool, which itself inherits from turbogears.Controller. All standard controller features are supported according to the resource's route configuration which the tool registers with.

Example

myfedora/tools/helloworldtool/helloworldtool.py

#!python
from myfedora.plugin import Tool
from turbogears import expose
import random

class HelloWorldTool(Tool):
    def __init__(self, parent_resource):
        # just pass in the parent_resource and the base class does the rest
        Tool.__init__(self, parent_resource,
                      'Hello World',        # display name
                      'helloworld',         # tool id
                      'Prints out hello',   # short description
                      'Take the data and prints out hello data',  # long description
                      ['test'],             # registers with the test resource
                      ['test'])             # is the default for the test resource

    @expose(template='myfedora.tools.helloworldtool.templates.helloworld', allow_json=True)
    def default(self, data=''):
        result = self.get_parent_resource().get_template_globals()
        result.update({'data': data})
        return result

myfedora/tools/helloworldtool/templates/helloworld.html

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "">
<html xmlns="http://www.w3.org/1999/xhtml" xmlns:py="http://genshi.edgewall.org/">
  <!-- resource.get_master_template takes care of finding the master template no matter where it is on the system -->
  <xi:include href="..."/>
  <body>
    Hello ${data} how do you like resource ${resource_name}
  </body>
</html>

This will print out the test resource's header and registered tool links as well as the Hello message. Notice we mapped the data variable from the resource route, so that if I used a url such as /myfedora/test/John/helloworld it would print out:

Hello John how do you like resource Test

Data IDs

The data id is a pointer to a specific dataset the tools work on.
For example, the package resource considers each Fedora package name to be a data id. Data ids sit, by convention, between the resource and the tool in the URL.

Why put data ids between the resource and the tool? Doesn't it make things harder? Well, yes and no. It does make us have to use routes, and makes it harder to deal with the case where no data is given, which could mean, for example, "show me all packages in the build tool". We do it this way because, while the current Fedora infrastructure is based on an application-centric model (you go to koji for builds and bodhi for updates), in MyFedora we are data centric. What this url, /myfedora/package/dbus/builds, says in plain English is: go to the package dbus and look at its builds.

A common misconception is that the builds tool is koji. It is actually a mashup of koji and bodhi, as well as other infrastructure bits in the future. While koji supplies the basis of the data via the data_id, the builds tool then goes out and queries bodhi to see if the build has been pushed or requested somewhere. One can then request or rescind a push from the builds tool if they have the right permissions in FAS.

There needs to be a little more work on routes so that we can drop you off into a controller and the controller works just like any other controller. I think right now it is slightly broken in that only exact matches will be routed correctly, so for instance /myfedora/package/dbus/builds/push_to_testing wouldn't work if you had added a push_to_testing controller.
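The data-centric URL shape described above (/myfedora/&lt;resource&gt;/&lt;data_id&gt;/&lt;tool&gt;) can be illustrated with a small, hypothetical parser. This is not MyFedora's actual routing code (which uses the routes library), just a sketch of how such a path decomposes:

```python
def parse_myfedora_url(path):
    """Split a MyFedora-style path into (resource, data_id, tool).
    Missing segments fall back to defaults, mirroring the idea that
    /myfedora/test routes to the resource's default tool with no data."""
    parts = [p for p in path.strip("/").split("/") if p]
    assert parts and parts[0] == "myfedora", "not a myfedora path"
    resource = parts[1] if len(parts) > 1 else None
    data_id = parts[2] if len(parts) > 2 else ""
    tool = parts[3] if len(parts) > 3 else "default"
    return resource, data_id, tool


assert parse_myfedora_url("/myfedora/package/dbus/builds") == ("package", "dbus", "builds")
assert parse_myfedora_url("/myfedora/test/John/helloworld") == ("test", "John", "helloworld")
assert parse_myfedora_url("/myfedora/test") == ("test", "", "default")
```

The "go to the package dbus and look at its builds" reading falls directly out of this decomposition.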
While a quick first pass of a tool such as transflex could be that easy, it should evolve into integrating with the existing tools and infrastructure. For instance, it would be nice on the builds page to see what percentage of a package is currently translated and provide a link for translators to do more translation. Standard libraries and documentation will be developed for this purpose. Another advantage of this system is the ability to write one tool and hook it up to multiple resources. The tutorial that is being written will show how the build tool hooks into the packages resource, and then the small changes needed to hook into the newly created people resource.
https://fedoraproject.org/wiki/Fedora_Community/PluginDesign?rd=FedoraCommunity/PluginDesign
If I use FreeCAD 0.18.1, with Python 3, installed from the freecad-stable PPA, I get this error.

Code: Select all
import CfdTools
CfdTools.checkCfdDependencies()

The error is due to the way CfdOF checks for the version of FreeCAD.

Code: Select all
Checking CFD workbench dependencies...
Checking FreeCAD version
Traceback (most recent call last):
  File "/home/ecc/.FreeCAD/Mod/CfdOF/CfdPreferencePage.py", line 159, in runDependencyChecker
    msg = CfdTools.checkCfdDependencies()
  File "/home/ecc/.FreeCAD/Mod/CfdOF/CfdTools.py", line 685, in checkCfdDependencies
    gitver = ver[2].split()[0]
IndexError: list index out of range

The checkCfdDependencies() function tries to parse the git version number, that is, the third element in the list, but since this string is empty, it fails.

Code: Select all
ver = FreeCAD.Version()
print(ver)
['0', '18.1', '', '', '2019/04/06 19:19:55']

On the other hand, if I use a development release of FreeCAD 0.18, which has a git number, it works. This is the version that is currently in the freecad-daily repository, which hasn't been updated in a while. As you can see, the "stable" release produces a shorter list of strings than the development version, and doesn't include the git revision.

Code: Select all
ver = FreeCAD.Version()
print(ver)
['0', '18', '16093 (Git)', 'git://github.com/FreeCAD/FreeCAD.git releases/FreeCAD-0-18', '2019/03/12 13:38:07', 'releases/FreeCAD-0-18', '690774c0effe4fd7b8d2b5e2fb2b8c8d145e21ce']

I think the checkCfdDependencies() function also does something like int(ver[1]). This also fails because the second element of the list is not a plain integer string: it tries to convert the string "18.1" to an integer, and int() raises an error on that. It works correctly if the string is "18", as in the development version.

So, I think these checks need some try: except: blocks to check for different styles of the version returned by FreeCAD.Version().
In particular, the check shouldn't fail with the "stable" version; it is disconcerting if a check seems to fail with software that is supposed to be "stable".

---------------

    try:
        check if version is "18" with git commit "1246"
    except:
        parse another way to get "18.1" with no git commit
    else:
        assume the version is correct

Another issue. If I use checkCfdDependencies() in the development version, 0.18, 16093 (Git), with Python 2, the output looks fine: the end-of-line characters are correctly recognized, so the output is correctly spaced.

    Checking FreeCAD version
    Checking for OpenFOAM:
    Running echo $WM_PROJECT_VERSION
    Raw command: ['bash', '-c', 'source "/opt/openfoam6/etc/bashrc" && echo $WM_PROJECT_VERSION']
    6
    Running cartesianMesh -help
    Raw command: ['bash', '-c', 'source "/opt/openfoam6/etc/bashrc" && cartesianMesh -help']

    Usage: cartesianMesh [OPTIONS]
    options:
      -case <dir>    specify alternate case directory, default is the cwd
      -fileHandler <handler>
      ...

However, if I use the "stable" version 0.18.1, with Python 3, the output is not parsed correctly. It seems that the output of the external commands is interpreted not as text but as binary data: the strings appear as b'something\n', the end-of-line characters '\n' are shown literally in the returned string instead of starting a new line, and the entire output looks badly formatted.

    Checking FreeCAD version
    Checking for OpenFOAM:
    Running echo $WM_PROJECT_VERSION
    Raw command: ['bash', '-c', 'source "/opt/openfoam6/etc/bashrc" && echo $WM_PROJECT_VERSION']
    b'6\n'Error parsing OpenFOAM version string b'6\n'
    Running cartesianMesh -help
    Raw command: ['bash', '-c', 'source "/opt/openfoam6/etc/bashrc" && cartesianMesh -help']
    b'\n'b'Usage: cartesianMesh [OPTIONS]\n'b'options:\n'b'  -case <dir>    specify alternate case directory, default is the cwd\n'b'  -fileHandler <handler>\n'b'    override the fileHandler\n'b'  -hostRoots <(((host1 dir1) ..
    (hostN dirN))>\n' ...

I believe checkCfdDependencies() internally uses subprocess to run the tools on the command line, but maybe some options aren't correctly set up, so the output isn't being decoded correctly when run under Python 3.

---------

And finally: ParaView is not detected by checkCfdDependencies(), neither with the 0.18 development build (16093) nor with the 0.18.1 stable version, even though it was correctly installed. As I said, I can run the examples. It seems checkCfdDependencies() internally uses distutils to check for the presence of paraview, but it's not being found.

    which -a paraview
    /opt/paraviewopenfoam56/bin/paraview

The other external commands seem to be found by running the name of the executable through runFoamCommand(), which internally uses subprocess. It seems runFoamCommand() first sources the OpenFOAM resource file that sets up the environment variables, and then runs the program. The check for ParaView, by contrast, looks like this:

    paraview_cmd = "paraview"
    # Otherwise, the command 'paraview' must be in the path - test to see if it exists
    import distutils.spawn
    if distutils.spawn.find_executable(paraview_cmd) is None:

So, again, maybe there should be a try/except block: if the distutils.spawn check fails, it should try loading the resource file and running the paraview executable inside /opt. The other tools are run like this:

    Running cartesianMesh -help
    Raw command: ['bash', '-c', 'source "/opt/openfoam6/etc/bashrc" && cartesianMesh -help']

Something like this could be done for ParaView:

    Running paraview
    Raw command: ['bash', '-c', 'source "/opt/openfoam6/etc/bashrc" && paraview']

---------

    try:
        distutils.spawn.find_executable(paraview_cmd)
    except:
        runFoamCommand(paraview_cmd)

In this thread I'm only talking about the dependency check in the CfdOF preferences window (Edit -> Preferences -> CFD); I haven't checked whether the actual operation of CfdOF fails because of this. Everything (OpenFOAM, ParaView, cfMesh, and HiSA) seems to be correctly installed.
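A fallback of the kind suggested could be sketched like this. Everything here is hypothetical: find_tool stands in for CfdOF's actual check, shutil.which replaces distutils.spawn.find_executable (which is deprecated), and the bashrc path is just the one from this report.

```python
import shutil
import subprocess

def find_tool(name, bashrc="/opt/openfoam6/etc/bashrc"):
    """Locate an executable: plain PATH lookup first, then retry
    through the sourced OpenFOAM environment, mimicking the way
    runFoamCommand() runs cartesianMesh and friends."""
    path = shutil.which(name)  # modern stand-in for distutils.spawn
    if path is not None:
        return path
    # Fall back to the sourced environment, as the other tools are run.
    cmd = ['bash', '-c', 'source "%s" && command -v %s' % (bashrc, name)]
    try:
        out = subprocess.check_output(cmd, universal_newlines=True)
        return out.strip() or None
    except subprocess.CalledProcessError:
        return None
```

If the plain lookup fails and the resource file can't be sourced either, the function simply reports the tool as missing instead of crashing the whole check.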
So it is a case where everything is installed but the software still reports problems. I think the dependency check should be more robust, so as not to confuse new users who want to start using CfdOF.
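For the record, the b'…' strings seen earlier are what subprocess returns by default under Python 3; asking for text mode restores the decoded, Python 2-style output. A minimal illustration using plain echo rather than the OpenFOAM commands:

```python
import subprocess

# Bytes by default: Python 3 returns b'6\n'-style output,
# which is what shows up verbatim in the dependency report.
raw = subprocess.check_output(["echo", "6"])
print(raw)            # b'6\n'

# universal_newlines=True (text=True on Python 3.7+) decodes the
# stream and normalizes newlines, as Python 2 effectively did.
text = subprocess.check_output(["echo", "6"], universal_newlines=True)
print(text.strip())   # 6
```

Passing that flag wherever CfdOF captures command output would make the report readable again under Python 3.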
https://forum.freecadweb.org/viewtopic.php?p=301933
curs_scr_dump: scr_dump, scr_init, scr_restore, scr_set - Read or write a Curses screen from or to a file

    #include <curses.h>

    int scr_dump( const char *filename );
    int scr_init( const char *filename );
    int scr_restore( const char *filename );
    int scr_set( const char *filename );

Curses Library (libcurses)

Interfaces documented on this reference page conform to industry standards as follows: scr_dump, scr_init, scr_restore, scr_set: XPG4-UNIX. Refer to the standards(5) reference page for more information about industry standards and associated tags.

The scr_dump routine writes the current contents of the virtual screen to filename.

The scr_restore routine sets the virtual screen to the contents of filename, which must have been written using scr_dump. The next call to doupdate restores the screen to the way it looked in the dump file.

The scr_init routine reads the contents of filename and uses them to initialize the Curses data structures that describe what the terminal currently has on its screen. If Curses determines that the data is valid, it bases its next update of the screen on this data rather than clearing the screen and starting from scratch. Applications call scr_init after an initscr or a system call (see system(3)) to share the screen with another process that executed scr_dump after endwin. Curses declares the data invalid if the time stamp of the tty is old or the terminfo capabilities rmcup and nrrmc exist.

The scr_set routine is a combination of scr_restore and scr_init. This routine tells the program that the information in filename is what is currently on the screen and is what the program wants on the screen. The scr_set routine can be thought of as a screen-inheritance function.

To read or write a window from or to a file, use the getwin and putwin routines (see curs_util(3)). The header file <curses.h> automatically includes the header file <stdio.h>.

All routines return the integer ERR upon failure and OK upon success.
Functions: curses(3), curs_initscr(3), curs_refresh(3), curs_util(3), system(3) Others: standards(5)
http://backdrift.org/man/tru64/man3/scr_dump.3.html
Welcome back to the ongoing Superpowers game engine tutorial series. In the first part we got Superpowers installed and created our first project; in the second tutorial we looked at Actors and Components, the "stuff that makes up your game world". Then we did a tutorial on Sprites and Animations to give our "stuff" a little bit more visible panache. In this tutorial we are going to add a Behavior to our component, which is the way you give your "stuff" a bit of logic. Basically, this is how you program your Superpowers game.

Just like adding a sprite was a two-step process, where first we added the image to the scene as an asset and then created a new component on our actor, scripts work the same way. Let's start by creating a new script. In the left-hand window, click the + icon and select Script:

This will create a new script and automatically open it in Superpowers' integrated code editor. Oh, by the way, Superpowers has an integrated code editor! And it's actually pretty good, with auto completion, code formatting and more.

Now double click your Scene, or switch to the Scene tab in the editor, and select your Sprite. In my case I have an animated sprite with a single animation called "Walk" defined, but the Animation set to (None). See the previous tutorial for more details on this process.

Now let's add another component to our actor, a Behavior. Following the same process as adding the Sprite Renderer, simply click New Component and select Behavior:

There is only one setting in Behavior, the name of the class to add. Drop down the Class dropdown and choose ScriptBehavior. Now we can actually do a bit of coding for this Actor. I will present the code up front, and we will go through it after.

Coding in Superpowers by default uses the TypeScript programming language, an open source language created by Microsoft to solve some of JavaScript's, shall we say... rough spots. I'm actually rather a fan of TypeScript myself.
Anyways… here is the code I created:

    class ScriptBehavior extends Sup.Behavior {
      awake() {
      }

      update() {
        if (Sup.Input.isKeyDown("UP")) {
          this.actor.move(0, 0.1, 0);
          if (this.actor.spriteRenderer.getAnimation() !== "Walk") {
            this.actor.spriteRenderer.setAnimation("Walk");
            Sup.log("Set animation to walk");
          }
        }
        if (Sup.Input.isKeyDown("DOWN")) {
          this.actor.move(0, -0.1, 0);
          if (this.actor.spriteRenderer.getAnimation() !== "Walk") {
            this.actor.spriteRenderer.setAnimation("Walk");
            Sup.log("Set animation to walk");
          }
        }
      }
    }
    Sup.registerBehavior(ScriptBehavior);

Our class ScriptBehavior (perhaps we should have named it something a bit less lazy, like CodeThatMovesMySpriteOnKeypress…) inherits from Sup.Behavior. You will notice all Superpowers code is in the Sup namespace to prevent name collisions. The Sup.Behavior class can be thought of as a scriptable component. It has a number of methods that are called as part of the program's lifecycle; in this example we implement awake() and update(), although there are a few more available, such as start() and onDestroy(). awake() is called when the behavior is attached to an actor and can basically be thought of like a constructor, where you do one-time setup and initialization logic. update(), on the other hand, is called every frame, or every iteration of the game loop, and this is where you implement the logic of your behavior.

In this particular example we simply check for keyboard presses using the Sup.Input global object, testing if a key with the value "UP" or "DOWN" is pressed. If it is, we move slightly up or down along the Y axis. You can notice in this example that a Behavior can access the Actor it is attached to using the actor property. Additionally, you can access components attached to the actor in a similar way, like we did with the Sprite Renderer via .spriteRenderer. Finally, just to illustrate that it can be done, we log the animation change using Sup.log().
Finally the script is registered with a call to Sup.registerBehavior().
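Since the two key-handling branches in update() are nearly identical, the movement logic can be factored into a small helper. The sketch below stubs just enough of the Sup API to run outside Superpowers; the stub, the MoverBehavior class, and its fields are all hypothetical stand-ins for the real engine objects.

```typescript
// Minimal stand-in for the Superpowers API so this runs as plain TypeScript.
// Here we pretend the UP key is held for the current frame.
const Sup = {
  Input: {
    keysDown: ["UP"],
    isKeyDown(key: string): boolean {
      return this.keysDown.indexOf(key) >= 0;
    },
  },
};

class MoverBehavior {
  y = 0;
  animation = "";

  // One helper replaces the two near-identical if blocks from the tutorial.
  private moveIfPressed(key: string, dy: number): void {
    if (Sup.Input.isKeyDown(key)) {
      this.y += dy;
      if (this.animation !== "Walk") {
        this.animation = "Walk"; // mirrors setAnimation("Walk")
      }
    }
  }

  update(): void {
    this.moveIfPressed("UP", 0.1);
    this.moveIfPressed("DOWN", -0.1);
  }
}
```

Inside Superpowers the helper body would call this.actor.move() and this.actor.spriteRenderer.setAnimation() exactly as in the listing above; the refactor just removes the duplication.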
https://gamefromscratch.com/superpowers-tutorial-series-part-four-scripting-behaviors/
Unity 5.3.2 Release Notes

Improvements

- Compute: Improved compute shader import times, particularly on Windows. Since 5.1 we were always compiling them for GL4.3+GLES3.1 too, while most people only need DX11. (706098)
- Editor: Added link to learn more about Unity Cloud Build in the Build Player window.
- Editor/Scenes: Add Scene.GetRootGameObjects() to return the root game objects of the scene.
- Editor/Scenes: Add SceneManager.CreateScene() API to allow creating new empty scene at runtime.
- GI: Light probes and ambient (everything with SH calculations) match ground truth more closely now. (754898)
- IL2CPP: Optimize method prologues for code size and incremental builds.
- iOS: Added bitcode support. (752178)
- iOS/IL2CPP: Added support for Xcode 7.3 (don't use __declspec attributes).
- Samsung TV: Added support for 2016 TVs.
- Tizen: Now supports tizenstore URL protocol to open up Tizen store pages.
- tvOS: Enable Dynamic GI.
- VR: Applications that lost focus no longer throttle the CPU.

Changes

- iOS: Enable bitcode support by default.
- Editor/Scenes: Prevent calling some Editor mode only APIs on EditorSceneManager from play mode, including EditorSceneManager.OpenScene, EditorSceneManager.SaveScene etc.
- Editor/Scenes: Prevent calling some play mode only APIs on SceneManager from Editor mode, including SceneManager.LoadLevel, SceneManager.LoadLevelAsync etc.

Fixes

- Android: Fixed crash when loading many asset bundles. (743739)
- Android: Fixed crash in Cloth vertex buffer setup. (750362)
- Android: Fixed NullReferenceException on x86 devices running Android 5.0 or newer.
- Animation: Fixed an issue with stepped keys having the wrong value. (745139)
- Animation: Fixed animation events not firing on the last frame of a looping animation. (745781)
- Animation: Fixed crash when deleting all Euler keys in animation curve. (754373)
- Animation: Fixed Crossfade called subsequently not properly interrupting transition. (753866)
- API Updater: Fixed possible crashes in script updater when resolving types.
- AssetBundles: Fixed AssetBundle.CreateFromFile retaining file descriptor. (715753)
- AssetBundles: Fixed excessive memory usage when decompressing asset bundles with many objects inside.
- AssetBundles: Fixed memory leak when loading asset bundles with LZMA compression.
- AssetBundles: Fixed possible asset bundle caching error when starting multiple downloads with an empty cache.
- AssetBundles: Fixed the asset bundle reading bug when compressed data could be read as uncompressed.
- Core: '~' folders were no longer ignored in projects; fixed. (687655)
- DX11: Improved performance in GPU bound scenarios. (747492)
- DX11: Fixed wrong VRAM detection on some Intel GPUs, resulting in shadows not being rendered.
- DX11/XB1: Fixed FP16 static batched mesh vertex compression to actually work there, was always decompressing to FP32 before.
- Editor: Display console platform doc items in the help menu, when console docs are present, but the main documentation is not installed. (754108)
- Editor: Fixed MissingMethodException when using some APIs from UnityEngine.WSA namespace.
- Editor: Make right and left arrow select next/previous parent for fast expand/collapse in hierarchy views. (752821)
- Editor/Scenes: Fixed a crash when trying to get the root count on an invalid Scene. (752423)
- Editor/Scenes: Fixed loading new unsaved scene during playmode using Application.LoadLevel(index) or SceneManager.LoadScene(index). (751923)
- Editor/Scenes: Fixed the issue that script association was lost when another scene was loaded. (748904)
- Editor/Scenes: Fixed the issue that the unloaded scenes would be removed from the hierarchy when entering playmode, if they were first in the hierarchy.
- Editor/Scenes: Now make sure inspectors in ActiveEditorTracker for MonoBehaviours are not garbage collected. The ActiveEditorTracker manages the objects itself. (753550)
- Editor/Scenes: Throw null reference exception if SerializedObject has been disposed. (752599)
- Global Illumination: Enlighten; fixed an issue where Unity crashed if scene was unloaded before it got a chance to fully load. (740808, 747666)
- Global Illumination: Fixed light probes / skybox ambient being wrong in some cases, 5.3 regression. (754898)
- Graphics: More consistency between editor & player when setting up color space sRGB write modes. (754487)
- Graphics: Fixed an issue where enabling vertex compression for positions could result in geometry not being rendered. (754928)
- Graphics: Realtime reflection probes in some cases did not have trilinear filtering properly set on them; fixed. (752879)
- Graphics: Fixed crash when setting shader properties.
- IL2CPP: Do not incorrectly free blittable arrays marshaled as [In,Out] parameters. (760943)
- IL2CPP: Ensure that the header file for a type defined in a different assembly is included when that type is used for a method parameter. (755088)
- IL2CPP: Ensure thread ID is valid for all threads. (691038)
- IL2CPP: Fixed an issue that caused a crash in Class::IsSubclassOf. (752737), (751428)
- IL2CPP: Fixed double.Parse with InvariantCulture. (752197)
- IL2CPP: Fixed ExecutionEngineException being thrown on System.Reflection.MonoProperty::GetterAdapterFrame. (737529)
- IL2CPP: Fixed StateMachineBehaviour messages not being executed if stripping is enabled. (753610)
- IL2CPP: Forward declare a type and marshaled type in the method declarations header for marshaling methods so that the order of includes does not matter. (756447)
- IL2CPP: Implemented out marshaling for arrays correctly. (752153)
- IL2CPP: Implemented support for MetadataToken property on the following types: FieldInfo, EventInfo, MethodInfo, ConstructorInfo, PropertyInfo, Type, and Module. (670027)
- IL2CPP: Implemented the Thread::Abort and Thread::ResetAbort methods. This should allow a Socket to be closed properly while another thread is waiting in a call to Accept or Connect on that socket. (746822)
- IL2CPP: Prevent a NotImplementedException exception from occurring in il2cpp.exe when the Unbox opcode is used with certain generics. This usually occurs when an assembly is built with Visual Studio. (758926)
- IL2CPP: Properly cleanup when a native thread is cancelled rather than exiting normally. (749988, 733609)
- IL2CPP: Provide a proper argument for instance methods on value types invoked via a delegate. (750153)
- IL2CPP: Removed an unnecessary Box used to null check before calling a virtual method.
- iOS: Added font containing Lao characters to the fallback list. (750357)
- iOS: Added Xcode 7.2 to iOS plugin compatibility list. (750311)
- iOS: Duplicate another image layer when not all are defined. (749289)
- iOS: Fixed Apple Pencil position reporting on iPad Pro.
- iOS: Hindi characters are displayed now. (725558)
- iOS/IL2CPP: Correct stack traces in exception messages, which could sometimes miss managed frames. (754539)
- iOS/IL2CPP: Fire all GC profiler events. Fixed GC data in internal profiler.
- iOS/tvOS: Build all object files with correct SDK. (755365)
- Linux: Fixed a corner case where tearing would occur on some window managers even with VSync enabled.
- Mecanim: Fixed a bug where Euler rotations would be retained in scene after scrubbing animation. (754562)
- Mecanim: Fixed a bug where Euler rotations would not work in Legacy Animations. (752955)
- Mecanim: Fixed a bug where lights would not be animated in Legacy mode. (753595)
- Mecanim: Fixed a bug where RectTransform couldn't be animated in Legacy. (752847)
- Metal: Wrongly claimed to support DXT1/DXT5 texture formats on iOS, and ETC on Mac.
- Mono: Preserve non-volatile XMM registers across calls on 64-bit Windows during JIT compilation. (691466)
- Networking: Added a 'connecting' state and cancel button to NetworkManagerHUD UI to prevent multiple attempts to connect at the same time. (748936)
- Networking: Fixed 'recursion depth exceeded' error for complex NetworkBehaviour scripts. (757766)
- Networking: Fixed ClientScene object list being wrong after host migration. (746011)
- Networking: Fixed NetworkAnimator not working for non-player objects. (755391)
- Networking: Fixed NetworkServer.SendToAll sending the message to the host twice. (756153)
- Networking: Fixed SyncEvent regression issue. (755450)
- Networking: Fixed SyncList updates not using a configurable network channel. (745795)
- Networking: Fixed UI that allowed host migration to be enabled for WebGL platform where it is not supported. (744002)
- Networking: OnStopAuthority called on server when it should not be. (751239)
- Networking: Prevent [Command] functions from accepting NetworkConnection objects as parameters, which causes a UNetWeaver error. (729157)
- Networking: Prevent NetworkIdentity objects from being server-only and local-player-authority at the same time, as this is not a valid configuration. (749338)
- OpenGL Core: Fixed a crash with AMD and NVIDIA on Windows when using RenderTexture with recent drivers.
- OpenGL Core: Fixed a crash with Intel GPUs on Linux.
- OpenGL Core: Fixed shaders with multiple constant arrays.
- OpenGL Core: Fixed text rendering with AMD GPUs on OSX 10.9.
- OpenGL ES: Fixed crashes with new Samsung firmware. (756734, 756754)
- OpenGL ES: Fixed mipmap generation for render textures. (751743)
- OpenGL: Fixed binary shader cache, cache was always disabled. (742591)
- OpenGL (legacy): Added work around buffer state tracking failure.
- Particles: Fixed error message spam on particle systems that have no particles (5.3.1 regression). (755423)
- Physics: Fixed memory corruption/crash when deactivating a collider from inside of OnTriggerStay. (744007)
- Physics: PlatformEffector2D now supports negative scaling on parent Transform. (755612)
- Profiler: Fixed excessive memory usage in development players.
- Samsung TV: Fixed the smarthub button problem.
- Samsung TV: Fixed wrong JPG library access problem.
- Scripting: UnusedByteCodeStripper2 will show a better error message when processing assemblies, so it will be easier to identify the offending assembly. (750266)
- Shaders: During surface shader generation, do not initialise non-vector type members of the Input struct, i.e. a struct/int/matrix as a member variable of the Input struct. (759336)
- Shaders: Fixed a bug in Standard shader GGX specular term, introduced in 5.3.
- Shaders: More proper environment reflection in Standard shader for very rough surfaces.
- Substance: Fixed a crash when checking/unchecking 'Generate all outputs' or 'Generate mipmaps' on OSX. (752039)
- Substance: Fixed a crash when reimporting SBSARs with multiple material instances on OSX. (751300)
- Substance: Fixed a rare crash that could happen around the destruction of animated ProceduralMaterials. (750442)
- Substance: Fixed console spam about unavailable material properties.
- Substance: Fixed editor stutter when using RebuildTextures on OSX. (663236)
- Substance: Fixed emission color being set to opaque white when resetting a ProceduralMaterial.
- Substance: Fixed textures not properly generated on player awake when affected only by constant inputs. (754556)
- Substance: Output textures from ProceduralMaterials without any input are now always generated.
- tvOS: Fixed missing symbols for simulator builds. (756044)
- tvOS: Fixed rendering path selector in player settings. (753925)
- tvOS: Fixed UnityEngine.Apple.TV.Remote API access in editor.
- UI: Added fix so that the placeholder text is enabled when the InputField is deactivated and the text is empty.
- UI: Fixed crash in some cases after deleting world space Canvas. (740782)
- UI: Removed remaining uses of multiple display system (temporary fix while non-native resolutions are not supported). (741751)
- VR: Fixed Lines & Trail rendering; was offset for one eye. (754068)
- VR: Fixed Render Scale not reverting after being edited in play mode. (731324)
- VR: Fixed VRDevice.isPresent reporting true on first frame if Device was not connected at start. (732236)
- VR: Stereo Cameras correctly respect camera depth when rendering to the game view and HMD. (753256)
- WebGL: Fixed a crash when setting Application.runInBackground, if Strip Engine Code is enabled. (758120)
- WebGL: Prevent browser from processing Arrow Keys. (740182)
- WebGL: Prevent browser from processing Backspace and Tab key presses. (747236)
- Windows Store: Fixed incorrect display of Korean characters on Windows 10 (if Korean language pack is not installed) and Windows Phone 10; Unity will now fall back to "Malgun Gothic" font.
- Windows Store: Fixed Directory.CreateDirectory() for forward slashes. (752546)
- Windows Store: Populate autorotation settings to screen manager. (751207)
- Windows Store: Fixed a build failure (rrw failure) when calling methods with System.Numerics.Matrix4x4 as parameter. (754533)
- Windows Store: Fixed AccessViolationException when initializing matchmaking in UNet. (747549)
- Windows Store: Fixed player crashing on startup on .NET scripting backend. (746301)
- Windows Store: Fixed Screen.SetResolution when upscaling lower resolution to fullscreen; previously you would see a corrupt image on the screen. (756086)
- Windows Store: Fixed TouchScreenKeyboard crashes when its members are used immediately after Open(). (755473)
- Windows Store: Fixed WheelCollider on x64 (NullReferenceException occurring). (730289)
- Windows Store: RunInBackground option will be respected when application window loses focus, and if enabled, the application will keep updating. Note: if the application window is minimized it will still be paused, because the OS suspends the application. (759166)

Revision: e87ab445ead0 Changeset: e87ab445ead0
https://unity3d.com/fr/unity/whats-new/unity-5.3.2
Hi Daniel,

I hope you are doing well. I've been struggling to run step-40 using Trilinos instead of PETSc. I appreciate it if you could let me know which parts of the code should be changed and how.

Documentation:

    namespace LA
    {
    #if defined(DEAL_II_WITH_PETSC) && \
        !(defined(DEAL_II_WITH_TRILINOS) && defined(FORCE_USE_OF_TRILINOS))
      using namespace ::LinearAlgebraPETSc;
    #  define USE_PETSC_LA
    #elif defined(DEAL_II_WITH_TRILINOS)
      using namespace ::LinearAlgebraTrilinos;
    #else
    #  error DEAL_II_WITH_PETSC or DEAL_II_WITH_TRILINOS required
    #endif
    }

Thanks

On Monday, July 4, 2016 at 5:00:51 PM UTC-5, Daniel Arndt wrote:
>
> Ehsan,
>
> All I can say: After switching the order of arguments in
> SparseMatrix::add, your code runs for me with a recent developer version
> and Trilinos at least.
>
> Best,
> Daniel
>
> On Monday, July 4, 2016 at 18:59:05 UTC+2, Ehsan Esfahani wrote:
>>
>> Dear Professor Bangerth,
>>
>> Thanks for your response. Yes, I did. As I mentioned, I got a backtrace
>> in the debugger (Eclipse) and I found out that the problem is in the line I
>> have mentioned, but I couldn't find out what in that line of the code
>> causes the segmentation violation.
>>
>> Best,
>> Ehsan
>>
>> On Sunday, July 3, 2016 at 4:32:16 PM UTC-5, bangerth wrote:
>>>
>>> On 07/03/2016 03:50 PM, Ehsan Esfahani wrote:
>>> > Dear All,
>>> >
>>> > Greetings. I changed step-25 (minor changes) in order to solve my problem.
>>> > Now I want to change this code for parallel computation based on the
>>> > method mentioned in step-40. I got several errors and solved them one by
>>> > one, except the following error:
>>> >
>>> > Number of active cells: 1024
>>> >   Total number of cells: 1365
>>> > {[0,4224]}
>>> > Time step #1; advancing to t = 0.1.
>>> > [...]
>>> > [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation,
>>> > probably memory access out of range
>>> > [...]
>>> >
>>> > Eclipse gives me a backtrace in the following line of the code:
>>> >     solver.solve (system_matrix, completely_distributed_solution_update,
>>> >                   system_rhs, preconditioner);
>>> > I have no idea why I got this error. The code is running properly for fe(1)
>>> > and n_global_refinements (4), but when I change them to fe(2) and
>>> > n_global_refinements (4) I get that error related to Segmentation
>>> > Violation. Do you know what's going on? Also, I have attached the code
>>> > here. Thanks in advance for your help.
>>>
>>> Ehsan,
>>>
>>> Segmentation violations (SEGV) are typically easy to debug because you can
>>> get a backtrace in the debugger of the exact place where it happens, and
>>> you can then look at the local variables to see why this may have happened.
>>> Have you tried to run the program in a debugger and see what is going on?
>>>
>>> Best
>>>  W.
>>>
>>> --
>>> ------------------------------------------------------------------------
>>> Wolfgang Bangerth               email: bang...@math.tamu.edu
https://www.mail-archive.com/dealii@googlegroups.com/msg01297.html
In the most recent episode of Edge Cases, Wolf and Andrew discuss dependency management, specifically as it pertains to Objective-C applications that import libraries using the Cocoapods tool.

In one app I worked on a few years ago, two different libraries each tried to include (as part of the libraries themselves, not as dependencies) the Reachability classes from Apple's sample code. The result was duplicate symbol definitions, because my executable was trying to link both (identical) definitions of the classes. Removing one of the source files from the build fixed it, but how could we avoid getting into that situation in the first place?

One way explored in the podcast is to namespace the classes. So Mister Framework could rename their Reachability to MRFReachability, and Framework-O-Tron could rename theirs to FOTReachability. Now we have exactly the same code included twice, under two different names. They don't conflict, but they are identical, so our binary is bigger than it needs to be. It'd be great if they both encoded their dependency on a common class but didn't try to include it themselves, so we could just fetch that class once and use it in both places.

Cocoapods's dependency resolution allows for that, and will work well when both frameworks want exactly the same Reachability class. However, we hit a problem again when they depend on different versions of a library that define the same names. Imagine that the two frameworks were written using different versions of libosethegame. The developers changed the interface when they went from v1.0 to v1.1, and Framework-O-Tron is still based on the older version of the interface. So just selecting the newer version won't work. Of course, neither does just selecting the older version. Can we have both versions of libosethegame, used by the two different frameworks, without ending up back with the symbol collision error? At least in principle, yes we can.
The dynamic loader, dyld (also described in the podcast), supports a two-level namespace for dynamic libraries. Rather than linking against the osethegame library with -losethegame, you could deploy both libosethegame.1.0.0.dylib and libosethegame.1.1.0.dylib. One framework links with -losethegame.1.0, the other links with -losethegame.1.1. Both are deployed, and the fact that they were linked with different names means that the two-level namespace resolves the correct symbol from the correct version of the library, and all is well.

Of course, if you've got dynamic libraries and the library vendor is willing to do a little work, they can just ship one version that supports all previous behaviour, looking at which version of the library the client was linked against to decide what behaviour to provide. Despite Mac OS X providing a versioned framework bundle layout, Apple has never (to my knowledge) shipped different versions of the system frameworks. Instead, the developers use the Mach-O load headers for an executable to find the linked version of their library, and supply behaviour equivalent to that version.

The above two paragraphs do rather depend on being able to use the dynamic linker. We can't, on iOS, at the moment.
http://www.sicpers.info/2013/05/page/2/
Hello. I'm trying to store a child class through a pointer to its base class, like this:

    baseClass* test = new subClass;

Which works and all, except I'm having trouble using 'delete' to destroy the newly created subClass. So something like this:

    baseClass* test = new subClass;
    delete test;

Here's a more accurate example of what I'm trying to do, exactly:

    #include <iostream>
    using namespace std;

    class baseClass
    {
    public:
        virtual void Blah() = 0;
    };

    class subClass : public baseClass
    {
    public:
        virtual void Blah() { cout << "I'm sub class 1" << endl; }
        ~subClass() { cout << "Sub class 1 destroyed..." << endl; }
    };

    class subClass2 : public baseClass
    {
    public:
        virtual void Blah() { cout << "Hi there! I'm sub class 2" << endl; }
        ~subClass2() { cout << "Sub class 2 destroyed..." << endl; }
    };

    int main()
    {
        baseClass* test;
        int input;
        cout << "Enter 1 or 2:" << endl;
        cin >> input;
        if (input == 1) {
            test = new subClass;
        } else if (input == 2) {
            test = new subClass2;
        } else {
            cout << "I said, 1 OR 2, bye bye" << endl;
            return 0;
        }
        test->Blah();
        delete test;
        return 0;
    }

The "Hi I'm subclass 1/2" thing comes up, but the "Sub class x destroyed..." does not, leading me to believe that it's not being destroyed. How can I delete the subclass with the 'test' pointer, without knowing if the user entered 1 (for subClass) or 2 (for subClass2)? Thanks.
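The derived destructor is skipped because baseClass has no virtual destructor: deleting a derived object through a base-class pointer whose destructor is not virtual is undefined behaviour in C++, and in practice only the base part is destroyed. Declaring the base destructor virtual fixes it. A minimal sketch (the subDestroyed flag is just for demonstration, not part of the original program):

```cpp
#include <iostream>

static bool subDestroyed = false;   // set by ~subClass, for demonstration

class baseClass {
public:
    virtual void Blah() = 0;
    virtual ~baseClass() {}   // the fix: a virtual destructor in the base
};

class subClass : public baseClass {
public:
    void Blah() override { std::cout << "I'm sub class 1\n"; }
    ~subClass() override {
        std::cout << "Sub class 1 destroyed...\n";
        subDestroyed = true;
    }
};

// Delete through the base pointer; with the virtual destructor in place,
// ~subClass() now runs first, followed by ~baseClass().
bool demo() {
    baseClass* test = new subClass;
    test->Blah();
    delete test;
    return subDestroyed;
}
```

The same one-line change in the original program makes both "Sub class 1 destroyed..." and "Sub class 2 destroyed..." print, regardless of which subclass the pointer actually holds.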
https://www.daniweb.com/programming/software-development/threads/301056/problems-with-deleting-child-class
For a reference type, you use the New keyword to create a new instance of the class or structure that is specified by the data type. If you use New, you do not use an initializer expression. Instead, you supply arguments, if they are required, to the constructor of the class from which you are creating the variable.

You can declare a variable in a procedure, block, class, structure, or module. You cannot declare a variable in a source file, namespace, or interface. For more information, see Declaration Contexts and Default Access Levels (Visual Basic).

A variable that is declared at module level, outside any procedure, is a member variable or field. Member variables are in scope throughout their class, structure, or module. A variable that is declared at procedure level is a local variable. Local variables are in scope only within their procedure or block. The following access modifiers are used to declare variables outside a procedure: Public, Protected, Friend, Protected Friend, and Private.

Specifying an Initial Value

You can assign a value to a variable when it is created. For a value type, you use an initializer to supply an expression to be assigned to the variable. The expression must evaluate to a constant that can be calculated at compile time. If an initializer is specified and a data type is not specified in an As clause, type inference is used to infer the data type from the initializer. In the following example, both num1 and num2 are strongly typed as integers. In the second declaration, type inference infers the type from the value.

Declaring Multiple Variables

You can declare several variables in one declaration statement, specifying the variable name for each one, and following each array name with parentheses. Multiple variables are separated by commas. If you declare more than one variable with one As clause, you cannot supply an initializer for that group of variables.
You can specify different data types for different variables by using a separate As clause for each variable you declare. Each variable takes the data type specified in the first As clause encountered after its variablename part.

Arrays

You can declare a variable to hold an array, which can hold multiple values. To specify that a variable holds an array, follow its variablename immediately with parentheses. For more information about arrays, see Arrays in Visual Basic.

You can specify the lower and upper bound of each dimension of an array. To do this, include an upper bound inside the parentheses. The upper bound determines the highest possible value of the index, not the length of the dimension. The length of the dimension is the upper bound plus one. An array can have from 1 to 32 dimensions.

You can leave all the bounds blank in an array declaration. If you do this, the array has the number of dimensions you specify, but it is uninitialized. It has a value of Nothing until you initialize at least some of its elements. The Dim statement must specify bounds either for all dimensions or for no dimensions. If the array has more than one dimension, you must include commas between the parentheses to indicate the number of dimensions.

You can declare a zero-length array by declaring one of the array's dimensions to be -1. A variable that holds a zero-length array does not have the value Nothing. Zero-length arrays are required by certain common language runtime functions. If you try to access such an array, a runtime exception occurs.

Default Data Types and Values

The following table describes the results of various combinations of specifying the data type and initializer in a Dim statement. If you specify a data type but do not specify an initializer, Visual Basic initializes the variable to the default value for its data type.

Static Local Variable Lifetime

A Static local variable has a longer lifetime than that of the procedure in which it is declared. The boundaries of the variable's lifetime depend on where the procedure is declared and whether it is Shared.
Attributes and Modifiers

You can apply attributes only to member variables, not to local variables. An attribute contributes information to the assembly's metadata, which is not meaningful for temporary storage such as local variables. At module level, you cannot use the Static modifier to declare member variables. At procedure level, you cannot use Shared, Shadows, ReadOnly, WithEvents, or any access modifiers to declare local variables.

You can specify what code can access a variable by supplying an accessmodifier. Class and module member variables (outside any procedure) default to private access, and structure member variables default to public access. You can adjust their access levels with the access modifiers. You cannot use access modifiers on local variables (inside a procedure).

Releasing Managed Resources

The following example declares variables by using the Dim statement with various options.

' Declare and initialize a Long variable.
Dim startingAmount As Long = 500

' Declare a variable that refers to a Button object,
' create a Button object, and assign the Button object
' to the variable.
Dim switchButton As New System.Windows.Forms.Button

' Declare a local variable that always retains its value,
' even after its procedure returns to the calling code.
Static totalSales As Double

' Declare a variable that refers to an array.
Dim highTemperature(31) As Integer

' Declare and initialize an array variable that
' holds four Boolean check values.
Dim checkValues() As Boolean = {False, False, True, False}

The following example lists the prime numbers between 1 and 30. The scope of local variables is described in code comments.

Public Sub ListPrimes()
    ' The sb variable can be accessed only
    ' within the ListPrimes procedure.
    Dim sb As New System.Text.StringBuilder()

    ' The number variable can be accessed only
    ' within the For...Next block. A different
    ' variable with the same name could be declared
    ' outside of the For...Next block.
    For number As Integer = 1 To 30
        If CheckIfPrime(number) = True Then
            sb.Append(number.ToString & " ")
        End If
    Next

    Debug.WriteLine(sb.ToString)
    ' Output: 2 3 5 7 11 13 17 19 23 29
End Sub

Private Function CheckIfPrime(ByVal number As Integer) As Boolean
    If number < 2 Then
        Return False
    Else
        ' The root and highCheck variables can be accessed
        ' only within the Else block. Different variables
        ' with the same names could be declared outside of
        ' the Else block.
        Dim root As Double = Math.Sqrt(number)
        Dim highCheck As Integer = Convert.ToInt32(Math.Truncate(root))

        ' The div variable can be accessed only within
        ' the For...Next block.
        For div As Integer = 2 To highCheck
            If number Mod div = 0 Then
                Return False
            End If
        Next

        Return True
    End If
End Function

In the following example, the speedValue variable is declared at the class level. The Private keyword is used to declare the variable. The variable can be accessed by any procedure in the Car class.

Public Class Car
    ' The speedValue variable can be accessed by
    ' any procedure in the Car class.
    Private speedValue As Integer = 0

    Public ReadOnly Property Speed() As Integer
        Get
            Return speedValue
        End Get
    End Property

    Public Sub Accelerate(ByVal speedIncrease As Integer)
        speedValue += speedIncrease
    End Sub
End Class
http://msdn.microsoft.com/en-us/library/Vstudio/7ee5a7s1.aspx
I've been working with the Reactive Extensions (Rx) library quite a bit lately and am very impressed. While it is a new way of thinking about services, it certainly makes life much easier. In this example, I'll show you a way to simplify your web service calls using Rx. In fact, even if you don't use Reactive Extensions, you may benefit from the proxy wrappers that I'll describe. I'm assuming you are familiar with Silverlight, web services, and have some exposure to the Managed Extensibility Framework. You'll also want to make sure you've got the latest version of Rx for Silverlight 4. Let's get started! First, create a new Silverlight 4 Application. Keep all of the defaults: we do want a web project, but we aren't using RIA. The Service: Server Side Let's create a simple calculator service. Sure, it is a simple example, but it will make it easier to focus on the details of Rx rather than puzzling over a more complex web service example. Create a new service and call it "Calculator." Just place it in the root of the web application. Create a contract and implement it, so that your service ends up looking like this: namespace RxWebServices.Web { [ServiceContract(Namespace = "")] public interface ICalculator { [OperationContract] long Add(int operand1, int operand2); } [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)] public class Calculator : ICalculator { readonly Random _random = new Random(); public long Add(int operand1, int operand2) { Thread.Sleep(TimeSpan.FromMilliseconds(_random.Next(1000) + 50)); return operand1 + operand2; } } } Notice I built in a delay. This is important to see how Rx helps us handle the asynchronous nature of web service calls. Go ahead and hit CTRL+F5 to build and run without debugging. This will set up the service end point for us to grab in Silverlight. Now, in your Silverlight project, let's set some things up. 
The Service: Client Side First, we want to add some references to both Reactive Extensions (Rx) and the Managed Extensibility Framework. Below, I've highlighted the references to add: Now we can add our service reference. Right-click references, choose "Add Service" and select "Discover Services in Solution". You should be able to select the calculator service. Put it in the namespace "Calculator Service" as depicted below. Making Things Easy: ServiceProxy Services can seem complex, but with the factory patterns provided by the framework and the support of relative paths, abstracting the creation of an end point is easy. I like to create a proxy class that manages the end points for me. In this example, I store the end point as a constant. However, you can easily make it a parameter for your Silverlight application and construct it on the fly. All my "consumer" really cares about is the service contract, not the details of how to wire in the service endpoint. So, let's make it easy. Take a look at the following class. The class itself is never instanced directly, but it will export the service contract so that wherever I import it, I'll have a fully wired version of the proxy ready to use. Create a folder called "Implementation" and add "ServiceProxy.cs". Your class will look like this: namespace RxWebServices.Implementation { public class ServiceProxy { private const string CALCULATOR_SERVICE = "../Calculator.svc"; private const string NOT_SUPPORTED = "Type {0} not supported"; private static readonly Dictionary<Type, Uri> _serviceMap = new Dictionary<Type, Uri> {{typeof (ICalculator), new Uri(CALCULATOR_SERVICE,UriKind.Relative)}}; public static T GetProxyFor<T>() { if (!_serviceMap.ContainsKey(typeof(T))) { throw new TypeLoadException(string.Format(NOT_SUPPORTED, typeof (T).FullName)); } return new ChannelFactory<T>(new BasicHttpBinding(), new EndpointAddress(_serviceMap[typeof (T)])).
CreateChannel(); } [Export] public ICalculator CalculatorService { get { return GetProxyFor<ICalculator>(); } } } } Take a look. We are mapping the service contract to the end points. In our case, it is relative to the site serving the Silverlight application. Because the application is in ClientBin, we back up one level to access the service. Note this will work just as easily for a service hosted somewhere else: I would simply specify a relative or absolute uri. We only have one service, but the dictionary makes it easy to map multiple ones. The export uses the channel factory to generate an instance and return the client. Our Internal Contract I rarely let the rest of my application concern itself with the details of the service. Any other area of my application is simply asking for results based on input, regardless of how it is obtained. Therefore, I'll create a very light contract for the calculator service internally - one that is easy to mock and test. Create a folder called "Contract" and add one interface, ICalculatorService. The interface looks like this: namespace RxWebServices.Contract { public interface ICalculatorService { IObservable<long> Add(int operand1, int operand2); } } Here is where things get interesting. You should be familiar with IEnumerable which we'll call a "pull" sequence of elements: you pull the values from the iterator. With Reactive Extensions, we invert this using IObservable to create a "push" sequence. With the push sequence, you subscribe and receive an event (pushed to you) when an element is available. In this case, we'll subscribe by sending in two operands, and wait to be pushed the result when it comes back. Wrapping the Service Now we've got a service proxy and an interface. Let's satisfy the contract. I'll show you the code, then explain it. 
Under the implementation folder, create a Calculator.cs class and wire it up like this: namespace RxWebServices.Implementation { [Export(typeof(ICalculatorService))] public class Calculator : ICalculatorService, IPartImportsSatisfiedNotification { [Import] public ICalculator CalculatorProxy { get; set; } private Func<int,int,IObservable<long>> _calculatorService; public IObservable<long> Add(int operand1, int operand2) { return _calculatorService(operand1, operand2); } public void OnImportsSatisfied() { _calculatorService = Observable.FromAsyncPattern<int, int, long> (CalculatorProxy.BeginAdd, CalculatorProxy.EndAdd); } } } Let's break it down. First, you'll notice we import the calculator service. This is the actual proxy we set up in the previous class. When the import is satisfied, we use a helper method provided by Rx to convert the asynchronous call into an observable list. The FromAsyncPattern takes in the types of the inputs, followed by the type of the output. It creates a function that, when called, returns an observable list of the results. In this case, we cast it from the beginning call to our calculator service to the return call. This is the way we take the asynchronous call and turn it into an observable list. When we actually want to use the method, we call the function with the inputs, and receive the output as the observable. Thus, we do all of the conversion internally, hide the implementation details, and just return a stream that can be subscribed to in order to fetch the results. Take a look at the signature for the actual service: private interface ICalculator { IAsyncResult BeginAdd(int operand1, int operand2, AsyncCallback callback, object asyncState); long EndAdd(IAsyncResult result); } To use Rx, we want a function that takes all of the inputs up until the AsyncCallback parameter, and returns an observable list of the return value. 
In this case, our two inputs are int, and it returns a long, so our function signature is Func<int,int,IObservable<long>>. By using these same types on the FromAsyncPattern extension method, Rx will return us the appropriate function and expect a pointer to the methods to start and end the call. Fibonacci Sequence Now we can get to the fun part: using the service. We'll use the service two different ways to illustrate how the observable lists work. In the MainPage.xaml, add some rows, a set of buttons, and a stackpanel. Generate code behind for the buttons. It will look something like this: <Grid x: <Grid.RowDefinitions> <RowDefinition Height="Auto"/> <RowDefinition Height="*"/> </Grid.RowDefinitions> <StackPanel Orientation="Horizontal" HorizontalAlignment="Center"> <Button Content=" GO " Click="Button_Click" Margin="5"/> <Button Content=" GO " Click="Button_Click_1" Margin="5"/> </StackPanel> <StackPanel Orientation="Horizontal" Grid. </Grid> Next, let's go to the code behind and wire in the first example. First, we'll add some properties we're going to be using: [Import] public ICalculatorService Calculator { get; set; } private IDisposable _sequence; private readonly Subject<long> _watcher = new Subject<long>(); private int _x, _y, _iterations; The first piece is the service, which we import using MEF. When we subscribe to services, we receive a disposable observer. In order to cancel observations in progress and start new ones, we'll keep a reference to this using the _sequence field. What's Your Favorite Subject? The subject is interesting. Subjects are used to set up a publisher/subscriber model. The subject here is a long. Anyone with access to the subject can publish (send it a long value) and/or subscribe (receive notifications when values are published). We'll use this to bridge between our UI and the service. Finally, we've got some local variables to use to keep track of state.
Next, we'll set everything up in the constructor: public MainPage() { InitializeComponent(); if (DesignerProperties.IsInDesignTool) return; CompositionInitializer.SatisfyImports(this); _watcher.ObserveOnDispatcher().Subscribe( answer => { var grid = new Grid { Width = answer, Height = answer, Background = new SolidColorBrush(Colors.Red), Margin = new Thickness(5, 5, 5, 5) }; var tb = new TextBlock {Margin = new Thickness(2, 2, 2, 2), Text = answer.ToString()}; grid.Children.Add(tb); MainSurface.Children.Add(grid); _Add(); }); } The first thing you'll notice is that if we're in the designer, all bets are off and we drop out. Otherwise, we compose the parts, which gives us our service import. Next, we'll subscribe to our subject. Notice that we don't have any service interaction yet. The subscription basically breaks down like this: - I'm interested in the subject with long values - When something happens, let me know on the dispatcher thread (as I'm going to do something with the UI) - When a long value is observed, give it to me: I'll make a grid as big as the value I received, put some text inside it, add it to the stack panel and then call the _Add method That's very simple and straightforward. Now we can explore the missing method. First, let's kick things off when the user clicks the first button. I want to use the add service to compute the Fibonacci sequence (each number is the sum of the previous two, started with 1 and 1). I'll implement the button click code-behind and add the missing method here: private void Button_Click(object sender, RoutedEventArgs e) { if (_sequence != null) { _sequence.Dispose(); } MainSurface.Children.Clear(); _x = 1; _y = 1; _iterations = 0; _watcher.OnNext(_x); } private void _Add() { _sequence.Dispose(); if (++_iterations == 20) { return; } _sequence = Calculator.Add(_x, _y).Subscribe(answer => { _x = _y; _y = (int)answer; _watcher.OnNext(answer); }); } So the first part should be straightforward.
If we had another sequence, dispose it. This will cancel any observations in progress. Clear the surface, initialize our variables, and then call the OnNext function on our subject. What's that? Simple: we just published a number. The subject will receive the number (1) and then push it to any subscriptions. We subscribed earlier, so we'll create a 1x1 grid and call the _Add method. This method is even more interesting. First, we stop after 20 iterations. No sense in going to infinity. Next, we subscribe to the calculator service. Subscriptions to observable lists are the same as subscriptions to subjects. We're asking to watch for a value, and initiating the "watch" by sending in our first values (1 and 1). When we receive the answer, we shift the numbers to continue the sequence, and then publish the number to the subject. This allows us to "daisy chain" service calls. We wait until we receive the first answer before we ask the next question. At this point, if you hit F5 (or CTRL-F5) to run it, and click the first button, you should see this: Note if you keep clicking while it is rendering, it will start over. There will be no "hanging" calls because the calls are daisy chained. We are also not blocking the UI while waiting, or you wouldn't be able to click the button again. You can clearly see the delays on the server as the results are returned. Here is a simplified overview of what is happening: Random Addition Now we'll throw another function into the mix. It's time to set up the second button. For this button, we're going to add two methods. The first is an enumerable that returns nothing but random numbers. 
It loops infinitely so we obtain as many numbers as we like, and we'll receive them in tuples: private static IEnumerable<Tuple<int,int>> _RandomNumbers() { var random = new Random(); while (true) { yield return Tuple.Create(random.Next(100), random.Next(100)); } } In the event handler for the second button, add this bit of code: private void Button_Click_1(object sender, RoutedEventArgs e) { if (_sequence != null) { _sequence.Dispose(); } MainSurface.Children.Clear(); _sequence = _RandomNumbers() .ToObservable() .Take(20) .Subscribe(numbers => Calculator.Add(numbers.Item1, numbers.Item2) .ObserveOnDispatcher() .Subscribe(result => { var text = string.Format("{0}+{1}={2}", numbers.Item1, numbers.Item2, result); var tb = new TextBlock {Margin = new Thickness(5, 5, 5, 5), Text = text}; MainSurface.Children.Add(tb); })); } This is a little different. First, we're taking the enumerable list of random numbers and turning it into an observable list so the values will be pushed to us. This is just by way of demonstration; we could have just as easily iterated the list with a foreach loop instead. What's interesting here is that I can limit how many I grab with the Take(20) extension. I subscribe like I do to any other observable list, and when I receive the next number pair, I turn around and subscribe to the calculator service to add the numbers for me. Instead of publishing the result to the subject, I'm handling it myself. I observe on the dispatcher thread, then add a text block with the addition statement to the stack panel. Go ahead and run the application, click the button, and you'll receive output that looks like this: Observations (Pardon the Pun) If you run this and click the go button, you might notice something interesting. No matter how many times you click, you get the full sequence of numbers. In other words, if I let 5 numbers come back, then click go, I'll receive a sequence of 35 numbers, not 25. 
Even more interesting is if you click the second go button, wait until most (but not all) of the 20 numbers return, then click the first go button. You'll see the screen clear, but you'll receive a few sequences of added numbers before the Fibonacci sequence starts. What's Going On? But we disposed of the subscription, right? Not exactly. In this implementation, we're always getting the same service subscription. The subscription we cancel is the outer observation. To better understand this, load up a tool like Fiddler and watch the service calls. In the first example, the call is made, there is a delay, it returns, and then the next call is made. In the second example, however, almost all of the calls are made at once. They return at different rates due to the delays on the server. So, when you start a new sequence, you subscribe to the same service and therefore get to watch the results return that hadn't made it back from the initial calls. This is important to understand as you are building your extensions, because in some cases you might want a new observable sequence, while in others it makes sense to keep the existing one. It depends on your needs and the desired behavior. Hopefully this will help open some new doors of understanding about Reactive Extensions! Nice post. A couple comments: 1) Generally the use of Subjects is discouraged. I think in this case, you can use Observable.Create instead. 2) Regarding your issue unwiring the subscriptions, .Subscribe returns an IDisposable object. If you hold onto that, you can dispose it to unwire the subscription and roll back the event stack correctly. Appreciate the feedback. Why are Subjects discouraged, out of curiosity? I'm obviously learning as I go and that's an interesting observation. Would like to understand more - any specific resources you can point me to? Thanks!
https://csharperimage.jeremylikness.com/2010/08/simplifying-silverlight-web-service.html
ChiChinLighting LED Motion Sensor Light Bulb 6 Watts Warm White PIR LED Light G60 E26 E27 Base 12.99 SODIAL(R) Infrared PIR Auto Switching Motion Sensor Detector Adjustable 6 LED Light Lamp 8.38 Solar Power 6 LED PIR Motion Sensor Light Outdoor Garden Wall Lamp for Waterproof Garden Lawn Lamps Landscape Yard Lights $ 38.15Get a Quote Only US$5.99, buy best solar power 20 led pir motion sensor wall light waterproof outdoor path yard garden security lamp sale online store at wholesale price. integrated solar panels for street light poles, you can buy good quality integrated supply solar street lamp outdoor street of page 2, we are integrated one led street light china solar distributor & integrated 40w all in two solar street lamp used for highway manufacturer from china market. 40watt integrated china solar powered led light, integrated / led road lamp 50000 hours lifespan: ip65 integrated solar led street light gst rate list for bags with 120 degree angleGet a Quote Light & Human Sensing. Solar lights have light sensors and human infrared sensing, can perceive light and human motion. When the night time sensing and the human sensor work at the same time, when someone walks in within a distance of 3 meters, the lamp automatically lights up and keep for 20-25 seconds.During the day, solar wall lights convert the charging state, even if someone is close will aluminum material outdoor ip65 80w street led lighting 100 watts solar 100w 18v solar panel all in one solar led street light; high lumen led flood light white ip66 outdoor cob led 200w flood light for sports field; new arrival light control cob ip65 integrated road lamp uv 150w ledGet. angel eye series outdoor solar led street lamp/integrated solar are the innovative model in all in one solar street lamp market. click to view more details. and lifepo4 lithium battery combine with high output leds and a human infrared sensor to achieve multiple features. 
lumen value up to 150lm/wGet a Quote.Get a Quote Solar Power PIR Motion Sensor Wall Lights 260 LED Outdoor Garden Pathway Lamp US. C $19.64. Free shipping manufacturer of best solar outdoor streetlight & streetlight 2021 - 16w sls lp solar street lights, 20w ssl m1 light, solar light - shenzhen daxie, solar street lights and 12w sls l solar street lights offered by visionary lighting & energy private limited, hyderabad, telanganaGet a Quote all in one street integrated solar street light led road (40w) $291.08. led: 40w 6500k li-ion battery: 70ah 3.7v solar panel: 30.6w install height: 4m ~ 6m waterproof : ip65 solar charging time: 10 hours by bright sunlight. add to cart. add to wishlist. product added! browse wishlist. the product isGet a Quote SUPER BRIGHT SOLAR POWERED LIGHTS: Solar lights outdoor equipped with 100 super bright LED beads, Solar motion senaor lights outdoor provides excellent illumination of up to 1000 lumens, Solar sensor lights which is far brighter than other similar LED solar lights.Get a Quote 100 LED Solar Powered PIR Motion Outdoor Garden Light Security Flood Wall Lamp. 5 out of 5 stars (1) Total ratings 1, £10.99 New £16.95 New. Rockline SL60684 2 LED Solar Powered Motion Sensor Outdoor Security Light. 4.1 out of 5 stars (35) Total ratings 35, £11.95 New. Solalite 36126SL Solar Powered Stainless Steel 2-in-1 Wall Light Installed stents helps you to get the better sunlight, charge more quickly, and expand the illumination area. 4 pack 30w max3000lm commercial street-light solar-controller led 24v and dc outdoor ip65 with remote post area lighting dusk to dawn pir motion sensor wall mount night lights (40 leds, 6500k) max3000lm super bright solar post lamp + 115.44wh lithium battery(replaceable) +Get a Quote 3 Head Solar PIR Motion-Sensor LED Light Outdoor Garden Wall Security Flood Lamp. £15.23 + £4.99 P&P. Seller 97.1% positive. 118LED Solar Powered PIR Motion Sensor Wall Security Light Lamp Garden Outdoor. £17.27. 
£18.18 previous price £18.18. Free P&P. Seller 98.2% positive. stadium lighting, best solar street lamp post 3 light outdoor pot, solar aluminum lighting manufacturer / supplier in china, offering best pyramid solar post street light weatherables aluminum stadium lighting with meanwell led phillips 3030, waterproof ip67 motion sensor outdoor all in one solar led street garden light, 150w polycrystalline solar panel with solar cell for solar system and so on.Get a Quote Free 2-day shipping. Buy Solar Street Light, 300W 600W 1000W LED Solar Lights Outdoor Street PIR Motion Sensor Outdoor Garden Wall Lamp for Park, Garden,Courtyard, Street, Walkway, Deck Waterproof at Walmart.comGet a Quote Cheap Solar Lamps, Buy Quality Lights & Lighting Directly from China Suppliers:ROMWISH Powerful Remote Control COB Solar Light Led Outdoor Solar Lamp PIR Motion Sensor Garden Wall Street Lights Decorative Enjoy Free Shipping Worldwide! Limited Time Sale Easy Return.Get a Quote. import quality tiffany led solar light with battery backup supplied by experienced manufacturers at global sources. we use cookies to give you the best possible experience on our website. for more details including how to change your cookie settings, please read our cookie policy .Get in factory ul2054 li-ion battery pack 3s2p lithium ion bateria 18650 11.1v 4ah/4000mah for smart solar garden street lamp used/bluetooth speaker fob price: us $9.85-10.55 / piece min. order: 100 piecesGet Built-in PIR Motion Sensor: The updated PIR motion sensor detects people up to 0~6m/0~20ft within the angle of 120 degrees. IP65 Weatherproof: Solar light waterproof rating IP65. 
No fear of dust and rain outdoor, suitable for any weather conditions.Get a Quote Solar Lights Outdoor with Motion Sensor, 800LM 112 LED Wireless Security Flood Light, 3 Adjustable Heads, 360° Rotatable Wide Angle Illumination, IP65 Waterproof for Porch Garage Yard Entryways PatioGet high brightness outdoor ip65 50 watt 100w 300w 200w solar led street light; 2020 new style lamp pole light waterproof outdoor led garden light with big solar panel. fob price: us $58.94 - $67.31 / piece new design 50 100 watt 30 100w 2021 dubai exhibition - solar lamp,solar lampGet a Quote
https://www.gierman.pl/f92b01311a363d3b9bb9b3f09834804c
NAME
     vnode - internal representation of a file or directory

SYNOPSIS
     #include <sys/param.h>
     #include <sys/vnode.h>

DESCRIPTION
     The vnode is the focus of all file activity in UNIX. A vnode is described by struct vnode. Its most important fields are v_mount, which points at the file system which owns the vnode; v_type, which contains the type of object the vnode represents; and v_data, which is used by file systems to store file system specific data with the vnode. The v_op field is used by the VOP_* macros to call functions in the file system implementation. The v_type field takes one of the following values:

     VNON   No type.
     VREG   A regular file.
     VDIR   A directory.
     VBLK   A block device.
     VCHR   A character device.
     VLNK   A symbolic link.
     VSOCK  A socket. Advisory locking will not work on this.
     VFIFO  A FIFO (named pipe). Advisory locking will not work on this.
     VBAD   An old style bad sector map.

IMPLEMENTATION NOTES
     Calls to malloc(9) or free(9) when holding a vnode interlock will cause a LOR (Lock Order Reversal) due to the intertwining of VM Objects and Vnodes.

SEE ALSO
     malloc(9), VFS(9)

AUTHORS
     This manual page was written by Doug Rabson.
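To make the description above concrete, here is a tiny illustrative model of the fields discussed. This is emphatically not the real struct vnode from <sys/vnode.h> (which carries locks, use counts, a VM object and much more); it only sketches v_type/v_data and the advisory-locking caveat the page states for VSOCK and VFIFO:

```cpp
#include <cassert>

// Illustrative model only; the real struct vnode lives in <sys/vnode.h>.
// The type constants mirror the values v_type can take.
enum vtype { VNON, VREG, VDIR, VBLK, VCHR, VLNK, VSOCK, VFIFO, VBAD };

struct vnode_model {
    vtype v_type;  // what kind of object this vnode represents
    void* v_data;  // file-system-specific data hangs off here
};

// Per the manual page, advisory locking will not work on sockets or FIFOs.
bool supports_advisory_locking(const vnode_model& vn) {
    return vn.v_type != VSOCK && vn.v_type != VFIFO;
}
```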
http://manpages.ubuntu.com/manpages/maverick/man9/vnode.9freebsd.html
. Each "vertex node" has a field which is an ArrayList. The ArrayList is a list of the "edge nodes" for which the particular vertex is a starting point of an edge which terminates at that "edge node". To traverse, you iterate down the "vertex list" to the starting point of interest, then out on the member "edgeList" to get to the nodes which are directly connected to that vertex.

Thanks a lot. Let me just make sure I understand this; I should have a class for vertex nodes, a class for edges, and a class for the graph itself. In the graph class, there's an ArrayList. Each value of the ArrayList (ArrayList[1], ArrayList[2], etc.) represents a vertex node, and contains a list of edges which begin at that vertex. Or does each value in the array list represent a vertex and contain a list of vertices which are connected to that vertex? When I reference edges, I mean the directed paths that lead between nodes in the graph. P.S. I'm not trying to be annoying or ask for too much here, but any pseudo-code or an actual example of something like this would be awesome if anyone knows where I can find it.

I am familiar with the terminology you are using. My suggestion is that, as you said, the graph class has a VertexList of VertexNode objects which have at least two data members: some means to identify the object (if you want) plus an "EdgeList" - an ArrayList of "EdgeNodes", one for each "edge" which radiates from the particular Vertex. The EdgeNode can store data such as weight (if you have any) and its name can just be some "edge" designation. Perhaps a structure like this ...

public class SparseGraph {
    private static int count;
    private ArrayList<VertexNode> myVertexList;

    private static class VertexNode {
        private int countEdges;
        private int nodeID;
        private ArrayList<EdgeNode> myEdgeList;
        // and so on ...
    }

    private static class EdgeNode {
        private int nameEdge;
        private int edgeWeight;
        private double edgeWeight2;
        // and so on ...
    }

    public SparseGraph() {
        // ...
    }

    // and so on
}

Alright. This gives me a much better idea of how to get this off the ground. Thanks a lot, and if I have more questions I'll be sure to ask!
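If it helps to see the shape of the data structure without the Java boilerplate, here is the same vertex-list/edge-list idea sketched in Python (just an illustration of the structure, not a drop-in for the Java class above; all the names here are made up):

```python
class EdgeNode:
    """One outgoing edge: the target vertex id plus an optional weight."""
    def __init__(self, target_id, weight=1.0):
        self.target_id = target_id
        self.weight = weight

class VertexNode:
    """A vertex and the list of edges that start at it."""
    def __init__(self, node_id):
        self.node_id = node_id
        self.edge_list = []

class SparseGraph:
    def __init__(self):
        self.vertex_list = {}  # node_id -> VertexNode

    def add_vertex(self, node_id):
        self.vertex_list.setdefault(node_id, VertexNode(node_id))

    def add_edge(self, src, dst, weight=1.0):
        # Adding an edge implicitly creates both endpoints.
        self.add_vertex(src)
        self.add_vertex(dst)
        self.vertex_list[src].edge_list.append(EdgeNode(dst, weight))

    def neighbors(self, src):
        # "Iterate out on the edge list" to find directly connected nodes.
        return [e.target_id for e in self.vertex_list[src].edge_list]

g = SparseGraph()
g.add_edge("A", "B")
g.add_edge("A", "C", weight=2.5)
print(g.neighbors("A"))  # ['B', 'C']
```

The key point is the two-level structure: the graph owns a collection of vertices, and each vertex owns only its outgoing edges, which is what keeps the representation cheap for sparse graphs.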
http://forums.devx.com/showthread.php?148238-Adjacency-list-representation-of-a-graph&mode=hybrid
Unicode has enabled the internet as we know it! It has largely resolved the issues with Mojibake (garbled text) between different character sets being used. Unicode is essentially a huge dictionary with mappings (for example, U+058E) that point to symbols we can use (֎). In the past, there were issues when different countries, corporations, or individuals all had different ideas about which character mapping should correspond to which symbols. This means you’d often try to read webpages ��like�� thisⅇ↫⋀₿≝┿⣶⤚⺬⨺➄⧻⒏⥱➯⢃⢻⽱⓽⅘⋵⟖➝⚠❐≷↼Ⲝ┃ℵ⚔☪⸺⧷ℱ⥹⧖⏗ₓ⡐⟄⏪┸⓳⡷⊢⠒┣⋰⡫. Messy, right?

Unicode compliance now allows us not only to reliably use languages other than English, but it also enables us to use other unique characters, like Egyptian hieroglyphics (𓀀 𓂕 𓁎), Sumerian Cuneiform characters (𒀖 𒁝𒃿), or emojis (😃). You can look at every emoji defined in the Unicode character set here. Using emojis has been proven to boost engagement by 25.4% in certain situations, which explains why — based on 6.7 billion tweets over the last decade — emoji use has never been higher.

But just knowing that Unicode has enabled consistent emoji use isn’t enough – there are still plenty of questions around including emojis, such as:

- How can we create reusable React components for them?
- How can we ensure the emojis are accessible for screen readers?
- What are the best practices when using them?
- Should we be using the emojis themselves or use the mapping instead?

Let’s dig in! ⛏️

Emoji usage in React

There are multiple ways to include emojis in a codebase, but some are better than others. Some of your options are:

- To copy and paste the emoji inline: 😃
- To reference the Unicode identifier/mapping of the emoji HTML entity, like: & #x1F603;
- To copy and paste the emoji inline, then wrap it in an HTML element: <span>😃</span>
- Install a dependency to deal with it

All of these would work!
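As a quick aside, the "mapping" really is just a number attached to each symbol. A few lines of Python (used here only because it makes the mapping easy to poke at; the rest of this article is React) show the code point behind the dog face emoji:

```python
import unicodedata

# Build the emoji from its Unicode code point, then inspect it.
dog = chr(0x1F436)           # the U+1F436 mapping
print(dog)                   # 🐶
print(hex(ord(dog)))         # 0x1f436 — round-trips back to the code point
print(unicodedata.name(dog)) # DOG FACE — the official Unicode name
```

The official name (`DOG FACE` here) is essentially what a screen reader announces by default, which is relevant to the accessibility discussion below.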
But “making it work” isn’t always the optimal solution, so let’s discuss the benefits and drawbacks of each of these approaches.

Using emojis inline

This is probably the simplest solution. Copy and paste the emoji, and then the job’s done. In addition, screen readers are smart enough to know how to read emojis. So, simply doing this would leave your application utilizing emoji, and, for the most part, they’d be accessible.

But emoji often aren’t as plain as screen readers can describe. They often take on a second meaning. For example, do you know these “second meanings” of emojis? The goat emoji is an acronym for the Greatest Of All Time, so people may often say something like the greatest rapper of all time (GOAT) is Tupac, or the greatest hockey player of all time (GOAT) is Wayne Gretzky. The snake emoji is commonly used to describe people backstabbing, or being two-faced. Two clapping hands are commonly used to emphasize a point, like 👏 this 👏 between 👏 each 👏 word. Note that this particular type of writing is terrible for screen readers and discouraged for keeping your content friendly to screen readers. If you are interested in reading more about accessible emoji usage, here is an excellent resource, and so is this one.

But screen readers can’t convey that meaning. Even if they did, with language changing so much, the definition could be wrong and would need to constantly be updated. It’s for this reason I would advise against using emoji inline. That’s one emoji approach ticked off. Let’s talk about another.

Use the HTML entity Unicode mapping

You can use an emoji’s hex or decimal code points from Unicode directly in your code, so that something like this:

<!DOCTYPE html>
<div>
    <h1>Unicode Character</h1>
    <h3 style="display: inline">(U+1F436)</h3>
    <h3 style="display: inline">Dog Face</h3>
    <h3 style="display: inline">&#x1F436;</h3>
</div>

Would render the below: You could find these hex/decimal representations pretty easily, too.
Here is an example of a huge list of emoji HTML hex codes. Or you can find them via a lookup table available here. Even using this method still doesn’t fix the issues we mentioned about second meanings being lost, and, as a developer, I think it is harder to work with. It’s much easier to see the emojis in your code like this: 😃, rather than read their mappings like so: & #x1F603; When you copy/paste an emoji directly, you immediately know what the emoji is and the context in which it is used. Emojis keep their semantic meaning a little better, and I’d argue that they are simpler to work with.

Wrap the inline emoji in a DOM Element

This is the best approach. You can simply wrap an inline emoji with a basic DOM element, something like this:

<span role="img" aria-label="dog">🐕</span>

This allows you to add better alt text if you think what a screen reader would pick is unclear with your writing. If you are unsure what a screen reader would read for each emoji, you can search on Emojipedia, and the title of the emoji is what a screen reader would likely say. Some examples are:

- 🥰: Smiling face with hearts
- 🌏: Globe showing Asia-Australia
- 🕴️: Person in suit levitating

Writing quality alt text is hard, and if you decide to re-word the default text slightly to better convey your meaning, remember that emotion matters. For optimal reuse, you could make a very simple functional component that passes in all the necessary meta-data, something like this:

import React from 'react';

const Emoji = props => (
    <span
        className="emoji"
        role="img"
        aria-label={props.label ? props.label : ""}
        aria-hidden={props.label ? "false" : "true"}
    >
        {props.symbol}
    </span>
);

export default Emoji;

Then this component could be imported and used in a standardized way throughout the codebase:

<Emoji symbol="🐑" label="sheep"/>

With this method, you ensure a consistent and reusable pattern to use emoji in your codebase, and because <span> displays inline by default, it can be used right in the middle of text with minimal CSS rework. You can provide sensible defaults for the component's properties, such as defaulting the aria-hidden to false if that also suits your use cases. Doing so ensures your emojis are accessible, where you can explain any second meanings or extra explanation you want to accompany your emoji.

Install a dependency to deal with it

Installing a dependency makes the job easier, but it is generally less configurable if you need to do something specific or have a unique use case. There are several great npm packages available: emoji-picker-react, Unicode Emoji, or node-emoji. They are all easily and similarly installed. For example, emoji-picker-react has a really simple setup. To include it in your dependencies, simply run npm i emoji-picker-react. It offers a dropdown option for the emoji you want to choose. The npm package does use React Hooks, so you will need to use React 16.8 or higher! The docs have a useful explanation of how to include it also:

import React, { useState } from 'react';
import Picker from 'emoji-picker-react';

const App = () => {
    const [chosenEmoji, setChosenEmoji] = useState(null);

    const onEmojiClick = (event, emojiObject) => {
        setChosenEmoji(emojiObject);
    };

    return (
        <div>
            {chosenEmoji ? (
                <span>You chose: {chosenEmoji.emoji}</span>
            ) : (
                <span>No emoji Chosen</span>
            )}
            <Picker onEmojiClick={onEmojiClick} />
        </div>
    );
};

Conclusion

I hope this has been informative! 🤓 There are multiple ways to add emojis into your React app, but with accessibility and reusability in mind, a simple functional component should fit almost every single use case.
https://blog.logrocket.com/adding-emojis-react-app/
Hey there, how are you? I'm an 18-year-old backend developer and an aspiring Machine Learning Engineer. And in this article, I'm going to be writing about how to build a web app on your phone using Python 😁. Let's dive into it.

Requirements

The first thing we need here is an Android phone running at least version 6.0. But what if I told you that's all we need? Seems too good to be true. Now the next thing we need to do is install a mobile application on our phone called pydroid3. As you can see, pydroid3 is a mobile application that lets you write Python on your mobile phone, so go ahead and install it. The next thing we need to do is install Django. If you're not familiar with Django, please check out the Django docs here. To install Django we need to open up the side navigation in our pydroid3 and select Terminal: Then click on it and we should see this: Once that is done, all you need to do is type the following command:

pip install django

And you should get the below. I am getting a "requirements satisfied" message because I already have it installed. It has installed successfully, but let's confirm that. In the terminal type django-admin and hit enter. You should get this: This means that it's actually installed already.

How to Build our Project

So let's get started with building our project. Open up your terminal and type in the following command:

django-admin startproject myapp

This creates a Django application called myapp in your root folder. Change directory to it by typing cd myapp and type in python manage.py runserver. Then you should get this: Now the server has started. Next, to test it in the browser, visit 127.0.0.1:8000. And boom! You should see that Django has been set up successfully. The next thing we need to do is create our Django app. In Django, the project folder serves as the root while the app serves as the application itself. To create a Django app, make sure you are still in the directory, then type python manage.py startapp todo.
This creates a To-do app in our myapp project like this: Then inside the todo folder we should see something like this: We will take a further look at the files when we begin working with them.

How to Configure our Application

Now let's make it possible for the app to be served by the Django project. First of all, open up your settings.py file in the myapp folder and add 'todo' to the installed apps like this: Next we need to open up our urls.py and add the following to your code:

from django.urls import path, include

path('', include('todo.urls'))

What actually happened was that I added include to the from django.urls import path line. And below the existing path('admin/', ...) entry, we created an empty path that includes the urls.py file from the todo app directory. I hope that's clear. Next we need to create a new file in the todo directory named urls.py and add the following code to it:

from django.urls import path
from . import views

urlpatterns = [
    path('', views.index, name='home')
]

We imported path from django.urls and also imported views from the current directory. Then we created our urlpatterns with the first entry as the root link. As you can see, views.index just means that we're pointing this URL to the index function in our views.py file. You will see how that works in a jiffy. Let's go ahead to our views.py file and add some code. At the top, import HttpResponse like this:

from django.http import HttpResponse

And add this below it:

def index(request):
    return HttpResponse('Hello')

As you can see, we created the index function we called in our urls.py and passed a request parameter into it. Then we returned an HttpResponse. But before the HttpResponse can work, we have to import it with from django.http import HttpResponse – as simple as ABC. Let's try this: open up your terminal, cd into myapp, and type python manage.py runserver to test it. As you can see, it returned the response. So next we will load our template HTML files.
To load our HTML files we need to create folders in the todo directory in this order: todo/templates/todo. In the todo directory, create a folder called templates. Inside that folder, create a folder called todo, as simple as that. Then go ahead and create a simple HTML file called index.html and write this in it:

<h1>Hello world</h1>

To load it, make your views.py code look like this:

def index(request):
    return render(request, 'todo/index.html')

Now, instead of returning an HttpResponse, we return render(), which renders our HTML template (render is already imported at the top of the generated views.py via from django.shortcuts import render). Save this, open up your terminal, cd into myapp, and run it. We should have this: As you can see, it works well - on to the next step.

How to Set Up the Static Files

Now to set up the static files, create a new folder in your todo directory and name it static. Inside that folder, create a folder and name it todo. So it should be like this: /static/todo/. In the todo directory, create a file and name it main.css. Then let's write a little styling in it:

body {
    background-color: red;
}

And save it. Now let's re-edit our index.html file by writing this code:

{% load static %}
<!Doctype html>
<html>
<head>
    <title>My page</title>
    <link rel="stylesheet" href="{% static 'todo/main.css' %}" >
</head>
<body>
    Hello
</body>
</html>

And now let's run it: If you've followed along with me, then you should have the above.
Login and you will be directed to the dashboard: Now that we have done the admin panel, let's work with the model (database). We'll create a model that collects contents. So open your models.py file and type in this code:

class Post(models.Model):
    content = models.CharField(max_length=255, null=False)

    def __str__(self):
        return self.content

We create a class that inherits from models.Model and give it a field called content that holds a CharField(), essentially a text field (the models module is already imported at the top of the generated models.py via from django.db import models). Lastly, we create a magic __str__ method that returns the content of the model instead of a generic object representation. So next we need to run the migration. Open your terminal, cd into myapp, and type python manage.py makemigrations. You should see this: That means it has created a migration for the Post table in our database. Then also run python manage.py migrate, which will result in the following: This means that all is clear. Now to add it to the admin page, open up admin.py and type in this code:

from .models import *

admin.site.register(Post)

We imported all model classes from the model and registered the Post model in the admin panel. Now if we open the admin panel, we should see the post and save some data. Notice that it's now in the todo app list: After clicking on it you should see this: Then you can create a post if you like.

How to Render Data from DB to View

Lastly, we will fetch our data from the DB. To do so we need to update our views.py as follows:

from .models import *

def index(request):
    content = Post.objects.all()
    context = {'content': content}
    return render(request, 'todo/index.html', context)

It's as simple as that: we imported all from models.py, created a variable called content, and retrieved all the data from the table Post. Then we passed it as a dictionary to our view. So in our index.html, to make it work, just add this:

{% for item in content %}
    {{ item.content }}
{% endfor %}

Here, we wrote a loop using the template tags and fetched all the data content.
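By the way, the magic __str__ method mentioned above is not Django-specific; it works the same way on any Python class. A minimal standalone sketch with a plain class (no database involved):

```python
class Post:
    """Plain-Python analogue of the Django model, just to show __str__."""
    def __init__(self, content):
        self.content = content

    def __str__(self):
        # Without this, print(post) would show something like
        # <__main__.Post object at 0x...> instead of readable text.
        return self.content

post = Post("my first post")
print(post)       # my first post
print(str(post))  # my first post
```

This is exactly why the Django admin shows the post's text in its lists rather than an opaque object label.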
Now open your terminal, cd into myapp, and run the server to see the magic happen: It works, but let's confirm that it does: And the result should be the following: Voilà – it works fine. Lastly, you can just add a line break so you can read it more clearly. And we're done! Thank you for reading. If you want to go through an in-depth Django tutorial, please visit my YouTube channel Devstack and subscribe.
https://www.freecodecamp.org/news/how-to-code-on-your-phone-python-pydroid-android-app-tutorial/
Hey Everybody, I'm making a Java applet and trying to post it online. Whenever I open the HTML page where the applet is, I get an error message. Here's the website with the applet on it: Applet Website Link Thanks in advance, C++

this is the buggy line

<applet code = "/" width = "400" height = "180">

you need to remove the last slash

<applet code = "" width = "400" height = "180">

Is that your website?

Sorry to double post, but: tonakai, it worked but the applet freezes on the last JOptionPane screen. Here's my code: import java.awt.Graphics; import javax.swing.*; import java.awt.*; public class quiz extends JApplet { double t; int corCount; String input1; String input2; String input3; String input4; String input5; String input6; String input7; String input8; String input9; String input10; public void init(){ try{ JOptionPane.showMessageDialog(null, "Welcome to the quiz.\n\n" + "For multiple choice questions, enter the best LETTER.\n" + "For true / false quetions, enter 'T' or 'F'.\n\n" + "This quiz consists of 10 questions.", "R E A D T H I S M E S S A G E !", JOptionPane.WARNING_MESSAGE); input1 = JOptionPane.showInputDialog(null, "True or False:\n" + "Cancer cells can mutilate and become resistent to\n" + "chemotherapy drugs.\n", "TRUE OR FALSE ('T' or 'F')", JOptionPane.INFORMATION_MESSAGE); if (input1.equals("T") || input1.equals("t")) { corCount++; } input2 = JOptionPane.showInputDialog(null, "Multiple Choice:\n" + "Ectoderm gives rise to which of the following:\n\n" + "A nerve+skin\n" + "B internal organs\n" + "C muscles and bone\n" + "D all of the above\n", "Multiple Choice", JOptionPane.INFORMATION_MESSAGE); if (input2.equals("A") || input2.equals("a")) { corCount++; } input3 = JOptionPane.showInputDialog(null, "True or
False:\n" + "The hair-like structures covering the\n" + "paramicium are flagella.\n", "TRUE OR FALSE ('T' or 'F')", JOptionPane.INFORMATION_MESSAGE); if (input3.equals("F") || input3.equals("f")) { corCount++; } input4 = JOptionPane.showInputDialog(null, "True or False:\n" + "The chloroplast detects light in a euglena.\n" , "TRUE OR FALSE ('T' or 'F')", JOptionPane.INFORMATION_MESSAGE); if (input4.equals("F") || input4.equals("f")) { corCount++; } input5 = JOptionPane.showInputDialog(null, "Multiple Choice:\n" + "While cloning, a specialized cell nucleus is inserted into:\n\n" + "A another specialized cell\n" + "B a normal egg\n" + "C an egg that had its nucleus removed\n" + "D the uterus of a female\n", "Multiple Choice", JOptionPane.INFORMATION_MESSAGE); if (input5.equals("C") || input5.equals("c")) { corCount++; } input6 = JOptionPane.showInputDialog(null, "True or False:\n" + "100,000,000,000 neurons are in a piece of tissue the size of a grain of rice.\n" , "TRUE OR FALSE ('T' or 'F')", JOptionPane.INFORMATION_MESSAGE); if (input6.equals("F") || input6.equals("f")) { corCount++; } input7 = JOptionPane.showInputDialog(null, "True or False:\n" + "The PR is in the bottom of the bud of the hand.\n" , "TRUE OR FALSE ('T' or 'F')", JOptionPane.INFORMATION_MESSAGE); if (input7.equals("T") || input7.equals("t")) { corCount++; } input8 = JOptionPane.showInputDialog(null, "True or False:\n" + "The SRC gene is the key to determining gender.\n" , "TRUE OR FALSE ('T' or 'F')", JOptionPane.INFORMATION_MESSAGE); if (input8.equals("F") || input8.equals("f")) { corCount++; } input9 = JOptionPane.showInputDialog(null, "True or False:\n" + "Select the gene rules of pairing.\n\n" + "A T,C and A,G\n" + "B A,T and G,C\n" + "C T,G and A,C\n" + "D both b and c\n" , "TRUE OR FALSE ('T' or 'F')", JOptionPane.INFORMATION_MESSAGE); if (input9.equals("B") || input9.equals("b")) { corCount++; } input10 = JOptionPane.showInputDialog(null, "True or False:\n" + "Cells need glucose to 
construct new parts.\n" , "TRUE OR FALSE ('T' or 'F')", JOptionPane.INFORMATION_MESSAGE); if (input10.equals("T") || input10.equals("t")) { corCount++; } if (corCount >= 8) { JOptionPane.showMessageDialog(null, "\nNumber Correct: " + corCount + " / 10", "Congratulations!!!", JOptionPane.INFORMATION_MESSAGE); } else if (corCount >= 5) { JOptionPane.showMessageDialog(null, "\nNumber Correct: " + corCount + " / 10", "Better Luck Next Time...", JOptionPane.INFORMATION_MESSAGE); } else { JOptionPane.showMessageDialog(null, "\nNumber Correct: " + corCount + " / 10", "Go Back to Preschool", JOptionPane.INFORMATION_MESSAGE); } } catch(Exception e){ } } public void nothing(){ try{ //this method halts the program - does nothing, hence the name nothing } catch(Exception e3){ } } public void paint(Graphics g) { super.paint(g); try{ g.setFont(new Font("arial", Font.BOLD, 16)); g.drawString(" Q U I Z ! ! ! ", 115, 15); g.drawRoundRect(45, 30, 270, 20, 15, 15); g.setFont(new Font("arial", Font.BOLD + Font.ITALIC, 14)); g.drawString("Questions Correct: " + corCount + " out of 10", 50, 45); g.setFont(new Font("arial", Font.PLAIN, 12)); g.drawString("Your Input:", 155, 70); g.drawString("1.) " + input1 + " ", 10, 80); g.drawString("6.) " + input6 + " \n", 300, 80); g.drawString("2.) " + input2 + " ", 10, 100); g.drawString("7.) " + input7 + " \n", 300, 100); g.drawString("3.) " + input3 + " ", 10, 120); g.drawString("8.) " + input8 + " \n", 300, 120); g.drawString("4.) " + input4 + " ", 10, 140); g.drawString("9.) " + input9 + " \n", 300, 140); g.drawString("5.) " + input5 + " ", 10, 160); g.drawString("10.) " + input10 + " \n", 300, 160); String in = JOptionPane.showInputDialog(null, "Type \"answers\" to view the answers,\n" + "\"exit\" to exit, and \"ignore\" to ignore this message.", "Answers", JOptionPane.QUESTION_MESSAGE); if (in.equals("answers") || in.equals("ANSWERS")) { JOptionPane.showMessageDialog(null, "Answers (Correct Letter)\n\n" + "1.) 
Cancer cells can mutilate to become resistent to chemo ( T )\n\n" + "2.) Ectoderm gives rise to nerve and skin ( A )\n\n" + "3.) Hair structures are called cilia ( F )\n\n" + "4.) The eyespot detects light ( F )\n\n" + "5.) Cell nucleus is inserted into an cell without a nucleus ( C )\n\n" + "6.) 100,000 neurons in brain tissue are about the size of rice ( F )\n\n" + "7.) The PR is at the bottom of the hand bud ( T )\n\n" + "8.) The SRY gene is the key to determining gender ( F )\n\n" + "9.) A,T and G,C are the rules of pairing ( B )\n\n" + "10.) Cells need raw materials to construct new parts ( T )", "Answers", JOptionPane.WARNING_MESSAGE); nothing(); } else if (in.equals("exit") || in.equals("EXIT")) { System.exit(0); } else { nothing(); } } catch(Exception e2){ } } } Can you help me? Thanks in advanced for your help, C++ server_crash, that is my website - do u like it? Yes, looks very nice. well, i run your program and it crushes on my computer too(wow, its portable....) :) i think putting these code may cause it, but i am not sure String in = JOptionPane.showInputDialog(null, "Type \"answers\" to view the answers,\n" + "\"exit\" to exit, and \"ignore\" to ignore this message.", "Answers", JOptionPane.QUESTION_MESSAGE); in your "paint" method, because it not totally inyour hand at when it is going to called. so putting selections it not a good way in paint method, paint method needs to be fast, i think move it somewhere else :D and using an array string makes your code shorter.... see ya, thank u for responding. I don't think having some JOptionPane statements in the paint method is the main problem. I think that ur correct, and it has to do with the JOptionPane u pointed out. I tried making another method - but i get the same result. More help would be appreciated :( :( server crash, thanks. the website's for science class (u probably figured that out). 
My teacher's paying for it ;) Do you have any suggestions?

well then, I must insist that it is what causes the bug. Every time the applet tries to paint itself, it just re-asks you the same question... putting things like JOptionPane in paint is still not a good idea I think, but you can create a first-time flag: when you start the program, initialize it to false; then, before you display your JOptionPane in the paint method, check if it is false. If false, display the JOptionPane and make it true... but it is a poor solution.

The only thing I see that needs to be changed is the JOptionPanes. They pop up when you browse to that page. I think the user should have the choice of taking it, although that is the page for taking it! I believe if you got rid of the JOptionPanes it would look much better.

tonakai, what you helped me with was working perfectly... until now. I get the same error message as before, bad magic number. My website is: can you help me and also explain what the magic number is? Thank you SO much in advance,
https://www.daniweb.com/programming/software-development/threads/21168/java-applet-help
non slippery wood flooring export compare ... import export 12mm high gloss non slip flooring laminate flooring. ... outdoor engineered wood porch flooring materials export dubai wpc porch. anti temperature wood flooring export blog - wholesale composite decking products, durable wpc floor ... when to use ... outdoor engineered wood porch flooring materials export dubai wpc. outdoor engineered wood porch flooring materials export dubai wpc porch flooring materials is made of wood-plastic compounded material , and the floor can be machined as the same as the wood ... outdoor engineered wood porch flooring materials export dubai ... custom wpc diy decking material. diy wood deck wpc - seven trust diy outdoor wpc deck tile wood plastic composite board . ... anhui sentai wpc new material co., ltd. under anhui sentai group ... the hometown of bamboo in ... wpc composite deck exports - kompozit ahşap deck fiyatları ... patio deck portable vietnam · ecological products wooden post suppliers · plastic extrusion... wpc material floor, wall panel, fence, outdoor furniture - shanghai ... bench · economical outdoor solid wpc floor material anti skids. solid decking · cheap diy wpc floor material in patio ... wall panel anti-uv use wpc material... outdoor porch wood plastic composite railing - cheap wpc floor ... you have just completed your porch project, now need outdoor railing. ... wpc decking, -- solid decking, -- hollow decking, -- diy decking ... one of the types of low maintenance porch railing materials is outdoor porch wood plastic composite railing. ... every year, we will be our products are exported to all over the world,... china bio fiber pe wpc decking used near dock /vinyl composite ... china bio fiber pe wpc decking used near dock /vinyl composite flooring ... waterproofing material wood composite diy tiles wpc deck tile and so on. .... outdoor patio e1 waterproof wood wpc material products flooring export price. 
import wpc decking - wpc decking plank wpc composite import export, - outdoor wpc decking board,wood ... wpc composite ... russia import wholesale wpc decks floor - patio decking material ... indonesia import wholesale wpc decking; graceful outdoor diy wpc floor tile;... anti skid water wood flooring export anti skid material for wood export anti-slip floor paint - anti slip tape, uk - paco systems ... composite decking material blog - wpc outdoor flooring anti skid pool fence . ... anti-skid porch and floor paint coating in a long,. ... composite decking brands reviews · outside stair railing diy floor · how to use for waterproofing... best outdoor decking wood, best product for outdoor decking ... we offer the best price for composite decking material, that is wood plastic composite. ... we are wood plastic composite decking production and exporter in china. ... be sure to choose the right color for the flooring material of your patio area. ... wpc outdoor decking, skid wood flooring, diy flooring,if you are interested,...
https://www.lindeslane.com.au/deck2/3623-diy-porch-wpc-material-export.html
Introduction

Recently, many live cameras/CCTVs (closed-circuit television) have been installed in many places such as offices, roads, and homes in order to help human work. CCTVs are installed for various purposes, such as security, surveillance, and data analysis. One of the challenges of installing CCTVs is data management. Most CCTVs are set to capture data/images in near-realtime conditions. If a CCTV takes an image every minute and each image is 50 KB, then we need 72 MB of hard disk capacity per day, or 26.28 GB per year. If the purpose of using CCTV is to analyze data, we need to collect data over a long time span. At this scale we can say this is big data; therefore we need to compress the data.

Compressing live camera data is commonly done in the following two phases: In the first phase, the CCTV compresses the data which will be delivered to the data centre. This is commonly done by JPEG compression. In the second phase, we compress the data in the data centre. What I want to explain in this article is the compression done in the second phase. In this phase, images are commonly compressed by the image subtraction method, then the run-length encoding method. Here we increase the compression rate of the method above by calculating the similarity of images.

Required Application and Libraries

- Python 3.5
- OpenCV 3.1
- scikit-image (compare_ssim) 0.13.0
- PIL

Tested on macOS Sierra 10.12.2 and Linux (Ubuntu 16.04)

Implementation

1. Explanation about the compression method

We combine two methods: first, the image subtraction method, then the run-length compression method. The image subtraction method increases the number of redundant pixels by subtracting pixels of two similar images; we then compress the redundant pixels with the run-length encoding method.
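The storage figures from the introduction are easy to double-check with a few lines of Python, using the example numbers of 50 KB per image and one image per minute:

```python
# Back-of-the-envelope check of the storage estimate above.
kb_per_image = 50          # one CCTV frame, as in the example
images_per_day = 60 * 24   # one capture per minute
mb_per_day = kb_per_image * images_per_day / 1000
gb_per_year = mb_per_day * 365 / 1000

print(mb_per_day)   # 72.0 MB per day
print(gb_per_year)  # 26.28 GB per year
```

(For simplicity, this uses 1000 KB = 1 MB, matching the article's round numbers.)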
The formula to subtract the image is below:

C = B - A

where:
C is the subtracted image (image member)
B is the subtraction subject
A is the subtraction object (image key)

The algorithm of image subtraction uses an iteration that compares the similarity between two images. If the similarity is less than the threshold, then the image member becomes the new image key; otherwise, the image key stays the same until the iteration finds an image member whose similarity to the key falls below the threshold. In the next step, the program compresses the image key and image member using run-length encoding compression. Below is the pseudocode:

image key = ""
while image(n) < total image
    image member = ""
    if (image key == "")
        image key = image(n)
        image member = image(n+1)
    else
        image member = image(n)
    endif
    if similarity(image key, image member) < threshold
        if (image key == image(n)) compress(image key) endif
        compress(image member)
        image key = image member
        continue
    endif
    image subtracted = image member - image key
    compress(image subtracted)
    if (image key == image(n)) compress(image key) endif
end

Based on that algorithm, the compression rate depends on the total number of image keys and image members. The more image members and the fewer image keys that are created, the higher the compression rate we will get.

2. How does image subtraction work?

The image subtraction method uses the OpenCV subtract function. If a pixel is the same in both images, then that pixel will be colored black, or (0, 0, 0) in RGB form.
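As a rough sketch (not the article's actual implementation), the key/member selection loop from the pseudocode can be written in plain Python, with the similarity function left as a pluggable stub:

```python
def select_keys(images, similarity, threshold):
    """Split an image sequence into key images and member images.

    A new key is started whenever the current image is not similar
    enough to the current key; otherwise the image becomes a member
    (and would be stored as member - key, then run-length encoded).
    """
    keys, members = [], []
    key = None
    for img in images:
        if key is None or similarity(key, img) < threshold:
            key = img
            keys.append(img)
        else:
            members.append(img)
    return keys, members

# Toy "images" (just ints) and an exact-match similarity measure.
sim = lambda a, b: 1.0 if a == b else 0.0
keys, members = select_keys([1, 1, 2, 2, 3], sim, threshold=0.5)
print(keys, members)  # [1, 2, 3] [1, 2]
```

In the real pipeline, similarity would be something like scikit-image's compare_ssim on the two frames, and each member would be replaced by its subtraction against the current key before compression.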
Below is a sample of image subtraction using OpenCV and the result:

img1 = cv.imread("./img1.jpeg")
img2 = cv.imread("./img2.jpeg")
img3 = cv.subtract(img2, img1)

Here we compare the original image and the subtracted image, counting the black pixels with PIL's im.load():

im = Image.open("./data/img1.jpeg")
pix = im.load()
w, h = im.size
total_ori = 0
for i in range(0, w):
    for j in range(0, h):
        if pix[i, j] == (0, 0, 0):
            total_ori = total_ori + 1

im2 = Image.open("./data/img3.jpeg")
pix = im2.load()
w, h = im2.size
total_subtraction = 0
for i in range(0, w):
    for j in range(0, h):
        if pix[i, j] == (0, 0, 0):
            total_subtraction = total_subtraction + 1

print("original image 0 pixel: %d" % total_ori)
print("subtraction image 0 pixel: %d" % total_subtraction)

Result:

original image 0 pixel: 25
subtraction image 0 pixel: 112833

Based on the result above, it is clear that the image subtraction method can increase the number of consecutive pixels of identical color. The next step is to compress the subtracted images with run-length encoding.

3. How does run-length encoding work?

Run-length encoding compresses runs of repeated values. There are many types of run-length encoding; here we use Packbits. Below is a sample:

foo = Image.open(filein)
foo.save(fileout, compression='packbits')

How does it actually work? Let us explain the method with an example. Here is a 16-byte image:

W W W W W B B W W B B W W W W W

Compression result:

5W 2B 2W 2B 5W

The 16-byte image above is written as a file with the sequence "W W W W W B B W W B B W W W W W". From this sequence it is easy to see that there are redundant pixels. The Packbits algorithm encodes them by storing runs of identical pixel colors. In our case, the sequence encodes as "5W 2B 2W 2B 5W", so the total number of bytes decreases from 16 bytes to 10 bytes.
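The 16-byte example above can be reproduced with a few lines of Python. This is a plain run-length sketch (`rle_encode` is a hypothetical helper name), not the real Packbits byte format, which also supports literal runs and a different on-disk layout:

```python
def rle_encode(pixels):
    """Collapse runs of identical symbols: 'WWWWWBB...' -> '5W 2B ...'."""
    if not pixels:
        return ""
    runs = []
    run_char, run_len = pixels[0], 1
    for c in pixels[1:]:
        if c == run_char:
            run_len += 1
        else:
            runs.append(f"{run_len}{run_char}")
            run_char, run_len = c, 1
    runs.append(f"{run_len}{run_char}")     # flush the final run
    return " ".join(runs)

print(rle_encode("WWWWWBBWWBBWWWWW"))  # 5W 2B 2W 2B 5W
```

The 16-symbol input collapses to 5 runs, which is where the 16-byte to 10-byte reduction in the example comes from (each run costs one count plus one value byte).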
Result

The best compression rate is obtained when the image similarity is 50%.

Summary

The key point to getting a higher compression rate with this method is to increase the number of image members and to decrease the number of image keys. However, we have to maintain the similarity between image keys and image members: the more similar the images, the higher the compression rate we will get from Packbits.
https://qiita.com/billy98/items/0845f43c079f36629bf9
Hiya,

Today pacman upgraded anki from 2.0.3-1 to 2.0.4-1, which seems to have broken anki. When I try to run Anki, I get a message saying

Please build and install the PortAudio Python bindings first.

(which I got before, when Anki worked correctly) and a popup saying

Error during startup:
Traceback (most recent call last):
  File "/usr/share/anki/aqt/main.py", line 42, in __init__
    self.setupAddons()
  File "/usr/share/anki/aqt/main.py", line 500, in setupAddons
    import aqt.addons
  File "/usr/share/anki/aqt/addons.py", line 13, in <module>
    from aqt.downloader import download
  File "/usr/share/anki/aqt/downloader.py", line 7, in <module>
    from anki.sync import httpCon
  File "/usr/share/anki/anki/sync.py", line 22, in <module>
    _proxy_info_from_environment = httplib2.ProxyInfo.from_environment
AttributeError: 'module' object has no attribute 'ProxyInfo'

EDIT: Sorry, should I have made a bug report for this instead? I'll file one now.

Regards,
Last edited by Snark1994 (2013-01-09 22:13:48)
Snark1994
Offline

Have you tried installing or ?
Offline

I confirm this is a bug. It has nothing to do with pyaudio. It seems there is some problem with sync.py. The problematic code quoted in the error messages runs without problem independently but fails when called by anki. We should probably file the bug with anki. In the meantime, a dirty fix is to comment out the following two lines

from anki.sync import httpCon
import aqt.sync # monkey-patches httplib2

in /usr/share/anki/aqt/downloader.py. After this, anki is usable but the sync function will be temporarily disabled.
Offline

Same here. Using your workaround anki starts.
Offline

I reported the bug yesterday: I don't know what the procedure is, but I assume that an upstream bug will be opened if necessary... python-pyaudio doesn't fix the problem, as already noted - and the message had appeared without an error in the previous version.
Last edited by Snark1994 (2013-01-09 10:07:50)
Snark1994
Offline

Turns out to be a packaging bug.
If you chmod /usr/share/anki/thirdparty/httplib2 to 644 instead of 640 it works again. Should be fixed in the next release. Offline As Hwesta said, fixed in Anki 2.0.4-2. Snark1994 Offline
https://bbs.archlinux.org/viewtopic.php?pid=1215499
Murali Nanchala
Ranch Hand
74 posts, 0 threads, 0 cows, since Mar 14

Recent posts by Murali Nanchala

Data preservation
I see two places where there is a potential for trouble. First, when you are reading in from the spreadsheet. Second, when you are writing to the serialized object. Not sure what your access modifiers are on the two methods, but yes, two users can potentially corrupt the reads and/or writes. What kind of access does the method (you posted) in the service allow? Since it is in turn calling the loadSpreadSheet() method, restricting access to this method should suffice for your need. But if you want a less obtrusive way, just synchronize the serialized object and go from there.
(13 years ago, Java in General)

Handle BatchUpdateException
You can get some info with this method. I would probably NOT run the rest of the statements if something fails. But if they are unrelated and not transaction-oriented, you could.
(13 years ago, Java in General)

Data preservation
Need more information about the code and how you actually do the reads and writes. On the outset, it looks like a synchronization issue. Depending on your current implementation, the solution could be very simple to somewhat complicated. But definitely do-able.
(13 years ago, Java in General)

Interfaces, why do we have them?
Read Jan van Mansum's reply one more time. Slowly. Yes, you want to introduce the 'in-directness' (as you call it) into the code to de-couple the classes.
So, in the future you can update/change the implementation of the method with [almost] no effect on the rest of the code. This has more advantages in distributed applications. You will find out if you don't already know.
(13 years ago, Java in General)

Which Java Collection construct will be suitable?
Then you can use a HashMap object AND synchronize it. But if you intend to put a million items in the collection, I suggest you re-think your approach. It is definitely going to slow down with increase in size. Especially after synchronization. What would be your servlet's client [traffic] volume?
[ November 29, 2007: Message edited by: Murali Nanchala ]
(13 years ago, Java in General)

variable prefixed with $ symbol
Off topic. But most people who come from countries which were colonised by the English would understand what perchance means. Even today. Other than that, it is a good find on the debug message.
(17 years ago, Java in General)

Totally baffled with ArrayList throwing NullPointerException!
OK. Look at this piece of code:

if (txrecord == null) {
    System.out.println("txrecord is null!");
} else {
    txrecord.insertTXRecord(var_acctName, var_unitCost);
    txrecord.getTXRecord();
}

You can see that the statement "txrecord is null!" is being printed out, meaning that txrecord is actually null. And then you call a method on the null object.
[ March 02, 2004: Message edited by: Murali Nanchala ]
(17 years ago, Java in General)

StringTokenizer
Never mind my babble about the getText() method. What you had in your code works just fine. I cleaned up the program and left your original lines in there, so you know what was happening.
import javax.swing.*;
import java.util.*;
import java.awt.event.*;
import java.awt.*;

public class TokenTest extends JFrame {
    private JLabel prompt;
    private JTextField input;
    private JTextArea output;
    //private String out; //ORIGINAL CODE
    private String out = ""; //ADDED

    public TokenTest() {
        super(" testing Class StringTokenizer ");
        Container c = getContentPane();
        c.setLayout(new FlowLayout());
        prompt = new JLabel("Enter a sentence and press enter");
        c.add(prompt);
        input = new JTextField(30);
        input.addActionListener(
            new ActionListener() {
                public void actionPerformed(ActionEvent e) {
                    String st = e.getActionCommand();
                    StringTokenizer tokenarr = new StringTokenizer(st);
                    //StringTokenizer tokens[] = new StringTokenizer[st.countTokens()]; //ORIGINAL CODE
                    String tokens[] = new String[tokenarr.countTokens()]; //ADDED
                    System.out.println("Number of tokens: " + tokens.length);
                    output.setText("The reverse string is as follows:");
                    //int l = tokens.length;
                    //for (int i = tokens.length; i >= 0; i--) { //ORIGINAL CODE
                    for (int i = tokens.length - 1; i >= 0; i--) { //ADDED
                        //out += tokens[i].nextToken(); //ORIGINAL CODE
                        tokens[i] = tokenarr.nextToken(); //ADDED
                    }
                    for (int i = 0; i <= tokens.length - 1; i++) { //ADDED
                        out = out + tokens[i] + " "; //ADDED
                    }
                    output.append(out);
                    //output.append("i ma "); //ORIGINAL CODE
                } //end of actionPerformed
            } //end of new ActionListener
        ); //end of addActionListener
        c.add(input);
        output = new JTextArea(10, 20);
        output.setEditable(false);
        c.add(new JScrollPane(output));
        setSize(400, 300);
        show();
    } //end of TokenTest constructor

    public static void main(String args[]) {
        TokenTest tok = new TokenTest();
        tok.addWindowListener(
            new WindowAdapter() {
                public void windowClosing(WindowEvent e) {
                    System.exit(0);
                }
            } //end of WindowAdapter
        ); //end of new WindowAdapter
    } //end of main method
} //end of Class TokenTest

(17 years ago, Beginning Java)

StringTokenizer
What are you feeding the StringTokenizer?
I don't see any getText() method calls to get the value of the textfield. Get the String from the textfield and then feed it to the tokenizer. Make sure it is not null or empty.
[ December 01, 2003: Message edited by: Murali Nanchala ]
(17 years ago, Beginning Java)

Black Jack program
Nice try!
(17 years ago, Beginning Java)

Problem with JVM version
Check your PATH variable.
(18 years ago, Java in General)

How use Ant Programmatically from Java?
All the documentation you need comes with every distribution of Ant in the 'docs' folder. This includes the Ant API.
(18 years ago, Other Build Tools)

Need help.. weblogic ejbc error..
Can you do a jar -tvf on the jar you created and post the output here.
(18 years ago, BEA/Weblogic)

DTD or Schema of ejb-jar.xml for EJB2.0
(18 years ago, EJB and other Jakarta/Java EE Technologies)

How to run servlet in JBuilder 9.0 Personal?
I assume you would want this in order to debug your servlet.
1. Compile your classes using the debug option.
2. Make sure Tomcat's jar is in your classpath in JBuilder.
3. Configure a run item for starting Tomcat. It may be available in the script that usually launches Tomcat. These go into the VM parameters section of the run configuration.
4. Start Tomcat in JBuilder in debug mode.
5. Hit your servlet in a browser (or however you usually do).
Keep the rest of the servlet deployment the same. I run WebLogic 7.1 in JBuilder 5 Enterprise. Let me know if this helps.
(18 years ago, Servlets)
https://www.coderanch.com/u/11038/Murali-Nanchala
How to package Buildbot plugins

If you customized an existing component (see Customization) or created a new component that you believe might be useful for others, you have two options:

- submit the change to the Buildbot main tree, however you need to adhere to certain requirements (see Buildbot Coding Style)
- prepare a Python package that contains the functionality you created

Here we cover the second option.

Package the source

To begin with, you must package your changes. If you do not know what a Python package is, these two tutorials will get you going: The former is more recent and, while it addresses everything that you need to know about Python packages, is still a work in progress. The latter is a bit dated, though for quite some time it was the most complete guide available for Python developers looking to package their software. You may also want to check the sample project, which exemplifies the best Python packaging practices.

Making the plugin package

Buildbot supports several kinds of pluggable components:

- buildslave
- changes
- schedulers
- steps
- status
- util

which are described in Plugin Infrastructure in Buildbot. Once you have your component packaged, it's quite straightforward: you just need to add a few lines to the entry_points parameter of your call of the setup function in the setup.py file:

setup(
    ...
    entry_points = {
        ...,
        'buildbot.kind': [
            'PluginName = PluginModule:PluginClass'
        ]
    },
    ...
)

(You might have seen different ways to specify the value for entry_points, however they all do the same thing. A full description of the possible ways is available in the setuptools documentation.)

After the setup.py file is updated, you can build and install it:

$ python setup.py build
$ sudo python setup.py install

(depending on your particular setup, you might not need to use sudo). After that the plugin should be available to Buildbot and you can use it in your master.cfg as:

from buildbot.kind import PluginName
...
PluginName
...
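The 'PluginName = PluginModule:PluginClass' string follows the standard setuptools entry-point spec: a public name on the left, then the module path and the attribute inside that module. As a rough illustration of how the spec splits (`parse_entry_point` is a hypothetical helper written for this sketch, not part of Buildbot or setuptools):

```python
def parse_entry_point(spec):
    """Split a setuptools-style 'Name = module:attr' entry-point spec
    into its three parts (simplified: no extras, no validation)."""
    name, _, target = spec.partition("=")
    module, _, attr = target.partition(":")
    return name.strip(), module.strip(), attr.strip()

print(parse_entry_point("PluginName = PluginModule:PluginClass"))
# ('PluginName', 'PluginModule', 'PluginClass')
```

Roughly speaking, Buildbot scans the entry points registered under its group names and imports the referenced attribute on demand, which is why the plugin becomes importable from the buildbot namespace once the package is installed.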
Publish the package

This is the last step before the plugin is available to others. Once again, there are a number of options available to you:

- just put a link to your version control system
- prepare a source tarball with the plugin (python setup.py sdist)
- or publish it on PyPI

The last option is probably the best one since it will make your plugin available to pretty much all Python developers. Once you have published the package, please send a link to the buildbot-devel mailing list, so we can include a link to your plugin in Plugin Infrastructure in Buildbot.
https://buildbot.readthedocs.io/en/v0.8.12/developer/plugins-publish.html
Semihosting is a technique to do printf() debugging through an active debug connection. So instead of using a physical connection like RS-232 or USB CDC, the connection to the host machine goes through the debugger. This post is about enabling and using semihosting with gcc and newlib/newlib-nano in the Freescale Eclipse based Kinetis Design Studio (KDS), using the GNU ARM Eclipse plugins.

Stack and Heap Size

printf() and the like use a lot of stack (typically a KByte, or even more). So in your project, make sure you have enough stack and heap allocated. With a Processor Expert project, you can increase the stack and heap size in the CPU component, 'Build options' tab. I recommend at least 0x400 for the stack and 0xC00 for the heap.

If not using Processor Expert, then you need to check and update your linker file: there needs to be a __HeapBase and a __HeapLimit symbol in it.

._user_heap_stack :
{
  . = ALIGN(4);
  PROVIDE ( end = . );
  PROVIDE ( _end = . );
  __heap_addr = .;
  __HeapBase = .;
  . = . + __heap_size;
  __HeapLimit = .;
  . = . + __stack_size;
  . = ALIGN(4);
} > m_data

The __heap_size and __stack_size are defined like this for me:

__heap_size = 0x0C00;  /* required amount of heap */
__stack_size = 0x0400; /* required amount of stack */

Debugger Settings

In the debugger settings, you need to have support for semihosting enabled. I'm showing here the settings for the Segger J-Link in the GNU ARM Eclipse plugins. Enable 'Allocate console for semihosting and SWO' in the debugger tab. In the 'Startup' tab, check the 'Enable semihosting' box.

💡 At the time of this post, semihosting is not yet supported by the P&E GDB Eclipse plugin, but I'm told that should be supported soon.

The GNU ARM Eclipse OpenOCD debug panel has similar settings.
Usage in Application

To use printf(), make sure you include

#include <stdio.h>

Below is a simple test routine which writes 2000 characters in a loop:

int i;

for(i=0; i<100; i++) {
  printf("Hello world!013456\r\n"); /* 20 characters */
}
printf("***FINISHED***\n"); /* 2000 characters finished */

Code and Data Size

Using printf() is not for free (see "Why I don't like printf()"). Without using printf(), I have this code size (see "text, data and bss: Code and Data Size Explained" for reading the numbers):

arm-none-eabi-size --format=berkeley "semi.elf"
   text    data     bss     dec     hex filename
   4196      44    4164    8404    20d4 semi.elf

With the above calls to printf() added, I get:

arm-none-eabi-size --format=berkeley "semi.elf"
   text    data     bss     dec     hex filename
  10076    2152    4396   16624    40f0 semi.elf

So this adds a lot of code and data overhead. The above code size is with newlib and Kinetis Design Studio. Newlib is not ideal for embedded applications, so there is a newlib-nano which has a smaller footprint. For this I need to add the -nanolibc option to the linker settings. With this, the footprint is still high, but reduced compared to standard newlib:

arm-none-eabi-size --format=berkeley "semi.elf"
   text    data     bss     dec     hex filename
   7056     712    4348   12116    2f54 semi.elf
Debugging When debugging, there is now a ‘Semihosting and SWV’ entry in the debug view: Clicking on that entry will show the semihosting Console view: That ‘subview’ of the Console view is accessible with a small sub-menu too: Summary I usually avoid to use printf() at any price. But sometimes libraries come with it, and sometimes I need to jump over my shadow. Then printf() can be useful sometimes, even if it comes with a high price in code size, RAM usage and stack usage. Happy Semihosting 🙂 For Cortex-M3/M4 projects generated with the GNU ARM Eclipse plug-ins you have even more options, you can select the output stream to be the semihosting STDOUT or DEBUG channel, and you can run the same code without the debugger connected, since the low level routine can detect when the debugger is not active and just return. Also the name is distinct, you use trace_printf() for tracing, which allows printf() to be re-targetted to any other stream (for example a socket) to meet any application requirements. > Semihosting is a technique to do printf() debugging through an active debug connection. To be correct, this is only one of the features of semihosting. With advanced GDB servers like J-Link, you can use all the features specified in the ARM semihosting documentation, including reading/writing files from the host machine, getting the current time, passing a command line, getting the exit code back, etc. These features are very useful for writing unit tests, that need to return the test results (usually as an XML file to be parsed by a continuous integration server, like Hudson/Jenkins). When creating projects with the GNU ARM Eclipse plug-ins, it is possible to choose from three options: – full semihosting (all POSIX read/write routed to host), – re-targetting (read/write routed to application device) and – no POSIX calls. 
In addition to this, the trace_printf() can be routed to: – ARM SWO, – semihosting STDOUT (a buffered device, lines displayed at \n) – semihosting DEBUG (an unbuffered device, characters displayed one at a time). Thanks Liviu for the clarification. I was actually wondering how to use ARM SWO, I will check on this. I saw this was available in GNU ARM toolchain eclipse plugin with Jlink firmware from a while ago, wrote a post about it: . Nice to know we can do this in KDS. I got the KDS better version but havent tested much, been into Physics lately. Pingback: Tutorial: Nordic Semiconductor nRF24L01+ with the Freescale FRDM-K64F Board | MCU on Eclipse Pingback: printf() and scanf() with GNU ARM Libraries | MCU on Eclipse Pingback: printf() for the FRDM-K64F Board and Kinetis Design Studio | MCU on Eclipse Pingback: Semihosting with GNU ARM Embedded (LaunchPad) and GNU ARM Eclipse Debug Plugins | MCU on Eclipse Hello! I’m trying to use semihosting to print stuff with P&E GDB (OpenSDA) in a FRDM-KL25Z board. (I could’nt manage to make prints using virtual serial port or other solutions). However I can only print a string with 103 characters. More than this (103,104, etc) it gives me an error (something weird, related to a SIGTRAP signal) and I can continue execution, and with even more (128, which is what I want) it simply doesn’t print anything. Do you think this is related to still poor support of semihosting by P&E debugger or with semihosting limitations? Your website is great for any student working with Freescale microcontrollers, congratulations! Hi Daniel, thanks :-). I have not printed very long strings. But indeed, there is naturally a buffer limit inside semihosting. Are you sending that 104 character string with now \r\n in between (no line feeds)? I have not tried your case yet, but it could be as well the library provided in KDS. Have you seen the same thing with the Segger connection too? 
Hello Erich, I tried this solution with the FRDM-KL25Z, GDB PE Micro Interface Debugging (OpenSDA USB). I followed all the steps carefully (heap size, enable semihosting, include) starting from scratch with a new KDS project with PE. But it prints only the first character of each string to the console. With your example, instead of printing "Hello world!" 100 times and then "*** FINISHED ***!" in the console view, I see just 100 lines with only "H" and then the "*" of the last string. What can be the reason? Thank you in advance for your help.

Not sure what is causing this for you. Do you have a \n at the end of each string?

Thank you Erich. Yes, the \r\n is present at the end of the string: I used the exact code you gave as an example:

int i;
for(i=0; i<100; i++) {
  printf("Hello world!013456\r\n"); /* 20 characters */
}
printf("***FINISHED***\n"); /* 2000 characters finished */

But in the console I get just:

H
H
… (100 times)
H
*

Strange behaviour… If I find the reason I will post it here. Thanks again!
Cheers, Vittorio

Just for reference, I found an explanation here:

Pingback: Installing and using Kinetis Design Studio version 2 in Linux >> Karibe Muriuki

I tried out semihosting with the latest PE OpenSDA firmware for the KL05Z and it only prints the first character. I saw this is already discussed in another post on this blog, but for the K64F board and a physical serial port. I tried adding the isatty() function and adjusting stack and heap, no success. Also, the serial port interface does not work for the KL05Z board with the latest PE firmware in Linux. It is enumerated well as /dev/ttyACM0, but it doesn't work. I am writing a summary of what works here:

I cannot comment on Linux, but I can confirm that it works as expected on Windows. Have you tested with the KL05Z board? I will test the KL25Z board and see.
I have pushed my UART example for the FRDM-KL05Z here: My board information: Board Name is: FRDM-KL05Z MicroBoot Kernel Version is: 1.03 Bootloader Version is: 1.11 Installed Application: PEMicro FRDM-KL05Z Mass Storage/Debug App Application Version is: 1.14 Pingback: Semihosting with GNU ARM Embedded (Launchpad) and Kinetis Design Studio | MCU on Eclipse Pingback: Why I am not a fan of ARM’s Semihosting feature | bitknitting Dear Erich I changed the stack size and heap size as given here for my boot loader project. But it is throwing these errors while building. c:/freescale/kds_2.0.0/toolchain/bin/../lib/gcc/arm-none-eabi/4.8.0/../../../../arm-none-eabi/bin/ld.exe: Bootloader.elf section `._user_heap_stack’ will not fit in region `m_data’ c:/freescale/kds_2.0.0/toolchain/bin/../lib/gcc/arm-none-eabi/4.8.0/../../../../arm-none-eabi/bin/ld.exe: region `m_data’ overflowed by 352 bytes collect2.exe: error: ld returned 1 exit status make: *** [Bootloader.elf] Error 1 How to rectify it ? Thanks Ganesh R Hi Ganesh, You are allocating more RAM than you have available on your system. You need to reduce the amount of RAM allocated in the linker file for stack and heap by 352 bytes. Erich Hi Erich, This is exactly where I got my doubt. You have written in this article, ” I recommend at least 0x400 for stack and 0xc00 for the heap “.. In my project, when I opened the Build options tab, the stack size and heap size were 0x100 by default. After reading through this article I increased my stack and heap size to 0x400 and 0xc00 respectively. When I built the code, I got the errors which I already quoted in my first comment. Now from your reply to comment, If I have to reduce the stack and heap size by 352 bytes (0x160 in hex), do you mean, New Stack Size => 0x0400 – 0x160 = 2A0 New Heap Size => 0x0c00 – 0x160 = AA0 I changed the stack and heap sizes to these values and the code built fine. But I don’t know whether this is correct ? Have I rectified the error correctly ? 
Hi Ganesh, you only need to reduce the RAM size for 0x160 once. You did it twice (both for stack and heap). Oh .. So is it enough if I reduce the heap size by 0x160. So my new size should be New Stack Size : 0x400 New Heap Size: 0xc00 – 0x160 = 0xAA0 have I understood it correctly now ? yes, that looks good. LikeLiked by 1 person Hello, I am using CodeWarrior for MCU Version: 10.6.4 and I cannot find any checkbox to enable semihosting. In my case the setting windows are not the same as in the discription above (e.g. I cannot find a startup tab). I have an example code in which semihosting is used and it works fine. I also found the “-D__SEMIHOSTING” flag in the build options of the C compiler (under “all options:” this field is not editable). So I tried to enable semihosting in another example project by adding the phrase “-D__SEMIHOSTING” in the “others flag” field. However, then there is this error when I want to build the project: “undefined reference to ‘sys_exit'” in “__arm_end.c” So, how can i enable semihosting? Where are the settings hidden? I would be most grateful for any help. Hi Andreas, I believe I have never used Semihosting with the CodeWarrior tools. And I do not recommend it as it is very, very slow because of the way it is done in the CodeWarrior libraries (I remember something with only few hundred bytes per second). And the panels are different because Kinetis Design Studio is using GDB, while CodeWarrior is using a proprietary debugger. Erich Hi Erich, I’m attempting to use semihosting under P&E Multilink. I have followed the steps P&E outline for enabling their part of it, and have followed your instructions(which they link to). When I try to debug my target program I get the branch showing “Semihosting Console”, but when I click on it, although the Console switches to “P&E Semihosting Console”, the debug Resume/Suspend etc. are greyed out. If I then reclick on Thread #1, the Resume etc. 
come back to life, but the Semihosting Console disappears. This is a totally bare metal project, if that makes any difference… Hi John, double clicking on the ‘semihosting’ entry in the call stack opens the console with that view. And yes, if you don’t have the current (code) tread selected, then you cannot step/etc. This is how it is implemented with the GNU ARM Eclipse plugins and to me this is in alingment with how Eclipse thread debugging works. The Semihosting ‘thread’ is just shown there to have an easy way to switch to the console view with the semihosting output. As for the semihosting console disapearing: you can ‘pin’ the console view to whatever you want. Thanks for the very prompt reply. I have now “pinned” the Semihosting Console, but nothing appears in it, so I guess I’m doing something else wrong, or missing some other part of the puzzle. OK, I have it working now. I changed the -specs=nosys.specs to -specs=rdimon.specs It maybe that I’m using a later set of tools? Good tip also, on one of your pages, about finding the SemiHosting Console by using the Display selected console” icon. Many thanks, I’m not sure that I’d be able to make any progress with Kinetis if is wasn’t for your blogs and advice. Whatever NXP are paying you it’s not enough. John Hi John, does the application run and does not crash? Otherwise, it depends what kind of library or SDK your are using. For example there are special steps for the Kinetis SDK v2.0 necessary, see I hope this helps, Erich Hi, No crashing. Only problem I have now is that I can’t get the Semihosting Console to persist – have to re-select it every debug session. I fear these questions and answers are getting out of sync now! Although despite “pinning”, I can’t get the Semihosting Console to persist between debugging sessions. I always have to select it anew. Is this normal? Hi John, yes, I noticed this too. The ‘pinning’ state is not stored between debugging sessions. I’m not using any SDK, by the way. 
When I say bare metal I mean bare metal. Stupid as it may sound, I’m writing everything from scratch, using some of the SDK files for a bit of guidance now and again. I’m wondering if SDK_DEBUGCONSOLE=0 as you mention on another page could be the trick… Hi John, ah, ok. Some (mostly myself too) are using ‘bare metal’ for ‘without RTOS’. But to the essence, real bare metal is not using the SDK, I agree with you. The SD_DEBUGCONSOLE=0 is used with the SDK to disable the debug console code, as it overwrites some of the low level rxchar/txchar methods used by printf. If these overwrites are in place, semihosting does not work. Thanks again for all the replies. Semihosting working fine now, apart from having to open the Semihosting Console view every time. I can live with that – need to get on with writing code. While I agree, in principle, with the maxim “Give me six hours to chop down a tree and I’ll spend the first four sharpening the axe”, there comes a time when you have to get some results! Hi John, good to hear that it is working now. And I agree to the tree thing 100% :-). Hello, I’m doing a project with some simple image acquisition and processing in an STM32 microcontroller. I’ve been using ARM semihosting technology for debugging with standard C library fopen/fwrite functions for logging the data of the pictures on the host computer for purposes of debugging but I find it quite slow since I have sometimes to log data for RGB565 320×240 pictures (153 600 bytes), and it takes quite some time to write the data to the file on PC. As so I’m asking if you have another recomendation or alternative for debugging this kind of data (maybe using ITM SWO output?). Thank you for your attention. Best regards, Yes, semihosting is very slow. Because it stops the target with a breakpoint, the debugger needs to catch this, then the debugger reads one or more byte of data and restarts the device, and so on. 
If you have a SEGGER J-Link, then a fast ways is to use the Segger RTT to tranfer the file (see). Another way is to attach an SD card over SPI and store the data on it (see). Or simply send the data over USB CDC (or UART), but this will need a special program on the host to receive the data. I love the whole blog and found it super helpful for learning how to use JTAG debugging. I typically use M0 processors (like the SAMD21G or SAMD11D that don’t have SWO. Does semi-hosting work on these types of devices? Hi Jeremy, semihosting works on any device, as long as the debugger/debug connection is able catch the semihosting traps. Thanks for the blog post and lots of great information on JTAG debugging. I typically use SAMD21 and SAMD11 processors which are cortex-M0. Does semi-hosting work without SWO? Semihosting depends on the debug connection and capability of the debug probe, and does not depend on SWO. Using a terminal connection with SWO is using a different technology, but does not need a direct involvement of the debug probe
https://mcuoneclipse.com/2014/06/06/semihosting-with-kinetis-design-studio/
CC-MAIN-2020-50
refinedweb
3,202
71.44
You may want to search: Hot Selling Japanese Glass Teapot Turkish Set For The Gas Stove US $5.2-5.7 / Piece 1000 Pieces (Min. Order) Square silicone mat set for baking heat proof silicone dish pot place mat US $0.5-2 / Piece 1 Piece (Min. Order) JUKI smt pick and place machine Feeder CF FF CTFR AF TAPE GUIDE SET US $1-10 / Set 1 Set (Min. Order) 10kw to 50kw generator set powered by yamar engine US $1000-30000 / Set 1 Set (Min. Order) Supply all kinds of 30w speaker,fm radio speaker with usb port in india one set US $18-25 / Piece 1 Piece (Min. Order) High Quality 3pc Stianless steel Bathroom set 2400 Sets (Min. Order) FUJI CP6 8X2MM feeder lever link full set AWCA2200 AWCA3906 MCA0391 US $5-10 / Piece 1 Piece (Min. Order) 2016 modern wicker dining set with tempered glass top outdoor furniture US $230-555 / Set 6 Sets (Min. Order) SWITEK robot alibaba come from china for a set of packaging machine US $20-30 / Set 1 Set (Min. Order) Supply all kinds of wireless garden speakers,professional active speaker set US $32-36 / Piece 1 Piece (Min. Order) Japanese Cuisine Color-Dough Set, Modeing Clay Set Toy US $1.19-1.34 / Set 5 Cartons (Min. Order) different shapes natural color bamboo utensils set hot sell in Japan US $4.99-5.99 / Set 100 Cartons (Min. Order) CBJ Joint color Mat 8 set /( #120302) 16 Sets (Min. Order) Bathroom wall mounted brass spa room bath-Shower sets US $20-80 / Set 50 Sets (Min. Order) Cost-effective and Delicious soft rice cracker sets for your babies made in Japan US $16-20 / Set 6 Sets (Min. Order) Organic healthy puer tea gift wedding best sell product in japan US $12-28 / Box 1 Box (Min. Order) cas 541-02-6 pure silicon oil d5 cyclohexasiloxane korea cosmetic raw materials supplier US $3.4-4.5 / Kilogram 500 Kilograms (Min. Order) Folding Wood Bedside Commode Chair Seat With Antimicrobial Finishing US $1275.0-1500.0 / Set 1 Set (Min. Order) wire extruder machinery US $1000-180000 / Set 1 Set (Min. 
Order) Second hand small cheap komatsu d85 bulldozer US $24000.0-26000.0 / Unit 1 Unit (Min. Order) super water absorbency and soft material self-sufficient wholesale japanese wash cloth US $0.1-4.09 / Piece 2000 Pieces (Min. Order) Easy to use and Fashionable ambulance with various places US $5-10 / Set 1 Set (Min. Order) yamaha yv100xg track slider KV7-M2676-S2X US $130.0-150.0 / Piece 1 Piece (Min. Order) 2017 Beautiful Girls Panty Sets Japan Sexy Images Fancy Latest Bra US $2.2-2.3 / Pieces 10 Pieces (Min. Order) 2018 Trade Assurance New Style antique door style modern movable coffee table US $20-300 / Set 5 Sets (Min. Order) biryani cooking pot large stock pot for shopping US $4.2-6.2 / Pieces 2000 Pieces (Min. Order) japanese cast iron cookware 2000 Sets (Min. Order) Cheap japanese patio furniture outdoor plastic wood table US $65-70 / Set 10 Sets (Min. Order) import solid wood ash small table japanese outdoor coffee table garden small tea table US $1-10 / Unit 5 Units (Min. Order) Sushi Boat for/in place of sushi boat tray leaf dish plate lunch box US $0.01-2 / Set 20 Cartons (Min. Order) 679 wooden New design luxury dining table for wholesales US $319-409 / Piece 5 Pieces (Min. Order) 2016 Full Auto electric stackable washer dryer discount with Warranty US $3750-4500 / Set 1 Set (Min. Order) 6ft outdoor folding table outdoor fire pit table US $15-25 / Piece 100 Pieces (Min. Order) The Japanese Biscuits Confectionery US $10000-50000 / Set 1 Set (Min. Order) Popular Hot Sale Sushi Placing Japanese Wooden Tray US $0.059-0.72 / Piece 50000 Pieces (Min. Order) entertainment,tube8 japanese for public place of entertainment,a4 graph paper to print US $0.12-0.75 / Pack 10000 Packs (Min. Order) 2015 best selling wholesale high quality japanese women sexy lingerie free pictures US $3-9 / Piece 50 Pieces (Min. Order) High availability japanese shoe deodorizer bag US $0.5-0.8 / Pair 1000 Pairs (Min. 
Order) japanese tea set health premium best white tea brands US $4.5-6 / Piece 20 Pieces (Min. Order) Precision Heavy Duty Crosscutting Machine US $1000-200000 / Set 1 Set (Min. Order) Excellent quality antique robot vacuum cleaner japanese factories US $1-3000 / Set 3 Sets (Min. Order) melamine dinner plates wholesale japanese products US $0.1-1 / Piece 3000 Pieces (Min. Order) with drawer for korea and japan market magic mop bucket US $9.6-10.4 / Piece 900 Pieces (Min. Order) PP 3D promotional dinner place mat with coaster US $0.02-0.99 / Set 1000 Sets (Min. Order) set-up LED Lamp and LED Bulb assembly line for making of of finished goods from CKD components US $75000-99000 / Piece 1 Piece (Min. Order) portable Gasoline Generator set branded Japan 3kw US $78-150 / Set 12 Sets (Min. Order) Supply all kinds of active speaker set,square mini portable speaker US $18-25 / Piece 1 Piece (Min. Order) modern design outdoor dining set with umbrella garden furniture sets US $230-555 / Set 6 Sets (Min. Order) Supply all kinds of speaker nfc,speaker with led light,set of 2 wireless portable speakers for computer US $40-55 / Piece 1 Piece (Min. Order) Buying Request Hub Haven't found the right supplier yet ? Let matching verified suppliers find you. Get Quotation NowFREE Do you want to show japanese place settings or other products of your own company? Display your Products FREE now!
http://www.alibaba.com/showroom/japanese-place-settings.html
CC-MAIN-2017-39
refinedweb
946
67.96
Selection List for Qt and Windows Phone

This article demonstrates how to create selection lists in Qt and Windows Phone 7. Windows Phone 8 Windows Phone 7.5 Symbian

Introduction

Selection lists are lists of items from which the user can select one or more items. The lists can optionally be scrollable, and have a highlight to show single or multiple selections. Both Qt and Windows Phone have support for selection lists:

- In Qt, we use the SelectionListItem QML element to launch a Selection Dialog which contains the list. When a season is selected the dialog closes and populates the SelectionListItem subtitle with the selected season.
- Windows Phone 7 code uses the ListPicker control from the Silverlight for Windows Phone Toolkit. When an item is selected it is displayed as the "Current" item in a separate text box.

This example creates selection lists in both Windows Phone and Qt. The selection lists contain the four seasons of the year as list items. When the user selects any of the items it gets displayed on the screen.

Implementation

The code below starts from an empty project for both Qt and WP7.

Qt Project (main.qml)

We use the SelectionListItem component to get the selected item in the list. When the user clicks on the SelectionListItem it opens a SelectionDialog with the list items in it.

SelectionListItem {
    id: item
    y: 51
    title: "Select Season"
    subTitle: selectionDialog.selectedIndex >= 0
              ? selectionDialog.model.get(selectionDialog.selectedIndex).name
              : "Please select"
    onClicked: selectionDialog.open()

    SelectionDialog {
        id: selectionDialog
        titleText: "Select one of the values"
        selectedIndex: -1
        model: ListModel {
            ListElement { name: "Spring" }
            ListElement { name: "Summer" }
            ListElement { name: "Fall" }
            ListElement { name: "Winter" }
        }
    }
}

When the user selects any item from the list it is displayed in the subtitle field of the SelectionListItem.

WP7 Project (MainPage.xaml)

- Let’s add the reference Microsoft.Phone.Controls.Toolkit to the project.
- Add namespaces in XAML:

xmlns:toolkit="clr-namespace:Microsoft.Phone.Controls;assembly=Microsoft.Phone.Controls.Toolkit"
xmlns:controls="clr-namespace:Microsoft.Phone.Controls;assembly=Microsoft.Phone.Controls"
xmlns:local="clr-namespace:SelectionListWp7"

- Add resources:

<phone:PhoneApplicationPage.Resources>
    <local:Model x:
    <DataTemplate x:
        <StackPanel Orientation="Horizontal">
            <Ellipse Fill="{Binding}" Width="20" Height="20" Margin="0 0 6 0"/>
            <TextBlock Text="{Binding}"/>
        </StackPanel>
    </DataTemplate>
</phone:PhoneApplicationPage.Resources>

Then use the ListPicker to create the list items:

<StackPanel>
    <toolkit:ListPicker x:
    <TextBlock x:
</StackPanel>

By default the item at index 1 in the list is selected.

WP7 Project (MainPage.xaml.cs)

When the user clicks on any of the list items the SeasonsSelectionChanged() event is called and the selected item is displayed on the screen.

private void SeasonsSelectionChanged(object sender, SelectionChangedEventArgs e)
{
    SeasonsSelection.Text = "Current selection: " +
        ((0 < e.AddedItems.Count) ? e.AddedItems[0] : "[nothing]");
}

Summary

Using SelectionListItem we can create a list view of many items, but in the case of WP7 the ListPicker control can have at most five items in the drop-down list; if there are more than five items it opens a full-screen popup for item selection. Both effects are provided by the WP7 Toolkit's ListPicker control, which combines both experiences in the same API by automatically selecting the right UX based on the number of items in its list.

Source Code

- The full source code of the Qt example is available here: File:SelectionListQt.zip
- The full source code of the WP7 example is available here: File:SelectionListWP7.zip

Croozeus - Hi Somnath, Good work, we need more such articles. It would also be good to add a list of the visual differences between the components of the platforms.
For example, here in the WP7 selection list you have the possibility of the app title being visible, showing the current selection text, etc. It would be good to have a paragraph describing that these are available in the WP7 selection list and not in QML, and vice versa. croozeus 04:37, 15 November 2011 (EET)

Croozeus - Just one more thing - instead of using the hard-coded value in QML (y:51) it would be advisable to demonstrate using anchors. croozeus 04:50, 15 November 2011 (EET)

Hamishwillee - Comparing like with like Hi Somnath, Thanks, this is a useful article. Make sense? Regards, Hamish hamishwillee 01:05, 22 November 2011 (EET)

Somnathbanik - Compatibility This article is compatible with both Windows Phone 7 and Windows Phone 8. We will update the title accordingly. somnathbanik 14:12, 5 June 2013 (EEST)

Hamishwillee - And content Hi Somnath, Also great if you can update the text to say Windows Phone rather than WP7. Fixes look good. As a rule, if you update the SDK or devices just insert the most recent first - it is useful for people to know this works on the WP7.5 SDK. For devices they might only have a Lumia 800, for example, so knowing it worked on that would remain useful. Thanks again. Regards, H hamishwillee 09:21, 10 June 2013 (EEST)

Somnathbanik - And content Hi Hamish, I will keep this in mind, and will check/update all my articles. Thanks, Somnath somnathbanik 12:10, 10 June 2013 (EEST)

Hamishwillee - Thanks. I did update most of them for the SDKs ... after this point. I think for these Qt comparison articles, next WP version we should consider not updating them - porting from Qt is no longer so relevant. We might however consider whether creating a new article covering the basic UI features might be useful. For example, in this case we do a basic selection list. However there are heaps more things you can do with a selection list than this - for example we could cover how to change its appearance, other types of selection lists like "context sensitive menus", selection lists provided by other UI libraries - i.e. the Windows Phone Toolkit, Coding4Fun etc. I'm sure there must be heaps of options. Regards, H hamishwillee 04:20, 11 June 2013 (EEST)
http://developer.nokia.com/community/wiki/Selection_List_for_Qt_and_Windows_Phone
CC-MAIN-2014-52
refinedweb
955
55.34
Odoo Help

Odoo is the world's easiest all-in-one management software. It includes hundreds of business apps: CRM | e-Commerce | Accounting | Inventory | PoS | Project management | MRP | etc.

"add an item" vs. clicking a button

Hi all, in a view (like the sale order, but it could be any view) there are two ways to add lines (products in a sale order):

- we can create a form view for the line, and when the user clicks on the "add an item" link, show the form (like a sale order), or
- we can add a line in a row, directly in the tree (like a customer or supplier invoice)

but I need a way to add a line when the user clicks on a button (or in the on_change attribute of a field). In OpenERP 6.0 I can add an item with the "create" function on the server side, but in OpenERP 7.0 this does not work, because the line is added in the client view and the "create" function is only called when the user saves the form. I hope you can help me. Thanks in advance.

Hi, As the above answer says, if you want to add a value by clicking a button: in any form view, when the button is clicked the current record is saved at the same time; there is no way to add a line before saving. As mentioned in the above answer, if you want to add lines before saving then just use the on_change property. Thanks, Sandeep

Hi naitsir. You can call the create function when clicking on a button, so you don't need to click on the Save button. For example, in the sale order there is a button Confirm Sale. On clicking this button you can add a sale order line record by calling the create method of sale.order.line.
def action_button_confirm(self, cr, uid, ids, context=None):
    sale_order_line_obj = self.pool.get('sale.order.line')
    # sale order line created for the current sale order
    order_line_id = sale_order_line_obj.create(
        cr, uid, {'name': 'Hello', 'order_id': ids[0]}, context=context)

If you want to add products as per the user's choice then you can create a new wizard and add to it the fields which need to be added to the sale order line; in the wizard button you can write your code, so that whatever data was filled in the wizard will be added to the sale order line. Hope this works. Thanks

I checked a lot of the code and I found an example of this in the account.voucher form. For a sale order, in my on_change function I need to set:

res['value']['order_line'] = [{order line data1}, {order line data2}, ...]

where each "order line data" dict holds all the data for a sale.order.line. The client browser then adds all the lines to my sale order. This works for all forms with one2many and many2many fields :) Regards, and thank you for your help

Hi, could you explain better where you found these features? I have similar problems and I can't find what you mention in "I checked a lot of the code and I found an example of this in the account.voucher form... for a sale order".
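Sketching the on_change approach from the accepted answer in OpenERP 7 old-style API terms (the method name and the line values below are illustrative, not taken from a real module):

```python
# Hypothetical onchange method (OpenERP 7 old-style API). Returning the
# one2many rows under res['value']['order_line'] makes the web client add
# those lines to the form *before* the record is saved.
def onchange_add_lines(self, cr, uid, ids, context=None):
    res = {'value': {}}
    res['value']['order_line'] = [
        # each dict holds the fields of one sale.order.line
        {'name': 'Line 1', 'product_uom_qty': 1.0},
        {'name': 'Line 2', 'product_uom_qty': 2.0},
    ]
    return res
```

On the XML side the triggering field would declare something like on_change="onchange_add_lines()" (again hypothetical); the important part is that the lines travel back to the client in the returned 'value' dict rather than being created on the server.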
https://www.odoo.com/forum/help-1/question/add-an-item-to-clicking-a-buttom-20807
CC-MAIN-2017-09
refinedweb
520
72.09
Before powerful GPUs and multi-core processors made it possible for machines to learn from data, AI was about coding a deterministic algorithm. The old and well-explored principles of graph trees, constraint propagation and search still find many applications today.

Constraint Propagation and Search

Artificial intelligence is all about designing computer systems able to perform tasks that normally require human intelligence. We already know computers can do some arithmetic tasks like multiplying large numbers much faster than any human will ever do. But what about non-arithmetic tasks? Well, by now everyone knows that Tesla, Google, Apple and many other tech companies are working on autonomous driving. And yet, they haven’t completely cracked it yet. On the other side, it is now 20 years since IBM’s Deep Blue won both a chess game and a chess match against Garry Kasparov - the reigning world champion at the time. To sum it up - driving a car is obviously an easy task for humans, two billion people are driving to work every day, but it is very hard for a computer system to manage. At the same time, computer systems can beat the world champion at chess - a task that hardly any human can achieve. Makes you wonder, doesn’t it?

Coding a Sudoku Environment

Another non-arithmetic and seemingly human task at which computers excel is solving a sudoku. The use of constraint propagation and search is illustrated in this great blog post by Peter Norvig. In this post I will go one step further by introducing a small, but powerful optimization for Norvig’s solution. My whole sudoku solver implementation can be found in this repo: AIND-Sudoku. In a sudoku, the rows, columns and 3x3 squares all contain digits from 1 to 9 exactly once. Norvig introduces a very flexible design, which is easily extended to a diagonal sudoku.
Indeed, Norvig’s solution can be extended to solve a diagonal sudoku by just adding the diagonals to the units used in the constraint propagation steps:

MODE_NO_DIAGONAL = 1
MODE_WITH_DIAGONAL = 2

DIGITS = '123456789'
ROWS = 'ABCDEFGHI'
COLS = '123456789'

def cross(A, B):
    "Cross product of elements in A and elements in B."
    return [a + b for a in A for b in B]

BOXES = cross(ROWS, COLS)
ROW_UNITS = [cross(r, COLS) for r in ROWS]
COLUMN_UNITS = [cross(ROWS, c) for c in COLS]
SQUARE_UNITS = [cross(rs, cs) for rs in ('ABC', 'DEF', 'GHI') for cs in ('123', '456', '789')]
DIAGONAL_UNITS = [[row+col for (row,col) in zip(ROWS, COLS[::step])] for step in [-1,1]]

def get_units_peers(mode):
    if mode == MODE_NO_DIAGONAL:
        unitlist = ROW_UNITS + COLUMN_UNITS + SQUARE_UNITS
    elif mode == MODE_WITH_DIAGONAL:
        unitlist = ROW_UNITS + COLUMN_UNITS + SQUARE_UNITS + DIAGONAL_UNITS
    else:
        raise Exception('Unknown mode.')
    units = dict((s, [u for u in unitlist if s in u]) for s in BOXES)
    peers = dict((s, set(sum(units[s], [])) - set([s])) for s in BOXES)
    return unitlist, units, peers

Naked twins strategy

In solution_performance_test.py I added a small performance test to measure the time needed to solve 20 hard sudoku puzzles. I furthermore modified the code to print the number of search attempts the solver needs for solving each sudoku puzzle. A search attempt is made whenever the potential of constraint propagation is exhausted and the algorithm has to try different digits for the same box. When executed the test output looks like this:

As previously mentioned, in order to solve a sudoku puzzle one needs to use only constraint propagation and search. To increase the performance of Norvig’s solution I simply added an additional constraint, called naked twins:

def naked_twins(values):
    """Eliminate values using the naked twins strategy.

    Args:
        values(dict): a dictionary of the form {'box_name': '123456789', ...}

    Returns:
        the values dictionary with the naked twins eliminated from peers.
    """
    # Find all instances of naked twins
    # Eliminate the naked twins as possibilities for their peers
    for unit in UNITLIST:
        unsolved = [box for box in unit if len(values[box]) > 1]
        # indices of all pairs (0, 1), (0, 2), (0, 3), (0, 4), ...
        pairs = list(itertools.combinations(unsolved, 2))
        for i,j in pairs:
            chars1, chars2 = values[i], values[j]  # the characters in each pair
            # if characters match, i.e. chars1 = '34' and chars2 = '34' they are twins
            if len(chars1) == 2 and chars1 == chars2:
                # all boxes that are not the twins
                not_twins = [box for box in unsolved if values[box] != chars1]
                for box in not_twins:
                    for char in chars1:
                        # remove the characters of the twins from each box that is not one of the twins
                        val = values[box].replace(char, '')
                        values = assign_value(values, box, val)
    return values

Putting it all together

Adding just this single constraint led to a significant performance boost. The time needed to solve twenty sudoku puzzles was cut in half. You can clearly see the algorithm is making far fewer attempts than before:

One can even go further and implement additional constraints. In the sudoku world those constraints are called sudoku strategies. So how good is a computer at solving a sudoku? In this Telegraph article I found a sudoku puzzle which was designed by Japanese scientists to be especially hard to solve. It is supposed to take hours if not days to solve. Below is a slow motion video of the algorithm solving the sudoku. Note, the video would be much longer if not for the naked twins strategy that is significantly reducing the number of unsuccessful attempts. As you can see in the video, the algorithm is making quite a few unsuccessful attempts and consequent steps back. One thing is sure - an AI engineer will be faster at writing the code that solves a sudoku than actually solving a puzzle that hard.
http://machinememos.com/python/artificial%20intelligence/depth%20search/sudoku/diagonal%20sudoku/naked%20twins/2017/02/27/cracking-the-worlds-hardest-sudoku.html
CC-MAIN-2019-51
refinedweb
942
58.32
I am trying to add template tags in Django, but I get this error after creating the template tags. I don't know if there is something I need to change in my settings.py to make this work.

template.html

{% extends 'base/base.html' %}
{% load static %}
{% load course_custom_tags %}

{% block content %}

course_custom_tags.py

from django import template

from course.models import UserCourse, Course

register = template.Library()

@register.simple_tag
def is_enrolled(request, course):
    user = None
    if not request.user.is_authenticated:
        return False
    # if you are enrolled in this course you can watch every video
    user = request.user
    try:
        user_course = UserCourse.objects.get(user=user, course=course)
        return True
    except:
        return False
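The usual cause of a "is not a registered tag library" error is that the tag file does not live inside a templatetags package of an installed app, rather than anything in settings.py. A minimal sketch of the expected layout follows; the app name course is my assumption based on the import above.

```python
# Recreate the directory layout Django scans for custom tag libraries:
#   course/templatetags/__init__.py          <- makes it a Python package
#   course/templatetags/course_custom_tags.py
# The dev server must be restarted after adding these files.
import pathlib

pkg = pathlib.Path("course/templatetags")
pkg.mkdir(parents=True, exist_ok=True)
(pkg / "__init__.py").touch()
(pkg / "course_custom_tags.py").touch()

print(sorted(p.name for p in pkg.iterdir()))
# → ['__init__.py', 'course_custom_tags.py']
```

With that layout in place (and course listed in INSTALLED_APPS), {% load course_custom_tags %} can find the library.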
https://forum.djangoproject.com/t/course-custom-tags-is-not-a-registered-tag-library-must-be-one-of/13188
CC-MAIN-2022-21
refinedweb
106
53.88
The information displayed is customizable. Its function is similar to that of perl's -Dx debugging flag or the B::Terse module, but it is more sophisticated and flexible.

Here are two outputs (or 'renderings'), using the -exec and -basic (i.e. default) formatting conventions on the same code snippet.

% perl -MO=Concise,-exec -e '$a = $b + 42'
1  <0> enter
2  <;> nextstate(main 1 -e:1) v
3  <#> gvsv[*b] s
4  <$> const[IV 42] s
*  5  <2> add[t3] sK/2
6  <#> gvsv[*a] s
7  <2> sassign vKS/2
8  <@> leave[1 ref] vKP/REFC

In this -exec rendering, each opcode is executed in the order shown. The add opcode, marked with '*', is discussed in more detail. The 1st column is the op's sequence number, starting at 1, and is displayed in base 36 by default. Here they're purely linear; the sequences are very helpful when looking at code with loops and branches. The symbol between angle brackets indicates the op's type; for example, <2> is a BINOP, <@> a LISTOP, and <#> is a PADOP, which is used in threaded perls (see "OP class abbreviations"). The opname, as in 'add[t1]', may be followed by op-specific information in parentheses or brackets (ex '[t1]'). The op-flags (ex 'sK/2') are described in ("OP flags abbreviations").

The default rendering is top-down, so they're not in execution order. This form reflects the way the stack is used to parse and evaluate expressions; the add operates on the two terms below it in the tree. Nullops appear as ex-opname, where opname is an op that has been optimized away by perl. They're displayed with a sequence-number of '-', because they are not executed (they don't appear in the previous example); they're printed here because they reflect the parse. The arrow points to the sequence number of the next op; they're not displayed in -exec mode, for obvious reasons. Note that because this rendering was done on a non-threaded perl, the PADOPs in the previous examples are now SVOPs, and some (but not all) of the square brackets have been replaced by round ones.
This is a subtle feature to provide some visual distinction between renderings on threaded and un-threaded perls.

Arguments that don't start with a hyphen are taken to be the names of subroutines or formats.

sequence numbers with the least significant digit first. This is obviously mutually exclusive with bigendian.

With this option, the rendering of each statement (starting with the nextstate OP) will be preceded by the 1st line of source code that generates it. For example:

1  <0> enter
#  1: my $i;
2  <;> nextstate(main 1 junk.pl:1) v:{
3  <0> padsv[$i:1,10] vM/LVINTRO
#  3: for $i (0..9) {
4  <;> nextstate(main 3 junk.pl:3) v:{
5  <0> pushmark s
6  <$> const[IV 0] s
7  <$> const[IV 9] s
8  <{> enteriter(next->j last->m redo->9)[$i:1,10] lKS
k  <0> iter s
l  <|> and(other->9) vK/1
#  4: print "line ";
9  <;> nextstate(main 2 junk.pl:4) v
a  <0> pushmark s
b  <$> const[PV "line "] s
c  <@> print vK
#  5: print "$i\n";
....

Renderings usually include a banner line identifying the function name or stringified subref. This suppresses the printing of the banner.

TBC: Remove the stringified coderef; while it provides a 'cookie' for each function rendered, the cookies used should be 1,2,3.. not a random hex-address. It also complicates string comparison of two different trees.

If you invoke Concise more than once in a program, you should know that the options are 'sticky'. This means that the options you provide in the first call will be remembered for the 2nd call, unless you re-specify or change them.

The concise style uses symbols to convey maximum info with minimal clutter (like hex addresses). With just a little practice, you can start to see the flowers, not just the branches, in the trees.

These symbols appear before the op-name, and indicate the B:: namespace that represents the ops in your Perl code. OP flags are either public or private. The public flags alter the behavior of each opcode in consistent ways, and are represented by 0 or more single characters.
 v      OPf_WANT_VOID    Want nothing (void context)
 s      OPf_WANT_SCALAR  Want single value (scalar context)
 l      OPf_WANT_LIST    Want list of any length (list context)
                         (Want is unknown)

Private flags, if any are set for an opcode, are displayed after a '/':

8  <@> leave[1 ref] vKP/REFC ->(end)
7  <2> sassign vKS/2 ->8

They're opcode specific, and occur less often than the public ones, so they're represented by short mnemonics instead of single-chars; see op.h for gory details, or try this quick 2-liner:

$> perl -MB::Concise -de 1
DB<1> |x \%B::Concise::priv

The spec is copied and scanned for the following items; data is substituted in, and other manipulations like basic indenting are done, for each opcode rendered. There are 3 kinds of items that may be populated; special patterns, #vars, and literal text, which is copied verbatim. (Yes, it's a set of s///g steps.)

This ucfirst form of #var generates a tag-value form of itself for display; it converts '#Var' into a 'Var => #var' style, which is then handled as described above. (Imp-note: #Vars cannot be used for conditional-fills, because the => #var transform is done after the check for #Var's value).

The following variables are 'defined' by B::Concise; when they are used in a style, their respective values are plugged into the rendering of each opcode. Only some of these are used by the standard styles, the others are provided for you to delve into optree mechanics, should you wish to add a new style (see "add_style" below) that uses them. You can also add new ones using "add_callback".

The common (and original) usage of B::Concise was for command-line renderings of simple code, as given in EXAMPLE. But you can also use B::Concise from your code, and call compile() directly, and repeatedly. By doing so, you can avoid the compile-time only operation of O.pm, and even use the debugger to step through B::Concise::compile() itself. Once you're doing this, you may alter Concise output by adding new rendering styles, and by optionally adding callback routines which populate new variables, if such were referenced from those (just added) styles.

use B::Concise qw(set_style add_callback);
add_style($yourStyleName => $defaultfmt, $gotofmt, $treefmt);
add_callback(
    sub {
        my ($h, $op, $format, $level, $stylename) = @_;
        $h->{variable} = some_func($op);
    });
$walker = B::Concise::compile(@options, @subnames, @subrefs);
$walker->();

compile accepts options as described above in "OPTIONS", and arguments, which are either coderefs or subroutine names. It constructs and returns a $treewalker coderef, which when invoked, traverses, or walks, and renders the optrees of the given arguments to STDOUT. You can reuse this, and can change the rendering style used each time; thereafter the coderef renders in the new style.

walk_output lets you change the print destination from STDOUT to another open filehandle, or into a string passed as a ref (unless you've built perl with -Uuseperlio).

my $walker = B::Concise::compile('-terse','aFuncName', \&aSubRef);  # 1
walk_output(\my $buf);
$walker->();                      # 1 renders -terse
set_style_standard('concise');    # 2
$walker->();                      # 2 renders -concise
$walker->(@new);                  # 3 renders whatever
print "3 different renderings: terse, concise, and @new: $buf\n";

When $walker is called, it traverses the subroutines supplied when it was created, and renders them using the current style. You can change the style afterwards in several different ways:

1. call compile, altering style or mode/order
2. call set_style_standard
3. call $walker, passing @new options

Passing new options to the $walker is the easiest way to change amongst any pre-defined styles (the ones you add are automatically recognized as options), and is the only way to alter rendering order without calling compile again.
Note however that rendering state is still shared amongst multiple $walker objects, so they must still be used in a coordinated manner. Errors in rendering (non-existent function-name, non-existent coderef) are written to STDOUT, or wherever you've set it via walk_output(). Errors using the various *style* calls, and bad args to walk_output(), result in die(). Use an eval if you wish to catch these errors and continue processing.

Stephen McCamant, <smcc@CSUA.Berkeley.EDU>.
http://search.cpan.org/dist/perl-5.17.8/ext/B/B/Concise.pm
CC-MAIN-2015-32
refinedweb
1,411
61.06
. For any of the packages, a fraction of an hour will be charged for an entire hour (hint: there's a Math method that will help). Design a class that calculates a customer's monthly bill. -It should store the letter of the package the customer has purchased (A, B, or C) and the number of hours that were used. -It should have a method that returns the total charges. [I'm having trouble on this part] -It should also calulate the amount of money Package A customers would save if they purchased packages B or C, and the amount of money Package B customers would save if they purchased Package C. If there is no savings, no message should be printed. The tester class would prompt the user for the package and the number of hours used, construct a new instance of the class and invoke the method(s) to calculate and display the bill. This is my class public class InternetPackages { //declaring constants and variables private char packages; private double hours; private double PACKAGE_A_PER_MONTH = 9.95; private double PACKAGE_B_PER_MONTH = 14.95; private double PACKAGE_C_PER_MONTH = 19.95; private double savings1 ; private double savings2; //argument constructor public InternetPackages (char pack, double hr){ packages = pack; hours = hr; } //Methods: To get Package Type, Amount, and Savings public String getPackage(){ String p; //if/else statement to decide user's package if (packages == 'A' || packages == 'a') p = "Package A"; else if (packages == 'B' || packages == 'b') p = "Package B"; else if (packages == 'C' || packages == 'c') p = "Package C"; else p = "Illegal input. 
Please try again.";
        return p;
    }

    public double getAmount(){
        double amount;
        // if/else statement to determine total amount of the bill
        if (packages == 'A' || packages == 'a')
            amount = (Math.ceil(hours) - 10) * 2 + PACKAGE_A_PER_MONTH;
        else if (packages == 'B' || packages == 'b')
            amount = (Math.ceil(hours) - 20) + PACKAGE_B_PER_MONTH;
        else if (packages == 'C' || packages == 'c')
            amount = PACKAGE_C_PER_MONTH;
        else
            amount = 0;
        return amount;
    }

    public double getSavingsB(){
        final double SAVINGS_B = ((Math.ceil(hours) - 10) * 2 + PACKAGE_A_PER_MONTH)
                - ((Math.ceil(hours) - 20) + PACKAGE_B_PER_MONTH);
        double savingsB;
        // if/else statement to calculate what the user could've saved with the B or C packages
        if (packages == 'A' || packages == 'a')
            savingsB = SAVINGS_B;
        else
            savingsB = 0;
        return savings1 = savingsB;
    }

    public double getSavingsC(){
        final double SAVINGS_C = ((hours - 10) * 2 + PACKAGE_A_PER_MONTH) - (PACKAGE_C_PER_MONTH);
        final double SAVINGS_C_FOR_PACKB = ((hours - 20) + PACKAGE_B_PER_MONTH) - (PACKAGE_C_PER_MONTH);
        double savingsC;
        // if/else statement to calculate what the user could've saved with the B or C packages
        if (packages == 'A' || packages == 'a')
            savingsC = SAVINGS_C;
        else if (packages == 'B' || packages == 'b')
            savingsC = SAVINGS_C_FOR_PACKB;
        else
            savingsC = 0;
        return savings2 = savingsC;
    }

    public String getToString(){
        String savings;
        // if/else statement to output savings
        if (packages == 'A' || packages == 'a')
            savings = "You would save $" + savings1 + " if you changed to Package B and $"
                    + savings2 + " if you switched to Package C.";
        else if (packages == 'B' || packages == 'b')
            savings = "You would save $" + savings2 + " if you switched to Package C.";
        else
            savings = "This is the best package!";
        return savings;
    }
}

and this is my tester:

import java.text.DecimalFormat;
import java.util.Scanner;

public class InternetPackagesTester {
    public static void main(String[] args) {
        // declaring input
        Scanner input = new Scanner(System.in);

        // asking user for package and hour info
        System.out.println("Which package do you have: A, B, or C?");
        System.out.println("How many hours did you use?");
        InternetPackages pack = new InternetPackages(input.next().charAt(0), input.nextDouble());

        // round to the hundredth decimal point
        DecimalFormat money = new DecimalFormat("00.00");
        System.out.println("You have " + pack.getPackage());
        System.out.println("Your bill for this month is $" + money.format(pack.getAmount()));
        System.out.println(pack.getToString());
        System.out.println("Thank you for your time and business. Have a wonderful day!");
    }
}

When I run it, everything is fine except for the savings. If I were to input "A" and 30 hours, it prints: "You would save $0.0 if you changed to Package B and $0.0 if you switched to Package C." I don't understand why the 0.0 values appear instead of the actual savings.
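A likely cause (not stated in the thread, so treat this as a diagnosis to verify): `getToString()` reads the fields `savings1` and `savings2`, but those fields are only assigned inside `getSavingsB()` and `getSavingsC()`, and the tester never calls either method, so both fields keep Java's default value of 0.0. A minimal, self-contained sketch of the same shape (hypothetical class, not the original assignment) demonstrates the behavior:

```java
// Minimal sketch showing why a field read before its computing method
// runs stays at Java's default of 0.0. getToString() in the post has the
// same shape: it reads savings1/savings2, which are only assigned inside
// getSavingsB()/getSavingsC().
public class SavingsDemo {
    private double savings1; // instance fields of type double default to 0.0

    public double computeSavings() { // must be called before any report
        return savings1 = 12.5;      // assigns the field and returns it
    }

    public String report() {         // reads the field as-is
        return "You would save $" + savings1;
    }

    public static void main(String[] args) {
        SavingsDemo d = new SavingsDemo();
        System.out.println(d.report()); // prints "You would save $0.0"
        d.computeSavings();
        System.out.println(d.report()); // prints "You would save $12.5"
    }
}
```

If this is indeed the problem, calling `pack.getSavingsB()` and `pack.getSavingsC()` in the tester before `pack.getToString()` should make the real savings appear.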
http://www.javaprogrammingforums.com/whats-wrong-my-code/25974-help-if-else-statement.html
By: John Kaster
Abstract: Documentation on the QualityCentral (QC) web service interfaces

This is a working document for using the QualityCentral (QC) Web Service. The client interface for the web service is documented here. The web service does evolve in a backward-compatible way, so be sure to check it for updates if you're implementing a new client.

When you look at the documented interface listed above, you will note that many of the routines return a Base64Binary type. The return value for these methods is a compressed XML version of a ClientDataSet data packet. To find out more about XML data packets for ClientDataSet, see the article The Express Way to the Internet, Part 2. For Delphi, C++Builder, or Kylix applications, you can directly assign the uncompressed and unencoded version of this result set to a ClientDataSet component's XMLData property.

Almost every method in QC requires a session ID parameter. You can retrieve a session ID by calling the login method with your registered QC email address and password. Session IDs are actually returned as a column of a single user record that includes things like your user ID, sysop level, email ... everything you can change on the options dialog except your password. You will need to extract the session ID value from this data packet. The session ID packets returned are compressed with zlib. The Delphi code submission for encoding and compressing listed below provides routines that will handle session IDs. Once you have extracted the session ID, you can pass that as a plain-text string to all the methods that require it.

Currently, your session ID is active for 90 minutes. If you make another call within 90 minutes, your session ID will be refreshed and good for another 90 minutes. If your session ID expires, a SOAP fault will be returned. If this occurs, you can trap that exception and log back in to retrieve a new session ID. Once you have an active session ID, you're ready to call the methods of the web service.
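The trap-and-relogin flow described above can be sketched as follows. All class and method names here are illustrative, not the actual QC client API, and the expired session is simulated with a local exception where a real client would trap the SOAP fault returned by the service:

```java
import java.util.function.Function;

// Sketch of the retry pattern: if a call fails because the session ID
// expired (older than 90 minutes), log in again and retry once with the
// fresh session ID. Names are hypothetical.
public class SessionRetry {
    static class SessionExpired extends RuntimeException {}

    private String sessionId;
    private int loginCount = 0;

    // Stand-in for the real login(email, password) call.
    String login() {
        loginCount++;
        return "session-" + loginCount;
    }

    SessionRetry() { sessionId = login(); }

    <T> T call(Function<String, T> method) {
        try {
            return method.apply(sessionId);
        } catch (SessionExpired e) {   // SOAP fault for an expired session
            sessionId = login();        // log back in for a fresh session ID
            return method.apply(sessionId);
        }
    }

    public static void main(String[] args) {
        SessionRetry client = new SessionRetry();
        // Simulated service method that rejects the first session it sees.
        Function<String, String> flaky = sid -> {
            if (sid.equals("session-1")) throw new SessionExpired();
            return "ok with " + sid;
        };
        System.out.println(client.call(flaky)); // retried with session-2
    }
}
```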
If a method is not yet implemented, you will receive a SOAP fault indicating that it is not yet implemented. The names of most of the methods are self-explanatory. One that is important for writing your own web service client to QC is LastInterfaceChange. This will return the date and time of the last change made to the client web service interface. Additional parameters may be added to existing methods, and new methods may appear. It is not likely that a change that breaks the current interface will be required. However, since this is a beta, it's certainly possible some interface incompatibilities may happen. If you're writing your own client, the CurrentClientVersion method is probably of less interest than the above method, since it returns the version of the current GUI client :-).

GetOutline returns the outline areas for all projects. It is an internally-related dataset. You will need to filter this outline based on the active project. GetProjects returns the list of all projects.

These source code routines may help you when writing your own web service client for QualityCentral. This is the only code snippet currently available for Java:

    import java.io.*;
    import java.util.zip.InflaterInputStream;
    import javax.mail.internet.MimeUtility;

    InputStream in = new InflaterInputStream(
            MimeUtility.decode(
                    new ByteArrayInputStream(bytes), "base64"));

When you write a client, you must first login and retrieve a session ID to use for most of the available web service methods. See the notes on session IDs above.

To help eliminate garbage data in QC, a test user and test areas have been created. When you are testing creation or editing of reports, use the test user. You can login as the test user with the following information: Please use the "Test project" area for client testing purposes. Data in this area will be periodically removed. We'll probably leave them active for 72 hours or so, depending on how loaded with data the test area gets.
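A fuller, self-contained version of the decode step in the snippet above is sketched below. It uses the standard `java.util.Base64` instead of javax.mail's `MimeUtility` (an assumption; either decoder works on a Base64 payload), and round-trips a sample packet so it can run without a live QC server:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Base64;
import java.util.zip.DeflaterOutputStream;
import java.util.zip.InflaterInputStream;

// Sketch: decode a Base64 string and inflate the zlib-compressed bytes,
// the format the QC methods use for their data packets.
public class QcDecode {
    static String decodePacket(String base64) throws IOException {
        byte[] compressed = Base64.getMimeDecoder().decode(base64);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try (InputStream in = new InflaterInputStream(new ByteArrayInputStream(compressed))) {
            byte[] buf = new byte[4096];
            int n;
            while ((n = in.read(buf)) > 0) out.write(buf, 0, n);
        }
        return out.toString("UTF-8");
    }

    public static void main(String[] args) throws IOException {
        // Round-trip demo: compress and encode a sample packet, then decode it.
        byte[] xml = "<DATAPACKET/>".getBytes("UTF-8");
        ByteArrayOutputStream packed = new ByteArrayOutputStream();
        try (DeflaterOutputStream def = new DeflaterOutputStream(packed)) { def.write(xml); }
        String base64 = Base64.getEncoder().encodeToString(packed.toByteArray());
        System.out.println(decodePacket(base64)); // prints <DATAPACKET/>
    }
}
```

The resulting XML string is what a Delphi client would assign to a ClientDataSet's XMLData property.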
Some of the word values you can pass to the lookup table are documented here. Check back for updates to this list.

Value - Description
Project - List of public projects. Pass the ID from this column to everything requiring a project.
Platform - List of all platforms. NULL project columns belong to all projects; otherwise the ID from the corresponding project's record indicates it belongs to that project. The ID is stored when a platform is asked for.
Version - Lists all versions. The project column indicates what project that version is for. lookup_value is stored.
Type - Lists all types. lookup_value is stored.
Severity - Lists the severity lookups. lookup_value is stored.
Status - Lists the statuses. ID is stored.
Resolution - Lists all the resolutions. ID is stored.

If you pass an empty string for the lookup name you will get all the lookups (this is the way the current client works; they are then programmatically assigned their own ClientDataSets on the client). The sort IDs that come back for a lookup_name/project tuple are unique, and change the sort order of the tree view. The lastDays argument that many of the methods accept is for items that have been created or modified in the last <NumberOfDays> days.

For GetAttachment, you can specify a list of files with a <cr><lf> pair separating each file name. Wildcards are not currently accepted; the file name must be specific. If you want all the files, just send the empty string.

The user information is returned as part of the login process when you get the session ID. It is a superset of what GetUserInformation will probably be. GetUserInfo is for getting the public information of another user. In the GUI client, this is only used at the Sysop level to hide things non-Sysop users can't do (even if they tried, the service always checks that the sessionid belongs to a sysop), and to stop them from editing/deleting based on the user_id.
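Building the GetAttachment argument is just a matter of joining names with CR+LF pairs. A tiny sketch (the file names are hypothetical examples, and the helper name is illustrative):

```java
// Sketch: build the GetAttachment file-list argument -- specific file
// names separated by <cr><lf> pairs; an empty string requests all files.
public class AttachmentList {
    static String fileList(String... names) {
        return String.join("\r\n", names);
    }

    public static void main(String[] args) {
        // Shown with the separator made visible for the demo.
        System.out.println(fileList("readme.txt", "stacktrace.log").replace("\r\n", "|"));
        // prints readme.txt|stacktrace.log
        System.out.println(fileList().isEmpty()); // empty list == all files; prints true
    }
}
```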
If you look at things like user_id (which more than likely will not be returned by GetUserInformation, since no one else needs to know my user_id), it is never passed back to the service. The user ID is always determined from the session ID returned from the Login method.
http://edn.embarcadero.com/en/article/33866
The Gonzales Cannon, Issue 52, Thursday, September 22, 2011. 50 Cents.
Reporting on Gonzales and Surrounding Counties with Honesty, Integrity and Fairness

Inside this issue: Apaches face Yoakum in Homecoming game (Sports, Page C1). Winners in annual First Shot Cookoff (Local, Page A10). Singers head CATI festival lineup (Music, Page D1).

Today's Quote: "The cultivated mind is the guardian genius of democracy." Mirabeau B. Lamar

Today in Texas History, September 22, 1964: On this day in 1964, the Nimitz Museum in Fredericksburg bought the Nimitz Hotel. The hotel, a unique building, was built in the late 1840s or early 1850s. Charles H. Nimitz, grandfather of Admiral Nimitz, bought it in 1855. The hotel was remodeled many times. Its remarkable steamboat superstructure was added sometime after 1888. Over the years many notable persons stayed there, including President Rutherford B. Hayes and Robert E. Lee. In 1964 it was renovated and reopened on Admiral Nimitz's birthday as a museum. The Admiral Nimitz Museum is now part of the National Museum of the Pacific War, a seven-acre site dedicated to retelling the story of World War II in the Pacific Theater.

Come and Take It, Semper Fi: Hometown heroes Cpl. Jimmy Navarro and Cpl. Matthew Craven of Gonzales prepare to celebrate "Come and Take It" while serving with the Marines in Afghanistan. (Courtesy photo)

Gonzales puts restrictions on water usage
By NIKKI MAXWELL, newseditor@gonzalescannon.com
Stage 2 water restrictions are officially in effect for the City of Gonzales as part of the city's drought contingency plan, under an ordinance approved by the Gonzales City Council Tuesday night during a called meeting. "The changes in this ordinance more accurately reflect where the (water) levels should be," said City Manager Allen Barnes, referring to the Guadalupe River water flow measurements, which report to be right above 100 cubic feet per second (cfs). "This ordinance changes our drought contingency plan, changing it to 150 feet instead of 100 feet," Barnes said. "The only difference between stage 1 and stage 2 is stage 1 is voluntary," said Gary Shock, director of the city's waste-water treatment plant. Shock went on to explain the specific restrictions of Stage 2 to the council and public. Based on river flow criteria ... (GONZALES, Page A3)

County evaluates fire response
By NIKKI MAXWELL, newseditor@gonzalescannon.com
Evaluating the performance of responders and the effectiveness of equipment is a standard procedure after any emergency situation. Following that protocol, Gonzales County elected officials, fire fighters and law enforcement personnel met at the Gonzales County Courthouse, Sept. 15, to discuss the recent wildfires north of the county and identify how some issues can be improved. "The Delhi fire was near and dear to us, and Gonzales County did a lot to be proud of," said Gonzales County Fire Marshal Keith Schmidt. "During my time in fire service I have never seen an operation of that size. I do know we all learned a lot from it, and I'm proud of everyone who stood up to this thing (fire)." The debriefing was led by Schmidt, who also serves as the City of Gonzales Fire Chief. He stressed the importance of an emergency management plan and said that due to the rotation of some key personnel, routine practice and drills are essential. He then introduced the County's new Emergency Management Coordinator, Jim Harliss, who assumes the duties Oct. 1. "Gonzales County as a whole can work together in any emergency," Schmidt said, citing the Global Connect system used to notify citizens in emergency situations. "It's up and running and we're learning our way through it, but we need people to get registered." Global Connect is employed during power outages, road closures, fire, flooding and weather alerts. Schmidt said it's a lot more powerful and user friendly than previous systems. Commissioner Kevin LaFleur asked citizens to register their cell phones, land lines and addresses. Constable Raliegh Measom asked Schmidt about communication ... (DEBRIEF, Page A3)

St. George St. work to begin on Sept. 26
The City of Gonzales reminds residents that paving, curb and gutter installation will begin on the 100 and 200 blocks of St. George Street next Monday, Sept. 26. St. George will be closed to thru traffic during construction. Please do not park on the street between the hours of 7 a.m. and 6 p.m. The work will take approximately 6-8 weeks. If you have any questions, please call City Hall at 830-672-2815.

Weather Watch
Thursday: High 92, Low 66, Isolated Thunderstorms. Friday: High 94, Low 67, Sunny. Saturday: High 96, Low 68, Sunny. Sunday: High 98, Low 68, Sunny. Monday: High 98, Low 67, Sunny. Tuesday: High 96, Low 68, Partly cloudy. Wednesday: High 95, Low 68, Partly cloudy.

FAST gives high marks to area schools
By DAVE MUNDY
AUSTIN — Two of the area's school districts achieved near-perfect marks from the office of Texas Comptroller Susan Combs while others saw varying degrees of commendation in the release of the annual 2010 Financial Allocation Study for Texas (FAST). Shiner and Moulton ISDs each received 4.5 stars on the five-star scale in the study, which indexes academic progress versus spending. Shiner is noted for having a "Very Low" spending index and "Strong Relative Progress" academically, while Moulton has a "Low" spending rating and "Strongest Relative Progress" academically. The academic ratings are based on performance on English/language arts and math state-mandated test scores. Both the academic and spending scores are based on three-year averages. Joining those two districts in recording the strongest academic progress was Gonzales ISD.
GISD's spending index was rated at "high," however, and the district ended with a rating of 3.5 stars. Three other districts joined Moulton in earning citations for a "low" spending rating: Waelder, Nixon-Smiley and Luling. Nixon-Smiley received a three-star FAST rating based on "Little Relative Progress" academically, while Waelder and Luling each received 2.5 stars based on "Least Relative Progress" academically. Yoakum ISD and Hallettsville ISD each received ratings of 2.5 stars. Yoakum had a spending index rating of "average" and "Little Relative Progress" academically, while Hallettsville had "Average Relative Progress" but a "high" spending index. Cuero ISD received a rating of just 1.5 stars, with a "very high" ... (FAST, Page A3; story by Dave Mundy, manager@gonzalescannon.com)

Welcoming our newest subscribers: Danny Belsher, Murray Montgomery, Kevin Kelso, Capt. Steve Webb, Chris Ortman, DDS, Peggy Ortmann, David Allison, Tommy Poe, Annette Raab, Flo Blundell. 830-672-8585

Happy Birthday! Sept. 19: Keith Brown. Sept. 25: Patsy Fitzsimmons.

Inside: Crime Beat A2, In Our View A4, In Your View A5, Agribusiness A7, Obituaries A9, Regional B1, Faith B3, Classifieds B4, Sports C1, Puzzle Page D5, Comics D6.

Energy Watch, Wednesday's Prices: Oil $84.80/bbl. Nat. Gas $3.73. Lucas Energy Inc. "LEI" $1.83.

Gonzales High School Royalty: GHS Homecoming Court
The Gonzales High School Homecoming game is scheduled this Friday against Yoakum. The Homecoming Court includes (front, from left) Princess Amber Torres, Princess Bre-Ann Stafford and Princess Mariah Hastings; (standing) Freshman Duchess Alenis Matamoros, Junior Duchess Taylor Green, Senior Duchess Stephanie Horner, Senior Duchess Lauren Parr, Senior Duchess Katie Staton and Sophomore Duchess Hayley Blanton.
(Photo courtesy Jami Owens, GHS)

"Come and Hear It!" Tune in to radio station KCTI 1450 AM at 8 a.m. Friday and 8 a.m. Tuesday for weekly updates from Gonzales Cannon news editor Nikki Maxwell and General Manager Dave Mundy with KCTI personality Egon Barthels.

Page A2, The Gonzales Cannon, Thursday, September 15, 2011

Victim bites off suspect's finger
By NIKKI MAXWELL, newseditor@gonzalescannon.com
A man was rushed to Gonzales Memorial Hospital Tuesday morning to be treated for a severed finger. Gonzales police officers responded to a domestic disturbance and assault call at 7:48 a.m. at the 2600 block of Winding Way Drive in Gonzales. According to the report, the assault victim bit the finger off the alleged attacker in self defense. The identities of the victim(s) in the case were not released. "We don't know at this time if the finger was reattached successfully, but we are investigating and assault charges may be filed," said Capt. Allen Taylor of the Gonzales Police Department.

In other news, Gonzales police officers were dispatched to a leased RV site at J.B. Wells Park Sept. 16 at 11:20 a.m., after a report of a loud party and people taking their clothes off. "The report said people were stripping down to their underwear, but when officers arrived on the scene the people dancing were not in their underwear," Taylor said. "The officers told them to keep the noise down and keep their clothes on."

While on patrol Monday evening, Gonzales police officers stopped a tan Ford Explorer for failing to signal at a turn. The officers noticed that the passenger in the back seat was making suspicious movements, and asked him to exit the vehicle. Once outside the vehicle, Mark Alfred Diogu, 29, of Gonzales, ran from the scene and a witness saw him toss something. Crack cocaine was recovered from the area where Diogu threw something, and it is being sent to the crime lab in Austin this week to be tested. "Officer Matt Camarillo caught him on foot and called for back up," Taylor said. "Diogu took off again but Camarillo jumped on his back, got the suspect to the ground and subdued him." Camarillo suffered a broken leg in three places. Diogu was arrested and charged with resisting arrest, evading arrest with injuries, and possession of a controlled substance.

Gonzales Police Department Activity Report for the Week of Sept. 19:
09/15/2011 Reported Hit and Run Accident at 1800 Blk Water St.
09/15/2011 Reported Hit and Run Accident at 2000 Blk Hwy 183.
09/15/2011 Johnny Cantu, 64, of Gonzales, Arrested and Charged with Public Intoxication at 800 Blk St. Andrew St.
09/16/2011 Reported Forgery at 100 Blk Wallace St.
09/16/2011 Sadie Cardenas Ybarbo, 33, of Gonzales, Arrested and Charged with Criminal Trespass at 1000 Blk Henry St.
09/17/2011 Reported Assault at 1200 Blk St. Matthew St.
09/18/2011 Reported Burglary of a Building at 500 Blk St. Matthew St.
09/18/2011 Reported Sexual Assault at 700 Blk St. Paul St., which is still under investigation.
09/19/2011 Reported Theft at 300 Blk Carroll St.
09/19/2011 Mark Alfred Diogu, 29, of Gonzales, Arrested and Charged with Resisting Arrest, Evading Arrest with Injuries, and Possession of Controlled Substance at the 1400 Blk Cavett St.
09/20/2011 Reported Assault at 2600 Blk Winding Way Drive.

Suspect in custody after college shooting
VICTORIA — Victoria College was reported to be on lockdown amid reports of a shooting at the tennis courts late Wednesday afternoon. Students received the following message via text from the College: "Shots have been fired on the Victoria College main campus. All buildings should be locked and all individuals on campus should stay inside their current location. Law enforcement officers are on site." News reports indicated around 5 p.m. that a suspect was in custody, but further details were not available at press time.
Cannon News Services, newseditor@gonzalescannon.com

DNA leads to break in animal cruelty case
LA GRANGE — Fayette County Sheriff Keith Korenek reported Friday that a 2010 cold case of animal cruelty and mailbox damage turned active after DNA analysis provided a suspect. In July of 2010 Sgt. Charles Jobb responded to Helcamp, Adamcik, W. Sedan, Vornsand, and Seydler Roads for reported mailbox damages and two head of cattle which were shot. Evidence on the scene was processed and submitted to the Texas Department of Public Safety Crime Laboratory in Austin. Due to the backlog of cases to process, the wait was lengthy, but DNA matched a record currently on file and maintained by the Laboratory, indicating a suspect. Sgt. Jobb followed up on this discovery and soon interviewed one suspect in the case. The interview resulted in two other adult males being named as well as one juvenile. All suspects in the case have been interviewed by Sgt. Jobb and confessed to the crimes. The case will be presented to Fayette County District Attorney Peggy Supak for prosecution. Those persons involved in the case will be named at a later date once charges are accepted.

Gonzales Co. Sheriff's Office Report
The Gonzales County Sheriff's Office Report for 09/11/11-09/17/11:
09/11/11 Campos, Hector Villazana, 02/1962, Gonzales. No Drivers License Issued. Released on Time Served. Local Warrant - Theft of Property >$1,500 <$20K. Released on $10,000 bond.
09/12/11 Flores, Rocky, 09/1978, Gonzales. Local Warrant - Assault causes Bodily Injury Family violence. Released on Time Served.
Gomez, Jose Fernando, 11/1983, Waelder. Commitment/Sentence - Driving while Intoxicated. Remains in Custody.
09/13/11 Silbas, Sophia Estelle, 03/1984, Gonzales. Local Warrant - Theft of Property >$20 <$500 by Check. Requires $1,500 Bond. Remains in Custody.
Mathis, Blake O'Neal, 07/1983, Gonzales. Local Warrant - Evading Arrest Detention. Requires $2,500 Bond.
Local Warrant - Driving while License Invalid with previous Conviction or Suspension. Remains in Custody.
Esparza, Pedro Gutierrez, 12/1955, Waelder. Local Warrant - Disregard Stop Sign. Released on $500 PR Bond.
09/14/11 Salazar, Bernardo Garcia, 05/1983, Cuero. Local Warrant - Driving while Intoxicated. Requires $1,500 Bond. Immigration Detainer. Remains in Custody.
Spears, Brandon, 03/1985, Luling. Local Warrant - Criminal Nonsupport. Requires $1,000 Bond. Caldwell County Hold. Evading Arrest Detention. Caldwell County Hold. Possession of Marijuana <2 oz. Requires $5,000 bond. Remains in Custody.
Villanueva, Richmond, 03/1973, Karnes City. Local Warrant - Driving while Intoxicated. Requires $2,500 bond. Remains in Custody.
09/16/11 Wisdom, Frederick James, 11/1968, San Antonio. Local Warrant - Possession Promotion of Child Pornography. Requires $25,000 Bond. Local Warrant - Traffic - Speeding. Requires $208.10 Fine. Local Warrant - Expired Drivers License. Requires $215.00 Fine. Local Warrant - Public Intoxication. Requires $465.00 Fine. Local Warrant - Failure to Appear. Requires $410.00 Fine. Remains in Custody.

Total arrests, court commitments, other agency arrests and processings: GCSO 10, DPS 04, GPD 05, WPD 00, NPD 02, Constable 00, DWCSO 00, DEA 00, TPW 00, GCAI 00. Total 21.

Bernshausen earns Fayette Co. citation
Fayette County Sheriff Keith Korenek reports he has recently given recognition to a Deputy for outstanding performance at the Sheriff's Office. Sheriff Korenek has implemented an award program through the Sheriff's Office to recognize his Deputies in dedicated performance to the citizens of Fayette County and to the duty of being a Deputy for the Fayette County Sheriff's Office. Through this program, a Deputy will be acknowledged for their service each quarter throughout the year and receive a plaque for this accomplishment.
Sheriff Korenek is proud to report that Deputy Dusty Bernshausen has received this prestigious award for Deputy of the Third Quarter of 2011.

Gonzales Municipal FTA List
Gonzales Municipal Court: Court Date Sept. 14
Defendants who receive a citation(s) must appear on or before the date indicated on the citation(s). Their appearance must be in writing, in person or by an attorney, and any change of address must be given to the court. Defendants listed below have recently missed their scheduled court date, and their failure to respond will result in a warrant(s) being issued for their arrest, with an additional charge of violate promise to appear added to their fine. In addition to the original charge, there will be a warrant fee for violate promise to appear. In addition, you may be denied the renewal of your driver license from the Department of Public Safety, and collection of debt fees by attorneys at law.
Ramon Rivera, Eduardo Luis Arellan, Deanna M. Bailey, Thomas Enriquez, Jr., Daniel Almarez, Rhonda Simmons, Jose Azua Bautista, Rebecca Castillo, John Vasilio Aleman, Jr., Rigo Sandoval Rojas, Arthur Lakey, Jr., Bianca Stewart, Dennis Lee Trujillo, Angela Fonseca, Kristie Marie Perez, Justin Sepulveda Sepulveda, Francisco Moreno, Emuil Greathouse, Jose Alfonso Reyes-Hernandez, Enrique Lopez Flores, Christopher Espinosa, Madison Marcus Walter, Marion Taylor, Jr., Hugo Hernandez, Fabian Humberto Medrano, Ricardo Veliz, William Marquis Robinson, Kory Tyler.
The above listed defendants need to contact the court as soon as possible at 830-672-2815. If you have any outstanding fines your name may make the next list.

Deputy Dusty Bernshausen grew up in Fayette County and graduated from Round Top – Carmine High School. Bernshausen then continued his education at Blinn College and graduated from Blinn in 2000 with an Associates Degree in Criminal Justice. Bernshausen then attended TEEX Law Enforcement Academy in College Station and graduated on May 31, 2002.
On June 1, 2002 Bernshausen’s law enforcement career began here in Fayette County as a Patrol Deputy. Bernshausen currently holds the position as Patrol Deputy Level Three and carries an Advanced Peace Officer License. Bernshausen resides in Nechanitz and is married to his wife Jennifer Bernshausen. They have three children, Crew Bernshausen, Kash Wessels, and Charlee Wessels. DeWitt Co. Sheriff’s Office Report DeWitt County Sheriff’s Office Report for Sept. 8-14: Jail Average Daily Count74; Inmates Housed for other Agencies- 4 September 8, 2011 Jeremiah Miller, 22, of Cuero, Criminal Trespass, Bond of $1,000, Possession of Marijuana < 2 OZ, Bond of $1,000, Violation of Probation / Possession of Dangerous Drugs, Bond of $1,000, Theft by Check, Bond of $1,000, Bail Jumping and Failure to Appear, Bond of $1,000, Cuero PD Taffie Etoll, 25, of Yoakum, Violation of Probation / Forgery of Financial Instrument, Bond of $50, 000, DCSO September 9, 2011 Randy Flores, 23, of Yoakum, Violation of Probation / Criminal Trespass, Bond of $2,000, DCSO Craig Dolan, 30, of Cuero, Violation of Probation / Indecency / Sex / Assault Child, Bond of $20,000, DCSO Christopher Huerta, 34, of Cuero, Violation of Probation / Deadly Conduct, No Bond, USMS Anthony Taylor, 30, of Yoakum, Violation of Probation / Sexual Assault Child, Bond of $20,000, DCSO Patricia Saenz, 30, of Cuero, Violation of Probation / Burglary of a Building, No Bond, Possession of a Dangerous Drug, Bond of $2,000, Possession of a Dangerous Drug, Bond of $2,000, Possession of a Dangerous Drug, Bond of $2,000, Possession of a Dangerous Drug, Bond of $2,000, Possession of a Dangerous Drug, Bond of $2,000, Possession of a Dangerous Drug, Bond of $2,000, Cuero PD September 10, 2011 Enrique Patlan Vanegras, 22, of Cuero, Public Intoxication, Fine of $355, Illegal Entry, No Bond, DPS Bernado Garcia Salazar, 28, of Cuero, Public Intoxication, Fine of $355, Violation of Probation / Driving Under the Influence, Bond of $1,500, 
DPS, Illegal Entry, No Bond, DPS Cruz Rodriguez Armas, 23, of Cuero, Driving While Intoxicated, Bond of $1,000, Illegal Entry, No Bond, DPS September 11, 2011 Derrick Jacob DelosSantos, 21, of Nordheim, Driving While License Invalid Enhanced, Bond of $1000, Accident Involving Damage to Vehicle GT$200, Bond of $1000, Yorktown PD September 12, 2011 Lupe Garcia, 46, of Cuero, Failure to Comply With Registration/ Sex Offender, Bond of $20,000, Cuero PD Tony Vasquez, 24, of Cuero, Theft of Property by Check GT$20LT$500, Out of Victoria Co, $500 PR Bond, Cuero PD Vernesa Dorsey, 45, of Cuero, Revocation of Probation/ Felony Theft with Previous Conviction and Habitual Felon, Bond of $35,000, Revocation of Probation/ Felony Theft with Previous Conviction and Habitual Felon, Bond of $35,000, DCSO Ronnie Hendrick, 40, of Westhoff, Criminal Nonsupport, No Bond, DCSO September 13, 2011 Bobby Massey, 22, of Yoakum, Revocation of Probation / Injury to a Child, Bond of $75,000, DCSO Mauro Gonzalez, 28, of Yorktown, Driving While License Invalid w/ Previous Conviction, Bond of $1,000, DPS September 14, 2011 Justin Little, 22, of Cuero, Capias Pro Fine / Driving While License Invalid, Fine of $447.20, Cuero PD Carlos Becerra, 35, of Cuero, Violation of Probation / Felony Theft, No Bond, DCSO Kristin Morris, 35, of Cuero, Theft by Check, Bond of $1,000, Cuero PD Rosalinda Garcia, 39, of Victoria, Revocation of Probation / Credit Card Abuse, No Bond, DCSO Lorena Navarro, 45, of Cuero, Sale to Minors - Alcohol, Bond of $1,000, Cuero PD Ronald Taylor Jr., 19, of Cuero, Public Intoxication, Fine of $314 (30 days to pay), Cuero PD When busy lives meet big responsibilities… With so many demands on your time, some things just have to wait. But don’t put off talking to me about life insurance – it may be the most important thing you ever do. 
Yoakum Police Report
Yoakum Police Department Weekly Incident Report, September 12, 2011 thru September 18, 2011:
09/12/11 Case #11-375, Burglary-Vehicle, 302 Ward; Disposition, Investigation. Case #11-376, Disorderly Conduct, Hopkins; Disposition, Court Citation.
09/13/11 Case #11-378, Disorderly Conduct, 705 Lavaca; Disposition, Court Citation.
09/15/11 Case #11-380, Disorderly Conduct, 201 W. Gonzales; Disposition, Court Citation.
09/17/11 Case #11-381, Burglary-Residence, 311 Plaza; Disposition, Investigation.
09/18/11 Case #11-384, Assault-A/(FV), 509 W. Gonzales; Disposition, Investigation. Case #11-385, Criminal Mischief-C, 509 W. Gonzales; Disposition, Investigation.

Scott T Dierlam, Agent, 1212 E Sarah Dewitt Drive, Gonzales, TX 78629. Bus: 830-672-9661, Fax: 830-672-5444. P092001TX State Farm Life Insurance Company (Not licensed in MA, NY or WI), Bloomington, IL.

Thursday, September 22, 2011. The Gonzales Cannon, Page A3.

Spreading the word: Ten year old Brevin Wilson, formerly of Gonzales, was shopping in a store in Sonora, Calif., when he came across this "Come and Take It" flag for sale. He asked the owner what he wanted for the flag and was told that the flag had "some kind of history" but the man couldn't remember the story. Brevin then proceeded to tell the owner all about the battle of Gonzales and the origin of "Come and Take It". The owner was so impressed that he gave Brevin the flag and it now flies outside his home in Sonora. (Picture courtesy Kimara Wilson)

CITY: Imposes water restrictions (Continued from page A1)
Based on river flow criteria, the Gonzales City Manager has initiated the stage 2 drought response of the Drought Contingency Plan in the City of Gonzales. Stage 2 will go into effect September 14, 2011. During Stage 2, the following water use restrictions shall apply to all persons:
(a) Irrigation of landscaped areas with hose-end sprinklers or automatic irrigation systems shall be limited to five (5) gallons or less, or a drip irrigation system.
(b) Use of water to wash any motor vehicle, motorbike, boat, trailer, airplane or other vehicle is prohibited except on designated watering days between the hours of midnight and 10 a.m. and between 8 p.m. and midnight. Such washing, when allowed, shall be done with a hand-held bucket or a hand-held hose equipped with a positive shutoff nozzle for quick rinses.
(c) Use of water to fill, refill, or add to any indoor or outdoor swimming pools, wading pools, or jacuzzi-type pools is prohibited except on designated watering days between the hours of midnight and 10 a.m. and between 8 p.m. and midnight.
(d) Operation of any ornamental fountain or pond for aesthetic or scenic purposes is prohibited except where necessary to support aquatic life or where such fountains or ponds are equipped with a recirculation system.
(e) Use of water from hydrants shall be limited to fire fighting, related activities, or other activities necessary to maintain public health, safety, and welfare, except that use of water from designated fire hydrants for construction purposes may be allowed under special permit from the Gonzales Water Works.
(f) Use of water for the irrigation of golf course greens, tees, and fairways is prohibited except on designated watering days between the hours of midnight and 10 a.m. and between 8 p.m. and midnight. However, if the golf course utilizes a water source other than that provided by the Gonzales Water Works, the facility shall not be subject to these regulations.
(g) All restaurants are prohibited from serving water to patrons except upon request of the patron.
(h) Violation of the water restriction provisions is a misdemeanor, punishable by a fine of not less than $50 and not more than $1,000. Each day that one or more of the provisions are violated constitutes a separate offense, carrying the same punishment. A person convicted of three or more distinct violations of the water conservation measures is subject to discontinuation of water services to the premises where the violations occurred.
Barnes said the city has ordered signs to be posted around the community in high traffic areas to alert residents of the current water usage restrictions.

Commissioners say: 'No blading' until more rain
By NIKKI MAXWELL, newseditor@gonzalescannon.com
Gonzales County Commissioners are asking citizens to be patient with them about county road conditions. "We've been getting a lot of calls from people asking us when we are going to blade the roads, and our answer is, we don't know," said Commissioner Donnie Brzozowski. "We can't use the maintainer trucks on the dirt roads because it's just too dry. We need more rain first." According to Brzozowski a small brush fire was sparked recently because of a rock in the road. He explained that using the metal equipment on dry roads is a fire hazard, because if the blades scrape against a rock in the road it can lead to sparks, and sparks can lead to fire, something that no one in Texas wants to see. The County is also holding off from mowing for the same reason, until the dry conditions change. "When we have at least two inches of rain over the whole county, then we will get back to doing it," Brzozowski said. "Be patient for the sake of safety." During their meeting Monday morning, the commissioners filed the 2011-2012 Gonzales County Budget, received the Tax Assessor-Collector's monthly report, and approved two minor budget amendments. County Clerk Lee Reidel applauded the actions of the Belmont Volunteer Fire Department during a house fire in her neighborhood Friday afternoon. "They responded fast and took care of it," Reidel said. "We're lucky to have them out there."

DEBRIEF: County reviews response to wildfire issues (Continued from page A1)
... problems during the emergency. Schmidt confirmed that radio communication between responders was an ongoing issue, with inconsistencies in frequencies being a major contributing factor. "We had some communicator problems and there are some radio frequency issues with Waelder," Measom said.

Area Schools: FAST ratings
District | Enrollment | Rating | Spending Index | Academic Progress | Progress Pct
Gonzales ISD | 2,513 | 3.5 stars | High | Strongest Relative Progress | 80
Waelder ISD | 263 | 2.5 stars | Low | Least Relative Progress | 13
Hallettsville ISD | 860 | 2.5 stars | High | Average Relative Progress | 48
Cuero ISD | 1,870 | 1.5 stars | Very High | Little Relative Progress | 36
Shiner ISD | 552 | 4.5 stars | Very Low | Strong Relative Progress | 75
Yoakum ISD | 1,539 | 2.5 stars | Average | Little Relative Progress | 20
Nixon-Smiley CISD | 1,057 | 3 stars | Low | Little Relative Progress | 35
Moulton ISD | 311 | 4.5 stars | Low | Strongest Relative Progress | 90
Luling ISD | 1,455 | 2.5 stars | Low | Least Relative Progress | 8

FAST: Shiner, Moulton earn area's best marks (Continued from page A1)
... spending index and "Little Relative Progress" in academics. In a news release, Combs said the study is designed to show which Texas schools and school districts successfully combine high academic achievement with cost-effective operations. In the new report, 46 school districts statewide ... district size and student characteristics. In addition to ratings for all Texas public school districts, campuses and charter schools, see the FAST website at FASTEXAS.org.

Matamoros Taco Hut Weekly Specials, Sept. 26-Oct. 2: Bean & Egg Taco Breakfast $1.15; Carne Guisada Plate Lunch $4.95. Business Delivery Only; ends at 11 a.m. 201 St. Joseph, Gonzales, 672-6615. Open Sun.-Tues. 6:00 a.m.-2:00 p.m., Wed.-Sat. 6:00 a.m.-8:00 p.m.
“We need to have cops and firemen on the same channel.”

“As we rolled into Delhi we weren’t told what frequency to work on or what tasks they wanted from us,” Schmidt said. He added that Gonzales County radios are VHF and need to be kept in proper, working condition at all times.

Measom asked about 800 frequency radios. “They’re supposed to work well,” he said.

“We have the capability to handle the 800 frequency radios and the system worked pretty good,” Sheriff Glen Sachtleben said. “As a supervisor I don’t need to listen to the ‘woods,’ just what this man (indicating Judge David Bird across the room) needs from me.”

According to Schmidt, the firefighters in Delhi were using channels ‘T-Fire 1 and 3.’ “They were covering everything in two frequencies,” he said. “As the Gonzales fire department arrived they used the same frequencies, but there were some tactical concerns with limited channels on all the radios.”

Sachtleben suggested using attack channels only, and keeping the other frequencies clear. “Whatever we can do to back you up, we will do it,” Sachtleben said to Schmidt.

During the fires, Gonzales County Sheriff’s Office personnel were involved in evacuation preparation warnings and answering questions from Gonzales County residents with property near the north county line.

Schmidt explained that Sachtleben is the county’s public information officer, and said that in the future he will funnel the information to the sheriff for release instead of handling that collateral duty himself. Schmidt made hourly reports on KCTI AM 1450 during the nearly three-day fire emergency in Delhi, updating listeners on the status of the fires and any changes to evacuation orders. Judge Bird credited Schmidt for juggling his duties during the emergency, but added that in the future the public information officer will handle that for him.
According to the group, there was a lot of confusion during the emergency, with hundreds of phone calls being received between the Gonzales Fire Station and the Sheriff’s Office. “At one point we had more than 70 messages on our answering machine asking about the fires,” said a Gonzales fire fighter at the meeting. “The switchboard at the Gonzales County Sheriff’s Office was locked down for four hours. That’s why they were calling everywhere else.”

Throughout the emergency, reverse 9-1-1 phone calls went out with recorded messages from Judge Bird, informing the public of the status in their area. “It would have been an asset if you repeated what you said in those messages,” said Justice of the Peace Diedra Voigt. Voigt also suggested utilizing the Gonzales city cable channel for emergency communications. Bird confirmed that text messages can also be sent through the reverse 9-1-1 system.

The communications discussion shifted to include social media options, with Facebook topping the list of resources used during the wildfires. According to Schmidt, while helpful for many people to spread the word, a lot of misinformation also spread like wildfire on the web. “As first responders we instantly became experts in everything we said,” Schmidt said. “We must be careful what we say because it is repeated, and it’s not always accurate.”

Voigt said a lot of the problem came from the citizens who were reporting fires or evacuations without all the facts, causing more fear and confusion. “Facebook is helpful, but people make comments on there and others assume it’s true,” she said. “It needs to have controlled information, not gossip.”

Assistant Fire Chief Kevin Pirkle agreed it is unreliable. “It’s like the old scanners,” he said. “Once somebody gets on it, all of a sudden he or she is a news reporter.”

Schmidt said Global Connect is more reliable than Facebook, but agreed with supporters that it is a resource worth considering in the future.
“We will look at having an official County Facebook page to dispense emergency information.”

The conversation turned to 911 phone calls, and some hiccups in the classic system. “911 overloads quickly. We had all nine lines ringing and if they aren’t answered by the fifth ring they roll over,” Sachtleben said, adding that some of the calls ended up being answered in neighboring counties up to half an hour away.

Measom asked about putting the county’s emergency command trailer on site next time. “The trailer has communication capability,” Schmidt agreed. “We can do more with it in the future.”

Private businesses and public service agencies were credited for their contributions to the fire fight. The Department of Transportation supplied fuel for fire fighting vehicles, the Forest Service and local law enforcement helped with county road closures, and DPS managed state road activity. “We did have some rubberneckers, and some of that impeded emergency vehicle traffic,” Schmidt said.

He said that while fire fighters used their equipment to create fire lines, some citizens freelanced, using bulldozers and trying to save their property by creating fire lines themselves. “They were focused and knew what they were doing,” Schmidt commented.

Brzozowski asked about the county putting someone in charge of heavy equipment. “In the future give the request to the emergency management coordinator, and he will handle that,” Schmidt said.

All agreed they needed to continue working on the county’s emergency command structure as a whole and said they were open to suggestions. “We want to hear from the public and what we can do to continue serving them successfully,” Schmidt said.

“There wasn’t much fire in Gonzales County but the money we spent fighting the fires outside the county was well worth it,” said Gonzales County resident Gilbert Philippus, who attended the debriefing.
Schmidt credited all the firefighters from throughout Gonzales County who helped battle the fires in Delhi and other nearby areas. “When all our fire fighters got there we just thought ‘Let’s put this wet stuff on this hot stuff,’” he said. “They did a great job. Nobody got hurt and everybody went home — that’s what’s most important.”

Page A4

In Our View
Dances with Chihuahuas
Dave Mundy, General Manager

Redistricting isn’t designed to make any sense

It was interesting to see the reactions of residents of the Gonzales County Underground Water Conservation District, and hear of the reactions of citizens of Gonzales, to preliminary presentations about the upcoming redistricting in each of those entities. A lot of folks expressed some genuine surprise to learn that they live in a “protected minority” district — and that any redistricting plan dreamed up must maintain that “protection” or it will get thrown out by the U.S. Department of Justice.

Nice of y’all to tune in, folks. We’ve been trying to tell you about these shenanigans for 30 years or more. In a nutshell, here’s the gist of the situation, both in those governing entities and in every other local government in the State of Texas.

Even though “Hispanics” are now the most numerous ethnic population in Texas, they are still considered a “protected racial minority” by the federal government. The Voting Rights Act of 1965 said that Texas has historically discriminated against racial minorities — it did — so anything and everything that can maximize the impact of “racial minority” voters is justified. Whether or not it’s fair, whether or not there’s any justice in it.

Thus, since the GCUWCD for example has two of its five districts with a Hispanic-majority population of 65 percent or more, at least two of the newly-drawn districts must have 65 percent or more Hispanic residents.
Doesn’t matter whether the population percentages change in the future or not — there must ALWAYS, for now and evermore, be at least two districts which are comprised of at least 65 percent Hispanic people. The same happens in areas where the black population is numerous enough to develop a majority. Look at areas like Harris County, where Sheila Jackson Lee’s congressional district looks like some kindergartener’s artwork because it’s drawn to be comprised almost exclusively of precincts of black voters. She couldn’t win an election in a racially-mixed district because she works very hard to offend everyone who isn’t black (and a lot of those who are).

And yes, “Hispanic” is an ethnicity, not a race. On the Census, you could check the box as both “white” and “Hispanic” — although why you would want to set yourself up for racial discrimination by the federal government by checking the box as “white” escapes me. (You remember the U.S. Justice Department, right? They also consider America’s veterans to be “terrorists” and are helping to arm and supply drug lords in Mexico. Nice folks.)

The Feds consider “Hispanic” to be a “racial minority,” and nothing you say will change their minds. It doesn’t matter that many of those “Hispanics” counted by the Census cannot vote; one gentleman at the GCUWCD meeting identified himself as a door-to-door Census taker, and he readily offered that many of those he included in the count are not citizens of the U.S. and are ineligible to vote. All the Voting Rights Act cares about is skin color and number.

Like so many other federal programs, agencies and regulations, the Voting Rights Act was passed to address a bona fide problem. And like so many federal programs, agencies and regulations, the Voting Rights Act long ago ceased to perform its primary function and has instead taken on a new dimension seemingly diametrically opposed to its original purpose. Rather than ensuring that the voting rights of historically-oppressed minorities are protected, the Voting Rights Act is now used as a tool to divide and conquer. It is the federal version of “ethnic cleansing,” used to systematically disenfranchise whites.

For our local folks, here’s some advice: don’t argue with the poor folks making their presentations to our local governments. It’s not their rules they’re using. As with public education, it’s something being forced on us from Washington, and you’re not going to change this at the local or even state level. There are only two ways you’ll ever change it: one is to convince the federal government to let you sue it, then to win those lawsuits over however many decades it takes, including at the Supreme Court level, to overturn the law. Good luck with that. The other way is to eliminate federal oversight altogether — by getting a divorce from Washington.

How sincere is jobs plan?

prime time. Again, he was back to the “tax and spend” approach with an emphasis on class warfare. Obama has started campaigning for passage of the bill; however, it has not been presented in Congress, not even by a Democrat (as of today, Sept. 13). That is a very curious point. If Texas Congressional Democrats like Henry Cuellar, Ruben Hinojosa, Charlie Gonzalez, and Lloyd Doggett are serious about the bill, why haven’t they stepped up to sponsor it?

The Gonzales Cannon

BOARD OF DIRECTORS
Billy Bob Low, Chairman
Randy Robinson, Vice Chairman
Myrna McLeroy
Mary Lou Philippus, Secretary
Alice Hermann

Dave Mundy, General Manager, manager@gonzalescannon.com
Nikki Maxwell, News Editor, newseditor@gonzalescannon.com
Debbie Toliver, Advertising Director, advertising@gonzalescannon.com
Dorothy Voigt, Business Manager
Cedric Iglehart, Regional News, region@gonzalescannon.com
Mark Lube, Sports Editor, sportseditor@gonzalescannon.com
Sanya Harkey, Circulation/Classifieds, subscriptions@gonzalescannon.com
Letters to the Editor, letters@gonzalescannon.com

‘Reality Check’
Nikki Maxwell, News Editor

Are you on the gift list?

Last Thursday, I came home from work and checked the mailbox as usual. Inside there was the electric bill — Yikes, I’ve been dreading that one — some coupons for eyeglasses and an oil change, and a mysterious white envelope postmarked from Washington, D.C. At first glance I figured it was something from the Department of Veterans Affairs, but then I noticed the symbol in the left-hand corner was not the usual VA emblem. It was DOD (Department of Defense), and marked “URGENT.”

What I found inside shocked me. I read it three times before realizing what it meant. It was a letter from the Department of Defense Bone Marrow Registry, notifying me that I was a preliminary match for a patient in desperate need of a bone marrow transplant. I scanned my memory and couldn’t remember filling out any paperwork for the registry. But the letter said I joined the list in 2002. Suddenly I remembered everything. I was stationed at Navy-Marine Corps News in Washington, D.C. nine years ago and assigned to do a story on the DOD Bone Marrow Program. A bone marrow drive was being held on base so it was a perfect opportunity to interview the people involved, from all perspectives. While I was there taking pictures, one of the medical technicians suggested I roll up my sleeve and give a blood sample to be added to the registry. As a service member, filling out paperwork and being poked by white coats wasn’t a big deal to me. It also gave me an opportunity to experience firsthand what the “registry applicants” went through for my report.

That day I interviewed a man who donated bone marrow twice, and was eager to do it again. I will never forget him, or his story. He was a civil service employee in his mid-fifties, with only a few years left as an eligible bone marrow donor (the cutoff age is 60 years old). He shared his personal experiences with me, and by the end of the interview we were both in tears. He said he had received the call to donate a third time, but the day he was supposed to fly to Arizona for the surgery he was notified that the 14-year-old boy who needed the transplant had died before he could get there. That news devastated him. I will never forget the look in his eyes, or the tremor in his voice as it cracked while he spoke through his tears. He felt guilty somehow, even though it was not his fault. Even though he was a hero who had already saved two other lives, he wanted to do more, save more...

Now, here I am nine years later, following in his footsteps. I gave blood for more tests this week to confirm I am a good match for “Patient X”. I don’t know his/her name or anything about them. All I know is they are in critical need of a bone marrow transplant, and I may be able to help.

As the lab technician at Gonzales Memorial Hospital filled 7 large tubes with my blood Tuesday, I said a silent prayer in my head, asking God to make me a perfect match for this stranger. I want to pick up where the man I interviewed in 2002 left off.

I’ve spoken to my friends and family about the possibility of me being a bone marrow donor and most were 100 percent in support of it, but some of their comments surprised me. One person asked me why I would go through surgery for a stranger. Another said I shouldn’t do it because I have children, and what if something goes wrong during the procedure and I get hurt. I appreciated their concern and input, but the answers came quickly to my heart. In response to the patient being a stranger, I said, “I would want a stranger to do the same thing for me or my family.” In response to something going wrong and me getting hurt, I said, “Anything can happen, anytime, anywhere. A bus could hit me crossing the street (hope I didn’t jinx myself).” Plus, it never hurts to make a deposit in the “Good Karma/Blessing Bank” because you never know when you will need to make a withdrawal. I have been so blessed in my life and had many second chances.
If I can be someone else’s second chance then that is a chance to return the favor.

In the week since receiving that letter, I have done a lot of research on the bone marrow registry and donation process. Much has changed in the nine years since I rolled up my sleeve and got on that list. Medical technology has improved, making the procedure less invasive and more comfortable for the donor, with less recovery time. There are more than 100 diseases that can be treated through bone marrow transplants, but finding a good marrow match is not as easy as you think. In fact, only 30 percent of Americans who need bone marrow transplants find a relative who is a match. The other 70 percent rely on the National Bone Marrow Registry to find a stranger with the same tissue code as the patient. And according to the National Marrow Donor Program, only half the people who need a transplant find a match.

Hopefully I will find out soon if I am the right match for “Patient X.” I will keep you posted. In the meantime, ask yourself what you would do if you were in my situation. And then ask yourself what you would want someone else to do if you were “Patient X” waiting for a stranger to save your life. There is a saying: “A stranger is a friend I haven’t met yet.” I think that definitely applies in this case. I hope I can be a lifesaving friend to a stranger, and someday share my story with a young reporter. Maybe he will roll up his sleeve and the cycle will go on. What a wonderful gift we can give each other if we look beyond ourselves for a moment and see the big picture.

Thursday, September 22, 2011

Scratch Pad
Jim Cunningham

You can’t turn back time, but you can buy a little

We received a press release this past April but didn’t put much truck in it at the time. However, a company representative contacted me this week and asked for a sit-down meeting to discuss plans to resurrect the economy in a number of small communities in Lavaca, Dewitt and Gonzales counties.

Lamar (which is a real name for a man) Experson III, CEO of the Billbaits Mechanical Monetary Making Machine Co. of Duncan, OK, explained, over a tall glass of green tea, “Governments are having a difficult time in these trying times trying to make ends meet. The US of A and Greece and France, well, it’s worldwide. Every country is going broke. So it is just natural to assume the little municipal governments are hurting fer certain also. Looking this way and that and over there trying to make ends meet.”

Experson said that early this spring he opened up his mind one night and discovered a 3-way light bulb burning as bright as the sun. “That’s when it hit me that these little towns should bring back parking meters. Why it is a simple, yet surefire way to generate much needed funds for the burgs’ coffers.”

He continued that it was his intention to coax communities into a trial period of utilizing his company’s parking meters to see if it was feasible to do so.

“Now I know there will be some negatory reaction to such a plan if implemented. Some critics will be squawkin’ like a hen layin’ a square egg. But once the feathers settle it’ll just be a gut reaction. Like a small case of the green apple nasties. But it is my belief, and brother, I wouldn’t fib to you, that the monies generated by the parking meters could get the little towns out of their budget crunches. I swear and declare!” he exclamated.

Experson said the city of Moulton is one such town they are anxious to approach, citing the recent oil boom. “This area is set to become an impacted area. And the locals need to be prepared. Why the activity around here is gonna pick up something fierce. Year-round it’s gonna be as busy as a cranberry merchant at Thanksgiving. I swear and declare.”

Experson said parking meters have been around since 19 and 33, when Oklahoma City merchants were in want of increasing traffic turnover in their stores. So they asked the local newspaper editor, Carl C. Magee, to help them. Magee sponsored a contest, with a grand prize of $500, for engineering students at the University of Oklahoma to develop a timing device that would allocate set amounts of time for parking. The first parking meter was named “Black Maria.” Once the parking meters migrated out of Oklahoma in later years it is assumed a lot of folks said “To hell with them there Okies.”

Experson further ventured, “Sales tax rebates are down. Businesses are folding. The little towns don’t have the luxury of larger corporations, like riding academies, crescent wrench factories or street walkers, to add to the tax rolls. So they have to look to other avenues for income or they’ll just be stuck in a cul de sac. Sort of like a NASCAR driver going in circles. I mean it takes a gazillion to run a village.”

He allows that on certain occasions the money will be pouring into the parking meters. “I understand that once a year Moulton has a Jam’N’Jelly Celebration. And that on Saturday afternoon the downtown street is lined with antique tractors. With the drivers visiting a spell and jawin’ and juicin’ at an establishment for thirst. Well, they have to have a spot to stop those tractors. And that’s where my company comes in.”

Experson says he hopes to get the okay and final say-so in a number of area towns so the parking meters can be in place by April 1, 2012. “Everyone is accused at times for killing time or wasting time. Well, with my parking meters mankind can once again buy time. In increments of 15 minutes at $1, 30 minutes for $1.75 and an hour at $3. Don’t worry about keeping change or folding money. The meters will take debit and credit cards,” boasted Experson.

A few things are to be addressed in the interim, such as what the town’s take would be in comparison to what Experson’s company would bank. But don’t concern yourself with the minor details. That’ll all be taken care of once we’re parked … down the road.

Jim Cunningham is a former long-time Gonzales newsman and the former interim publisher of the Gonzales Cannon.

In Your View, The Gonzales Cannon, Page A5

An omen of the future? Is this an omen of the future? An Oklahoma company is proposing bringing back a monster from the past: the parking meter. A spokesman said it could generate much needed money for a small town’s coffer. Pictured above is a major street in Moulton. (Photo by Jim Cunningham)

Letters to the editor

Dear Editor,
We would like to thank you for accommodating the Come and Take It Bicycle Race on July 23-24, which benefitted Norma’s House. The effort put forth and the privileges given to us by the City of Gonzales, as well as the Gonzales County Sheriff’s Department, allowed us to put on a first-class event while at the same time highlighting what the area has to offer. The feedback from the racers was overwhelmingly good – they loved the small town venue and the courses.

The race was made possible by several companies, groups, and individuals who must be acknowledged for their donations, help, and approvals:

• Southern Clay Products, Inc. for being the title sponsor of the race. Southern Clay Products continually supports events and projects in the Gonzales, TX area and we are very grateful for their support of the event. We would also like to thank the employees of Southern Clay Products who volunteered to help with the event organization and logistics.
• The City of Gonzales for all of their work in helping with the permitting, organization, and logistics for the event. In particular, we would like to acknowledge city manager Charles Windwehen and city employees Carolyn Gibson, Todd Remshel, Robert Miller and the Gonzales City Council for their work on the event.
• Soncrest Egg Company for their generous financial support.
• Don Ford for his generous financial support.
• Gonzales County Sheriff Glen Sachtleben for his support and planning help with the event. Also Chief Deputy Dennis Richter and Deputy Jeremy Belin for their onsite work of the Sunday event in Cost, TX.
• The Gonzales Police Department for their approval and onsite work of the Saturday event. In particular we would like to thank Chief Crow and Officer Tammy West for their help in planning the traffic flow and loaning the bike helmets for the Kids Race. Chief Crow and Lt. West were most supportive in the endorsement of the race and maintaining the safety for the course.
• Jim Russell and the Gonzales EMS for their onsite support at the event in Cost.
• Dr. Hisey and Sievers Medical Clinic for their medical tent at the Saturday event in Gonzales.
• Robert and Jackie Gandre of Cost, TX for the use of their property by race officials and spectators.
• Ms. Doris Charles of Yoakum, TX for the use of her property in Cost for racer parking.
• Barbara Hand for support from the Chamber of Commerce.
• Steve and Beverly Pirkle for assistance with parking for the Cost race.
• Kenneth and Jacque Schumacher, Energy Waste Services, for donating a Port-A-Pottie.
• E-Barr Feeds for square hay bales for the Saturday race.

Also, all of our other financial supporters: Alton Czichos, Inc., EOG Resources, McElroy Sandblasting, Rouse Bicycles, Dickie White Construction, Graham Land & Cattle, GVEC, Gene’s Machine, Inc., Purvis Bearing Service, M&H Crates, Black Hills Bentonite, McLeroy Land Group, Guerra’s Grill, Mid-America Packaging, Schmidt & Sons, Inc., Christian Kids Daycare and Pre-School, Sage Capital Bank, Prospera Financial – Brian Fees.

The event attracted racers from beyond just Texas. Racing in the event were the 2011 U.S.
Men’s Elite National Criterium Champion, the 2009 British National Road Race Champion, the 2008 New Zealand National Time Trial Champion, the winningest female cyclist in US history, the 2011 Mexican Women’s Elite Road Race Champion, and the 2011 Texas Women’s Criterium Champion. In all, over 200 racers traveled to Gonzales, TX for the event. Many stayed in Gonzales hotels, ate in Gonzales restaurants, and shopped in Gonzales stores. The economic impact on the community should have been noticeable. Most importantly, a sum of money was raised for Norma’s House, the benefactor of the event.

We plan on hosting the event in Gonzales, TX and Cost, TX again in 2012. We look forward to a bigger and better event at that time.

Chris Cornetto – Race Promoter
Connie Kacir – Norma’s House President
Brian Fees – Norma’s House Volunteer

Community thanked for CATI Race support

The Manchurian president
Rich Lowry

The paranoid interpretation of Barack Obama’s presidency would be that he’s a plant from the libertarian Cato Institute slyly working to discredit government. Could the tea party have devised a more diabolical scheme than a liberal president delivering a passionate speech plugging an enormous jobs program that won’t work and doing it in grandiose terms that identify it with the historic liberal agenda?

About half of the bold-seeming $447 billion Obama jobs package is an extension and augmentation of an already-existing temporary payroll tax cut. At best, preserving the cut avoids the pain of its lapse. It does put more money in the pockets of workers and, at the margins, reduces the cost of hiring for employers. But a lot of the money will be saved, not spent, by strapped workers, and employers will hire based on market conditions, not a tiny boost from government.

Obama’s struggles with the economy are reinforcing the idea that government can’t solve problems, and that it can’t learn from its mistakes. Already dogged by the false promises of the first stimulus, Obama has resorted to a second round of dubious assurances. Upon the passage of the first stimulus bill, he touted the “shovel-ready” infrastructure projects that would create immediate jobs. When few of these jobs materialized, even Obama joked that there’s no such thing as shovel-ready. But … kind contribution to the Perry campaign, from our Manchurian President.

Rich Lowry is editor of the National Review and a syndicated columnist for King Features Syndicate.

Writer offers his apology

Dear Editor,
Well, I find myself having to apologize to the director of the Gonzales Chamber of Commerce. Based on the article printed in the Gonzales Cannon newspaper and e-mail from a city official, I blamed the chamber director for the loss of the “First Shot Relay.” But in the last few days, more information has come out that has convinced me that the chamber director was not at fault for the relay leaving Gonzales. It seems as though the City of Gonzales informed the organizers of the relay they would have to pay a fee for the city to handle the logistics of the relay in Gonzales. I commend those that made that decision; it’s just not right for the taxpayers to finance a for-profit event. So I apologize to those I may have offended.

Bill Sheppard
Gonzales

Thanks GISD for hand rails

Dear editor,
We would like to thank Dr. Kim Strozier, the GISD School Board and the maintenance men who fixed the very nice hand rails at the football stadium. Dr. Strozier told us last football season that there would be rails in place by this football season and she was true to her word. It is a pleasure to have an administrator who sticks to her guns. Gonzales is very fortunate in having a person such as Dr. Strozier leading our school district.

Jerry & Gayle Akers
Apache Fans
Gonzales

The Gonzales Cannon welcomes and encourages letters to the editor.
Call Debbie at 830-672-7100 to list your business in the Gonzales Cannon Business Directory, featuring home-grown businesses.

The Gonzales Cannon, Thursday, September 22, 2011, Page A7

'Gonzales Dog Adoptions' joins city's animal welfare efforts

By NIKKI MAXWELL
newseditor@gonzalescannon.com

There are hundreds of stray, abused and abandoned domestic animals rescued in Gonzales County each year, and now there is another group dedicated to the welfare of those animals.
"Friends of Gonzales Animal Shelter (FOGAS) is pleased to welcome the newly formed group 'Gonzales Dog Adoptions' (GDA) to the animal welfare effort in the Gonzales community," said Mary Anne MacLean, FOGAS spokesperson.

The organization is new, but it comes with some familiar faces who formerly worked tirelessly as FOGAS volunteers on behalf of Gonzales County's homeless dogs.

"Keiko and Lance McCormick have been very instrumental in FOGAS' dog care and adoption success for more than six years now," MacLean said. "We look forward to collaborating with Gonzales Dog Adoptions in the future."

She said FOGAS has encouraged the group members to pursue its own specific interests in dog welfare, including dog shelter management.

"Even though we are a newly formed corporation, we have all been working and volunteering at the Gonzales Dog Shelter for several years and plan to continue doing so as long as the City of Gonzales will allow us," Lance McCormick told the Gonzales City Council Tuesday during its meeting. "We strive to make the shelter an efficient and effective tool to help the City and Animal Control manage the stray dog population in Gonzales, while simultaneously providing the dogs with proper care and medical treatment. This offers them a chance at becoming part of someone's family, instead of being euthanized."

MacLean said the two groups share the same mission.

"FOGAS will continue our dog rescue efforts in addition to cat rescue, but we will utilize a foster network (non-shelter) model for dogs, which is more standard for rescue groups," MacLean said.

Effective immediately, Gonzales Dog Adoptions personnel will provide dog shelter management on behalf of FOGAS, which is currently the outsourcing contractor for shelter management to the City of Gonzales. GDA will also manage dog adoptions in Gonzales and can be reached at (830) 445-9279 (Keiko McCormick) or (830) 445-9811 (Lance McCormick).
Gonzales Dog Adoptions manages the City of Gonzales dog shelter, where nearly 100 dogs are waiting for a permanent home. Those interested in adopting a dog are asked to contact the shelter to make an appointment. Adoptable dogs are also available at Tractor Supply in Gonzales every Saturday from 10 a.m.-2 p.m., weather permitting.

"We're hoping that the birth of Gonzales Dog Adoptions will encourage other people who are passionate about animal welfare to form groups and help with this mission," continued MacLean. "There's plenty of need for at least another six groups in our county, and FOGAS will be happy to help with organizational and fundraising information."

According to McCormick, the decision to form another group came naturally to him and the other key volunteers.

"We want to continue operating the dog shelter under a new and separate contract with the City," McCormick said. "This will help maintain the consistency of care for the dogs and our service to the community."

He said a new contract will also help the shelter's eligibility for grant programs and allow volunteers to focus on the mission: finding homes for Gonzales' homeless dogs.

FOGAS will continue to provide free spay/neuter grants from the Texas Department of State Health Services and PetSmart Charities. Call FOGAS at (830) 857-1616 or visit the cat adoption facility at 505 Saint Francis Street in downtown Gonzales, M-F 3:30-5:30, to qualify.

Gonzales Dog Adoptions is in the process of becoming a 501(c)(3) non-profit corporation, and FOGAS is a 501(c)(3) charitable organization founded in 2004. Since its inception, FOGAS has spayed/neutered more than 2,600 shelter orphans and 2,500 pets owned by residents of our community. FOGAS has re-homed more than 4,000 cats and dogs. These spay/neuter surgeries were done with no cost to the owners, the City of Gonzales or Gonzales County. Due to these efforts, the City of Gonzales was able to achieve "no kill" shelter status three years ago.
The newly formed group "Gonzales Dog Adoptions" held its first adoption event outside Tractor Supply last Saturday. (Photos by Nikki Maxwell)

First Come and Take It tents ready to go up this weekend

Around the Chamber Office
Barbara Hand

The tents go up on Confederate Square on Sept. 24, then the food booths go in the food tent on Sept. 25, and you know it's Come & Take It Time again! Volunteers are needed at the St. John Street warehouse on Sunday at noon, then at 1 p.m. to put the booths in place under the food tent. We have T-Bone, chili and bean cook-off forms here at the office, along with all the other forms for events and booths.

The city will begin wiring tents, squares and craft booths and plumbing the restroom trailers on Monday, Sept. 26; the tables and chairs will be set up by the I.S.F. on Thursday, then things really get going Friday morning, in order to be ready to start the festival at 6 p.m., when Come & Take It opens.

A reception will be held on Thursday, Sept. 22 at 3 p.m. at the Gonzales Healthcare Systems Outpatient Lobby to meet and welcome the new full-time general surgeon, Kathleen Koerner, D.O., M.S.

The Gonzales Area Development Corporation (GADC) will hold a groundbreaking ceremony on Wednesday, Sept. 28 at 10 a.m. at the GADC Business Park for construction of its second spec building. It sits on 1.35 acres in Block 1, Lot 4 and will include 1,200 sq. ft. of office space with a 5,000 sq. ft. asphalt parking area and 5,000 sq. ft. of warehouse space. The park is at Church St., F.M. 794 and Delgado St. For information on purchasing this or other lots within the industrial park, please contact Lindsey Lyde at 830-857-5520.

The Executive and Finance committees will meet on Friday.

South Texas Tour Team Roping will be at the J. B. Wells park on Thursday, Bar J Team Roping will be there Friday and Saturday, and Sunday will be Wrap N 3 Barrel Racers.

Barbara Hand is the Executive Director of the Gonzales Chamber of Commerce.
Annual Relay planning underway

The planning committee is busy setting up committees and scheduling upcoming activities in preparation for the tenth anniversary of Relay For Life in Gonzales County. Because we have been diligent in continuing the fight against cancer for ten years, our theme is "TENacious about the Fight." Now is the time to register teams and begin the fundraising projects to benefit Relay For Life 2012.

On Saturday, September 24, team captains and team participants can come to Victoria College Gonzales Center for the Kick-off Party and register online. The first five teams to register between 10 a.m. and 1 p.m. that day will be awarded a $20 gift card to Wal-Mart to assist with fundraising projects. At 10:30 a.m., Hero of Hope Monica Flores will speak about her caregiver experience following the birth of her son Xavier, who was diagnosed in utero with a neuroblastoma. There will also be a video explaining what $5 can do in the fight against cancer, plus photography from the 2011 Relay, in addition to displays of materials for team development, sponsorship, and survivor registration. After 11 o'clock, VistaCare Hospice will provide a hot dog lunch. Face painting, cookie decorating, and music will round out the party fun.

A Come and Take It parade entry is being organized by Kristi Mercer and Joyce Gibson. They would appreciate some volunteers who can help with decorating.

On Saturday, October 22, a Pink Ribbon Brunch will be held at First Lutheran Church Fellowship Hall beginning at 10:30 a.m. Tickets for this event will be for sale beginning at the Kick-off. Teams can earn a share of the brunch profits by selling tickets; helping with set-up, decorating, and clean-up; serving the meal; or bringing a door prize or a silent auction item. Contact event chair Arline Rinehart (672-2077), event co-chair Patty Stewart (672-7581), or Team Recruitment/Development co-chairs Kristi Mercer (672-7581) and Carolyn Kocian (672-9557) about how you can help with the brunch. We look forward to working together during the next six and a half months as we raise funds for American Cancer Society Relay For Life.

Noon, Breakfast Lions team up

Some 35 members of the Gonzales Noon and Breakfast Lions Clubs, along with several visitors, held a joint meeting Monday, September 19th at noon on the third floor of the Randle-Rather Building. Lion Gene Kridler (right) of the Breakfast club served as program chairman and had as his guest speaker Carter Dennis of San Antonio. Dennis, who is affiliated with "Skaters for Public Skateparks," gave a PowerPoint presentation on the design, building and financing of skateparks throughout the state. He pointed out the advantage of having skateparks, which are safer than having kids skateboarding on public streets and sidewalks. His presentation included skateparks that have been built in Lockhart, Flatonia and numerous other communities. While no action was taken at this meeting, the idea for a local skatepark has been discussed by local groups interested in such a facility. Anyone who would like more information about skateparks can get it at or. (Courtesy Photo)

Page A8

Local FFA member national finalist in tractor program

Cannon News Services
newseditor@gonzalescannon.com

Chevron Lubricants, maker of the Delo brand of technologically advanced engine oils, lubricants and coolants, announced the finalists for its 2011 Delo Tractor Restoration Competition (www.DeloTractorRestorationCompetition.com). The event will bring the nation's top teen tractor restoration specialists to Indianapolis during the 84th National FFA (Future Farmers of America) Convention to compete for the much-sought-after national title. Projects will be presented on Oct. 19-20, with the champion crowned on the evening of Oct. 20.
SAN RAMON, Calif. — Gonzales FFA member Kyle Day has received word that he is among 12 finalists in the National Delo Tractor Restoration Competition for 2011, sponsored by Chevron Lubricants. Kyle submitted his workbook, pictures and video to document all his work. He had one year to select and restore a tractor. All submitted work was reviewed by expert judges, who narrowed the national entries to the final 12.

The other finalists and projects for the 2011 Delo Tractor Restoration Competition include: Buckeye FFA: Medina, Ohio - 1954 Allis-Chalmers WD-45; Central City FFA: Central City, Nebraska - 1943 Model A John Deere.

Gonzales County Deeds

Gonzales County Courthouse Deeds

August 1-31

Needham, Arva Nell to Western Energy Group, LLC, o/l, 176.378 Acres, Orig. Outer Town Gonzales. Tinsley, John Carter to Lucas Energy, Inc., o/l, 45.60 Acres, Sarah Hendricks Svy, A-261. Tinsley, Roberta Ann to Lucas Energy, Inc., o/l, 45.60 Acres, Sarah Hendricks Svy, A-261. McGlothing, Bill D. and McGlothing, Olga Huebner to McGlothing, Bill D. (trustee), McGlothing, Olga H. (trustee), McGlothing Living Trust, Bill and McGlothing Living Trust, Olga, w/d, 72.50 Acres, William Erskin & John Erskin Svys, Gonzales & Wilson Counties.

September 1-30

Kittoe, Erna to EOG Resources, Inc., o/l, 146.812 Acres, Jean Humphrey Svy, A-266. Wolter, Lawrence W. to EOG Resources, Inc., o/l, 146.812 Acres, Jean Humphrey Svy, A-266. White, Debra J. to O'Neal, A.C., O'Neal, Mollie B. and O'Neal, Molly (aka) to Pena, Edward, w/d, Lt. 3, Blk. 4, Glover's Addn, Smiley. Cox, Betty J. Barnick to Ford, Don and Ford, Nancy, o/l, 35.84 Acres, Andrew Winters Svy, A-471. Lazo, Mariano and Lazo, Margaret to Hernandez, Barbara, w/d, Lts. 13-14, Blk. 2, Tejada Subdvn, Nixon. Perryman, Saralynne Stockton (Extrx & Trustee), Stockton, Frank M. (estate) and Stockton Trust to Petras, Dwayne L. and Petras, Kaylin A., w/d, Lt. 4, Blk. 4, Titcomb Addn, Gonzales.
Hermes, Charles Leo and Hermes, Betty Anders to Hermes Ranch, LP, w/d, Mineral Int. in 382.742 Acres, WM Hill & B D McClure Svys & an undiv. int. in 225.00 Acres, John McCoy Svy. & Property in Lavaca & DeWitt Cos. Lopez, Roberto and Lopez, Dolores S. to Litke, Paul J. and Litke, Dianne L., w/d, 37.19 Acres, John Baker A-116 & Abraham Dillard A-193 Svys. Havemann, Douglas and Havemann, Melissa to Wexco Resources, LLC, o/l, 20.06 Acres, Juan Jose Tejada Svy, Gonzales & Wilson Counties. Caraway, Eddie R. and Caraway, Mary B. to Ritchie, Wilson O. and Ritchie, Diane, w/d, 1.70 Acres, Wm. A Farris & Isom J. Good Svys. Menea, Victoria to Shannon, William E., and Shannon, Linda F., w/d, 0.693 of an acre, Peter Winn Svy, A-464. Wiley, Howard Barry, Wiley Barry (aka) and Wiley, Carol Ann to Ford, Don and Ford, Nancy, o/l, 1.60 Acres (Ptr. Lt. 13, Tier 1, Orig. Outer Town Gonzales & Lts 2, 5 & 8, Eastwood Terrace, Gonzales). Brown, Tom Willis, Brown, John Willis, Brown, Thurman, Brown, Lawrence, Jones, Una Mae Brown and Brown, Louis to Brown, Andrew, w/d, Lt. 5, blk. 77, Kelley Addn, Waelder. Grauke, Melvin and Grauke, Linda M. to KP Enterprises, LLC, w/d, 1.00 Acre (Pt. Lt 6, RG 7) Orig. Outer Town Gonzales. Seguin First Home, L.P. to Wyman, Dean and Wyman, Aleathea, w/d, Lts. 5-6, Blk. 8, Badger’s Addn, Gonzales. Wolter, Robert C. to EOG Resources, Inc. o/l, 146.812 Acres, Jean Humphrey A-266 & James Jones A-301 Svys. Hutchins, Emily to EOG Resources, Inc. o/l, 146.812 Acres, Jean Humphrey A-266 & James Jones A-301 Svys. Pierce, Cheryl to EOG Resources, Inc. o/l, 146.812 Acres, Jean Humphrey A-266 & James Jones A-301 Svys. True, Billy Roy to EOG Resources, Inc. o/l, 146.812 Acres, Jean Humphrey A-266 & James Jones A-301 Svys. McReynolds, Charlotte to EOG Resources, Inc. o/l, 146.812 Acres, Jean Humphrey A-266 & James Jones A-301 Svys. Gerron, Joann H. to EOG Resources, Inc. o/l, 146.812 Acres, Jean Humphrey A-266 & James Jones A-301 Svys. 
Borchers, Alda Loraine to EOG Resources, Inc., o/l, 146.812 Acres, Jean Humphrey A-266 & James Jones A-301 Svys. Moye, Mildred to Eagle Ford Hunter Resources, o/l, 16.52 Acres, Turner Barnes Svy, A-112. Frazier Jr., Stewart F. and Frazier, Linda Kridler to Ford, Don and Ford, Nancy, o/l, 42.69 Acres, Andrew Winters Svy, A-471. Frazier, Stewart F. and Frazier, Barbara Barnick to Ford, Don and Ford, Nancy, o/l, 48.898 Acres, Andrew Winters A-471 & Andrew Zumwalt A-503 Svys. Connolly Sr., Thomas Frank and Connolly, Nicki A. to Diamond M. Drilling & Exploration Co., o/l, 29.72 Acres, John Slater A-435 & Samuel McCoy A-340 Svys. Mincey, Allen David to Diamond M. Drilling & Exploration Co., o/l, 129.199 Acres, Archibald Gibson Svy, A-237. Harborth, Marie, Wallace, Jennifer and Wallace, Stephanie to Diamond M. Drilling & Exploration Co., o/l, 129.199 Acres, Archibald Gibson Svy, A-237. Keck, Donald R. to Ford, Don and Ford, Nancy, o/l, 24.30 Acres, Orig. Outer Town Gonzales. Keck, James Scott and Keck, Joann to Ford, Don and Ford, Nancy, o/l, 24.30 Acres, Orig. Outer Town Gonzales. Heath, Shirley Keck and Heath, Robert P. to Ford, Don and Ford, Nancy, o/l, 24.30 Acres, Orig. Outer Town Gonzales. Collins, Adele Keck to Ford, Don and Ford, Nancy, o/l, 24.30 Acres, Orig. Outer Town Gonzales. Keck, Elizabeth Ann (Indiv & Atty-in-Fact) and Keck, Morris C. to Ford, Don and Ford, Nancy, o/l, 24.30 Acres, Orig. Outer Town Gonzales. Bohlae, Frances Keck to Ford, Don and Ford, Nancy, o/l, 24.30 Acres, Orig. Outer Town Gonzales. Allert, Cecil Keck to Ford, Don and Ford, Nancy, o/l, 24.30 Acres, Orig. Outer Town Gonzales.

Kyle will have to travel to the National FFA Convention in Indianapolis, Indiana, October 19-21. At the convention, he will make a presentation of the work he has done on his 1948 International Harvester Farmall M, and be interviewed by five restoration specialists. His ag instructor at Gonzales High School is Robert Washington.
We believe Kyle to be the first Gonzales FFA member to be a finalist in the national competition. Among the other finalist projects were a 1946 Farmall and a 1954 Farmall Super C entered by Sequim FFA of Sequim, Washington. The champion will receive $5,000, with the reserve champion taking home $3,000 and third place $1,500.

Keck, Carolyn to Ford, Don and Ford, Nancy, o/l, 24.30 Acres, Orig. Outer Town Gonzales. Baxter, Janice to Ford, Don and Ford, Nancy, o/l, 24.30 Acres, Orig. Outer Town Gonzales. Hesler, Marie to Ford, Don and Ford, Nancy, o/l, 24.30 Acres, Orig. Outer Town Gonzales. Keck, Ralph E. to Ford, Don and Ford, Nancy, o/l, 24.30 Acres, Orig. Outer Town Gonzales. Keck, John R. to Ford, Don and Ford, Nancy, o/l, 24.30 Acres, Orig. Outer Town Gonzales. Hochstetler, Cynthia to Ford, Don and Ford, Nancy, o/l, 24.30 Acres, Orig. Outer Town Gonzales. Texas Gonzales & Northern Railway Company to Halliburton Energy Services, Inc., w/d, 20.00 Acres, T S Lee Svy, A-314.

Assumed Names

Dolezal, Michael - Diamond Gymnastics, Gonzales. Malaer, Vernon R. - M-M Pilot Cars/Escorts, Gonzales. Ponce-Trevizo, Angel - Country Corner Grill, Gonzales. Cooper, Lynette and Cooper, Nicole - Verde Properties, Gonzales. Gonzales, Jeffery C. - Jeff's Auto Paint & More, Gonzales. Johnson, Margo - New Beginnings International House of Worship, Gonzales. Rogers, Ida E. - Sam Rogers Crane & Rigging, Nixon.

Marriage Licenses

Lopez, Alduvi and Ruiz, Yuridia Elizabeth. Ramirez Jr., Joe A. and Alcantar, Stephanie. Knepp, Hayden C. and Johnson, Kaylene. Barhight, Jesse E. and Dominguez, Teran D. Elstner, Blake M. and Durrett, Meggin.

National finalist: Gonzales FFA member Kyle Day has received word that he is among 12 finalists in the National Delo Tractor Restoration Competition for 2011. (Courtesy Photo)

Area Livestock Reports

The Gonzales Livestock Market Report for Saturday, September 17, 2011 had on hand: 2,919.
Compared to our last sale: Calves and yearlings sold steady. Packer cows sold steady to $2-$4 lower.

Stocker-feeder steers: medium and large frame No. 1: 150-300 lbs., $155-$165; 300-400 lbs., $141-$147; 400-500 lbs., $128-$138; 500-600 lbs., $122-$126; 600-700 lbs., $117-$119; 700-800 lbs., $111-$115. Bull yearlings: 700-900 lbs., $92-$111.

Stocker-feeder heifers: medium and large frame No. 1: 150-300 lbs., $131-$155; 300-400 lbs., $123-$126; 400-500 lbs., $116-$121; 500-600 lbs., $114-$115; 600-700 lbs., $111-$113.

Packer cows: good lean utility and commercial, $45-$53; cutters, $56-$67; canners, $39-$45; low yielding fat cows, $57-$65. Packer bulls: yield grade 1 & 2, good heavy bulls, $69-$75; light weights and medium quality bulls, $66-$67. Stocker cows: $525-$850. Pairs: $775-$850. Thank you for your business!! View our sale live at cattleusa.com!

Gonzales

Heifers: 200-300 lbs, $94 to $104 to $153; 300-400 lbs, $102 to $112 to $153; 400-500 lbs, $100 to $110 to $140; 500-600 lbs, $97 to $107 to $123; 600-700 lbs, $93 to $103 to $115; 700-800 lbs, $90 to $100 to $107. Slaughter cows: $20 to $41 to $58; slaughter bulls: $50 to $65 to $74; stocker cows: $300 to $714; pairs, $710-$1,020.

Hallettsville

The Hallettsville Livestock Commission Co., Inc. had on hand on September 13, 2011: 4,753; week ago, 2,760; year ago, 1,555. Better quality classes of calves and yearlings sold mostly steady. Lightweight calves and plainer quality calves continue weaker. Packer cows and bulls sold $2-$3 lower on 1,378 total hd.

Packer cows: individuals, higher dressing utility & cutter cows, $50-$64; lower dressing utility & cutter cows, $38-$50; lightweight canner cows, $25-$38. Packer bulls: heavyweight bulls, $67-$71; utility & cutter bulls, $61-$67; lightweight canner bulls, $54-$61. Stocker and feeder calves and yearlings: No.
1 steer & bull calves: under 200 lbs, $118-$146; 200-300 lbs, $115-$146; 300-400 lbs, $114-$146; 400-500 lbs, $113-$143; 500-600 lbs, $112-$134; 600-700 lbs, $108-$122; 700-800 lbs, $100-$114. No. 1 heifer calves: under 200 lbs, $115-$135; 200-300 lbs, $112-$132; 300-400 lbs, $110-$122; 400-500 lbs, $108-$119; 500-600 lbs, $105-$117; 600-700 lbs, $100-$110; 700-800 lbs, $90-$100. No. 2 & 3 steer & bull calves: 200-300 lbs, $87-$118; 300-400 lbs, $85-$116; 400-500 lbs, $84-$115; 500-600 lbs, $82-$113; 600-700 lbs, $80-$108. No. 2 & 3 heifer calves: 200-300 lbs, $84-$112; 300-400 lbs, $83-$110; 400-500 lbs, $80-$107; 500-600 lbs, $78-$104; 600-700 lbs, $76-$98. Plain quality, $40-$80. If we can help with marketing your livestock, please call 361-798-4336.

Cuero

Cuero Livestock Market Report on September 16, 2011 had 3,175 head, including 727 cows and 44 bulls. Bulls were steady. Cows $3 to $5 lower. Did not finish cows in time for a market report. Packer bulls: heavy weights, $60.50-$65.50; lower grades, $50-$60.50; canners, $32.50-$50. Packer cows: boning cows, $51-$61; cutters mainly, $35-$47; few, $47.50-$50.50; low yielding cutters, $30-$42.50; canners, $15-$36; fats, $56-$65. Dry cows, $34.50-$57; young, $22-$69. Cow and calf pairs, $495-$895.

J. B. Wells Upcoming Events: September 22, South Texas Tour Team Roping; September 23-24, Bar J Team Roping; September 25, Wrap N 3 Barrel Racers.

The Nixon Livestock Commission Inc. report had on hand, September 19, 2011: 2,366.
Steers: 200-300 lbs, $117 to $127 to $165; 300-400 lbs, $118 to $128 to $158; 400-500 lbs, $109 to $119 to $153; 500-600 lbs, $107 to $117 to $130; 600-700 lbs, $105 to $115 to $124; 700-800 lbs, $100 to $110 to $120.

Nixon

The Gonzales Cannon, Page A9

Community Calendar

E-mail your local information to: newseditor@gonzalescannon.com

New surgeon greet
Gonzales Healthcare Systems invites the public to meet their new full-time general surgeon, Dr. Kathleen Koerner, in the lobby at the facility starting at 3 p.m. Thursday, Sept. 22.

Childbirth class
Gonzales Healthcare Systems has scheduled its next childbirth class for Sept. 22 at 6:30 p.m. in the hospital cafeteria. The class is open to all expectant moms and free of charge. During class we will discuss signs and symptoms of labor, the labor process, pain management methods, care of the newborn, infant CPR and breastfeeding. The class will be taught by Rene Griffin, RN, OB Director. Moms are encouraged to bring a support person with them. To RSVP, call 672-7581 ext. 727 and ask for Valerie.

Advocate training
... volunteers to attend advocate training. The training is mandatory, but flexible times are available. Individuals that complete the training will become valuable advocates to our community members in need of crisis intervention and services. The training and materials are free. Volunteers are also needed for the "Cranny," the thrift store operated by the shelter. Please call our office at (830) 372-2780 or our hotline number at 800-834-2033 for more information.

Leesville Fair
The O'Neal Brothers Band will headline the entertainment at the Leesville Country Fair, the annual fundraiser for the Leesville Cemetery and the Leesville scholarship fund. Festivities begin at 10 a.m. and include an auction, silent auction and raffle drawing, a country store and live music, great food, a flag presentation, as well as lots of kid-friendly activities.

Pets in the Park Day
Pets-In-The-Park is scheduled Saturday, Oct. 1 at Starcke Park in Seguin from 10 a.m.-4 p.m. There are Chihuahua, Dachshund and open dog races starting at 1 p.m., plus a dog show and llama rides. The Classic Car Show will start at 11 a.m., the costume contest at 10:45 a.m., the talent contest at noon and the ugliest pet at 2:30 p.m. There will also be vendor booths, a shot clinic and a microchipping clinic, along with games and animals of all kinds. Free admission, and restrained, well-mannered pets are welcome.

Donations needed
The Learning Center is seeking donations of caps and gowns. If you have donations, please contact Ann Gaines Rodriguez at the Gonzales Learning and Career Center, PO Box 116, 1135 St. Paul St., Gonzales, TX 78629, 830-672-8291 / 830-672-1076 fax, or e-mail glcc@gvec.net.

Food donations
Gonzales Christian Assistance Ministries is out of food, as there were over 500 people who came in for food last month. It's shocking that that many people in our town can be hungry, and even more shocking that the food bank cannot accommodate them. The Gonzales County 4-H Third Annual Food Drive will be held the first week in October to coincide with National 4-H Week. Last year over 1,000 items were given to the food drive, and they were low on food at that time, so we can make a difference. If some groups or individuals take food to GCAM, that will carry them over until the county drive. 4-H members and GISD grades PK-6 will be getting a message of this type.

Chisholm Trail Ride
The Old Chisholm's Fall Trail Ride is scheduled Sept. 30-Oct. 2. Sign-up starts at the Friar Thomas Ranch near Cuero on Friar Road off Highway 87 at 6 p.m. Sept. 30. Registration is $30 for ages 16 and up, $15 for ages 5-15 and free for children ages 4 and under. Price includes six meals and drinks during the ride. There will be a primitive camp and water available for horses. For details, contact Rip Gibson at 361-277-2671, Lupe Briseno at 361-652-2489 or Jerry McWhorter at 210-241-2131.

Dementia-Alzheimer Support
This group meets the first Wednesday of every month at 1 p.m. in the Narthex of the First United Methodist Church. This meeting is free and open to the public. Shirley Goss, Wesley Nurse, is the facilitator. It offers health-wellness education and supportive programs related to dementia. You are welcome to attend. For more information, call 672-1031.

Parkinson Support Group
This group meets the second Thursday of every month at 10 a.m. in the Narthex of the First United Methodist Church. This meeting is free and open to the public and is facilitated by Wesley Nurse Shirley Goss. Educational and supportive programs are offered. For more information, call 672-1031.

GYC Barbecue
The Gonzales Youth Center will host a fund-raising Bar-B-Que Sunday, Oct. 16, from 11 a.m.-2 p.m. at the Gonzales Jr. High Cafeteria. The plates will consist of delicious beef brisket, potato salad, beans, pickles, onions, bread and dessert for only $7.00. You may dine in or go through the drive-through pickup on St. Louis Street. Ken Hedrick will again head up the fantastic cook team. Tickets are available from any Youth Center member or can be purchased at the event. Any briskets left after 1:00 p.m. will be sold for $35 and halves for $20. Please plan to eat with us Sunday, October 16 after church and help the Youth Center continue serving our kids. If you would like to help, need tickets or need more information, call Pat Anders at 857-3483.

Flames dance clinic
Registration for the 2011 Apache Flames Dance Clinic runs through Sept. 22 from 4:30-6 p.m. Monday-Thursday at the Gonzales Elementary Gym. The clinic is open to all pre-K-6th grade students. The cost is $25 and includes a t-shirt and daily snacks. The future Flames will perform at the home game on Oct. 7.

Yoakum Clean-up
The City of Yoakum, along with area organizations, will be coordinating a Cleanup Day on Saturday, October 1, 2011. Local businesses and citizens are asked to contact City Hall at 293-6321 if they are interested in disposing of ANY items. Regular household garbage, paint and hazardous waste will NOT be accepted. Calls will be accepted until Friday, September 23rd. No items will be picked up outside the city limits of Yoakum. If any individual or organization is interested in volunteering their time or equipment, please contact Gena or Theresa at City Hall. If there are any questions or concerns, please do not hesitate to contact City Hall at 293-6321. Community involvement is needed for this to be a success!

Donation sought
The Heights of Gonzales Activity Department is looking for a fridge/freezer to hold supplies for event refreshments. If you would like to donate or know of one that is reasonably priced, contact Gwen Koncaba, 830-672-4530.

Community Bingo
The Heights of Gonzales is having Community Bingo, Friday, September 23 at 2:30 p.m. Free to play. Bingo winners will receive $1.00 for each Bingo and a $50.00 split for Blackout. Must be 55 or older or a resident of a care facility to win blackouts. Hosted by Hospice of South Texas, Gonzales Memorial Healthcare Systems and The Heights of Gonzales.

VFW Social
The Gonzales VFW Post 4817 will hold a social on Tuesday, Sept. 27 for all members and volunteers. A meal will be served at 6:30 p.m. Everyone is encouraged to bring an old photo of themselves for members to try and identify.

Shiner Catholic School Fall Festival
The annual Shiner Catholic School Fall Festival is scheduled Oct. 2 at the KC Hall (formerly the American Legion Hall) in Shiner. A barbecue dinner with trimmings at $7.50 a plate will be served from 11 a.m.-1 p.m., with drive-thru service available starting at 10:30 a.m. A live auction is scheduled for noon-4 p.m. Cake walk, games, a moon walk and concessions start at 11 a.m. The St. Paul Battle of the Classes will take place after the live auction.

Toy drive
Gonzales High School Interact Club is sponsoring a toy drive to benefit the Bastrop fire victims. All new and gently used toys may be taken to the high school front office. The drive will last until Friday, Sept. 23. All donations are appreciated!

Holy Spirit Night
The Christian Center of Living Water will host Holy Spirit Night on Friday, Sept. 23, from 7-8 p.m. at the Christian Worship Center located at 1012 Hwy 90 E in Waelder. Pastor Chris Porter will be speaking on the topic of baptism of the Holy Spirit.

Apache Booster Club
Apache Booster Club would like to remind everyone that all fall sports are underway. Put on your spirit shirts. Decorate your homes and businesses to support your team! Mark your calendar for the community pep rally to be held Wednesday, Oct. 5 at 7:30 p.m. at Apache Field.

Obituaries

...ness to God were central themes in her life, whether in her official capacity or her personal life. Pam is preceded in death by her father, Pat Kilpatrick. She is survived by her mother, Janie Kilpatrick; her son, Aubrey Dunham, his wife Stephanie and daughter Sabina of New Orleans, Louisiana; her ...
The booster club will be selling raffle tickets at each home game for the 50-50 drawing & a football signed by the 2011 Gonzales Apaches. They will also sell raffle tickets for two Gonzales Apaches Benches. That drawing will be held at the last home football game, Oct. 28th. If you’re a man 50 years or older, Medicare covers tests to help find Prostate Cancer early when treatment works best. Medicare covers a digital rectal exam and prostate specific antigen (PSA) test once every twelve months for all men with Medicare over age 50. Coverage for this exam begins the day after your 50th birthday. See your local healthcare provider for more information. The Job Corps is currently enrolling applicants aged 16-24 in over 20 career fields. If you need a GED, High School Diploma and a Driver License give us a call. College training is available as well. Get started today, call 512-665-7327. September 8, 2011. Cost of study materials is $40.00 and the fee for the class is $10.00 For more information, contact the Extension Office at 830-672-8531. Apache Boosters Rev. Pamela Kilpatrick, 1946-2011 Reverend Pamela Kilpatrick, 65, of Brownsville TX, passed away on September 17, 2011 in Brownsville from respiratory failure. Pam was born in Fort Worth on July 10, 1946, the only child of Leweir Lovell “Pat” and Ila Virginia “Janie” Kilpatrick. As a child, Pam moved with her parents to Japan and later to Montgomery, Alabama where she graduated high school. As a teenager, she was an accomplished horse rider, winning numerous awards in barrel racing. She was married to an Air Force officer, Robert L. Dunham Jr., on December 28, 1966 in San Antonio, eventually having three sons. She went on to earn her Bachelor of Arts degree in 1986 from Angelo State University and a Masters in Divinity from Southern Methodist University in 1989. Pam answered her calling to preach in 1983, beginning as an Associate Pastor at Sierra Vista United Methodist Church in San Angelo. 
In 1984 she began her first charge as head Pastor at First United Methodist Church in Robert Lee, Texas. In 1990 she continued to serve as Associate Pastor to Reverend Robert Hall at First United Methodist Church of Victoria. Pam was ordained an Elder in 1992. She then served as Associate Pastor at Coker United Methodist Church in San Antonio 1993. Pam continued her work as Pastor of Crestview United Methodist Church in Austin in 1996, Pastor of First United Methodist Church of Mason in 2003, Pastor of First United Methodist Church of Gonzales in 2006, and Pastor of First United Methodist Church of Brownsville in 2009. During these 28 years of service to God, she touched many lives and was loved by many who knew her as “Pastor Pam”. She was also an active minister in the Emmaus community at Mt. Wesley near Kerrville, and helped organize several trips with church members to visit Holy sites in Israel. Service to others and close- KILPATRICK Shiner Catholic School Festival Protstate Exams Benefit toy drive Job Corps Pesticide Training Holy Spirit Night Childbirth classes son, Tobin Dunham, his wife Kelly and stepdaughter Sara of Bandera; her son, Hardin Dunham, his wife Angela and son Raiden of San Angelo; and her cousins, Bill Baker and wife Jan of Houston, and Susan Langford and husband Jerry of San Angelo. Visitation was held at Johnson’s Funeral Home from 5-7 p.m. Wednesday, September 21, at 435 W. Beauregard in San Angelo. Funeral service will be held at Sierra Vista United Methodist Church on Thursday, Sept. 22, 2011 at 11:00 a.m., at 4522 College Hills Blvd. Burial will follow at Fairmount Cemetery. Please contact Johnson’s Funeral Home for more information at (325) 655-3113. Family and friends may sign the online register book at. Ruby Elizabeth Null, 80, of Gonzales left us on September 16, 2011. She was born August 24, 1931, in Adamsville, Texas. Ruby was a member of Memorial Heights Baptist Church. 
She loved visits with her Children, working with ceramics and keeping up with politics. Ruby is survived by seven children: Larry Pierce of San Diego, Pat Pakebush of Gonzales, Patsy Null of Seguin, Ralph Null Jr. of Gonzales, Jennie Pierce of Yoakum, Susan Hurst of Seguin, and Sandy Wilke of Gonzales. She also is survived by eleven grandchildren. She leaves behind four sister-in-laws: Bertha Null, Dorothy Gossett and Lottie Null of Gonzales also Hattie Null of Houston, Texas. A viewing was held Monday, September 19, 2011 from 6 p.m. till 8 p.m. at the Buffington Funeral Home Chapel. Graveside Services were held the following morning at 10 a.m. at the Gonzales Memorial Park Cemetery, in Gonzales, Texas. The family request that donations be made to the American Cancer Society or Memorial Heights Baptist Church. Arrangements under the care and direction of Buffington Funeral Home, Gonzales, TX 424 St. Peter Gonzales, TX 78629, 830-672-3322. Shelter Volunteers NULL The Guadalupe Valley Family Violence Shelter is looking for Page A10 First shot Cook-oFF 2011 First Shot Cook-off Winners The gonzales Cannon Thursday, sePTember 22, 2011 Brisket First Place - Hampton Pratka – Bottle Cap Cookers Second Place - Tinker Brown – Cheapside BBQ Third Place - Paul Panus – PPI BBQ Fourth Place - Ernest Servantes – Burnt Bean Company Fifth Place - Darwin Hoel – Giant BBQ Sixth Place - Tim Balch – Up in Smoke Seventh Place - Alton Mosecke – Hole Master Eighth Place - Jerry Killen – Denton Creek Kookers Nineth Place - Monte Brown – Trash Can Cookers Tenth Place - Jerry Fogle – Family Traditions Ribs First Place - Jason Bray – Verti-Bray BBQ Second Place - Tim Balch – Up in Smoke Third Place - Sequoya Janacek – Just Twisted Fourth Place - Darwin Hoel – Giant BBQ Fifth Place - David Fortune – Bar Ditch BBQ Sixth Place - Kevin Nollkamper – Steady Cooking Seventh Place - Ernest Servantes – Burnt Bean Company Eighth Place - Alvin Seiler – Barbarossa Trough Nineth Place - Tinker Brown – 
Cheapside BBQ
Tenth Place - Jerry Rhodes – Sauced Up & Smokin

Chicken
First Place - Hampton Pratka – Bottle Cap Cookers
Second Place - Tim Balch – Up in Smoke
Third Place - Mike Hafur – LCB Cookers
Fourth Place - Mike Edge – Bare Bones Cookers
Fifth Place - James Jones – Medicine Man BBQ
Sixth Place - Brent Allen – Buzzard Bar Cooking Team
Seventh Place - Gary Mobbs – Lone Star Bank Cookers
Eighth Place - Jerry Rhodes – Sauced Up & Smokin
Ninth Place - Shawn Wilke – Rodeo Q Cookers
Tenth Place - Johnny Bosquez – Titties & Beer

Beans
First Place - Janice Whidden – JOC Tailgateers
Second Place - Wade Miller
Third Place - Hubert Mills – Wingnut Cookers
Fourth Place - Brent Allen – Buzzard Bar Cooking Team
Fifth Place - Phil Baker – S A Smokers
Sixth Place - Jerry Fogle – Family Traditions
Seventh Place - Matt Wyant – Sauced Up & Smokin Too
Eighth Place - Kevin Nollkamper – Steady Cooking
Ninth Place - Cathy Perez
Tenth Place - Hampton Pratka – Bottle Cap Cookers

Overall Champion - Tim Balch – Up in Smoke
Reserve Champion - Hampton Pratka – Bottle Cap Cookers

Photos by Nikki Maxwell and Lorrell Wright

Saturday Chili Winners
First Place - (Head Cook) Lanny Thomas
Second Place - (Head Cook) Nadine Karnei
Third Place - (Head Cook) Vickey Harvey
Fourth Place - (Head Cook) Joe Trigo
Fifth Place - (Head Cook) Donna Foley
Sixth Place - (Head Cook) Billy Reiter
Seventh Place - (Head Cook) Carolann Gibson
Eighth Place - (Head Cook) Pat Irvine King
Ninth Place - (Head Cook) Sandy Watson
Tenth Place - (Head Cook) Joe Carrizales

Saturday Showmanship Winners
First Place - Bottle Cap Cookers
Second Place - Big Kahunas
Third Place - Swamp Gas Giggles

Sunday Chili Winners
First Place - (Head Cook) Jennifer Cyrus
Second Place - (Head Cook) Billy Reiter
Third Place - (Head Cook) Donna Foley
Fourth Place - (Head Cook) Ronald Rerich
Fifth Place - (Head Cook) Dianna Hoy
Sixth Place - (Head Cook) Nancy Netardus
Seventh Place - (Head Cook) Bayette Bearden
Eighth Place - (Head Cook) Dorothy Spishock
Ninth Place - (Head Cook) Carolann Gibson
Tenth Place - (Head Cook) Margie Shoemaker

Sunday Showmanship Winners
First Place - Swamp Gas Giggles
Second Place - Pirate Chili
Third Place - Crunch's Bohunk

Soechting Motors, Inc. "In Business over 50 years." Authorized Saturn sales & service; pre-owned vehicles, daily rentals, repair, body shop. 603 E. Kingsbury Street, Seguin, TX, 830-303-4546.

Keep up with all the local news at our web site: gonzalescannon.com

Region
The Gonzales Cannon, Thursday, September 22, 2011, Section B

Officials report Bastrop fire's cause 'most likely' electrical
Cannon News Services
newseditor@gonzalescannon.com

COLLEGE STATION — The Texas Forest Service announced Tuesday it has completed the investigation of the Bastrop County Complex Fire: it has been determined that the cause of the fire was most likely electrical in nature. Forest Service officials did not elaborate on the statement and were not immediately available for comment.
The fire began on Labor Day weekend and, whipped by strong winds as a result of a mild cold front and the back side of Tropical Storm Lee farther to the east, raged out of control for much of the following week, eventually blackening more than 34,000 acres and razing more than 1,500 homes. Firefighters from throughout the region were brought in to battle the fire, which at one point included several smaller fires that eventually merged into one. Two persons were confirmed killed by the fire. State officials say the Bastrop fire was the worst in state history, and damage estimates are upwards of $250 million. Many of the Bastrop and Smithville firefighters who battled the blaze, in fact, lost their own homes while trying to save those of others.
Cooler temperatures in the past week, along with up to a half-inch of rain in many areas, have eased the strain on Texas firefighters, but officials said the fire danger remains high across the state.

Thanking firefighters: Allen Fink of the Gonzales County Farm Bureau presents donation checks to Waelder VFD Chief Nino Reyes and Ottine VFD Chief John Everett during Tuesday's annual Gonzales County Farm Bureau convention. (Photo by Dave Mundy)

Luling breaks ground on new animal shelter: Allen Guisinger, president of LAWS (Luling Animal Welfare Society), speaks during Tuesday's ground-breaking ceremonies for the city's new animal shelter. LAWS led the fund-raising partnership with the city to build the new shelter. (Courtesy Photo)

Legislature made key strides, Kuempel says
By DAVE MUNDY
manager@gonzalescannon.com

Gonzales County Farm Bureau members got a chance to welcome, and say goodbye to, State Rep. John Kuempel during Tuesday's annual county convention meeting at the First Lutheran Church here.
Kuempel, who won a special election to replace his father after Edmund Kuempel passed away just two days after winning re-election to his 44th District legislative seat, will be turning over representation of Gonzales County to Rep. Tim Kleinschmidt of the 17th District following re-alignment prior to the 2012 elections. Kuempel nonetheless vowed Tuesday to remain a good friend of Gonzales County and the Farm Bureau, which represents more than 1,000 members in the county and 455,000 members statewide.
"We passed some very important legislation last year," Kuempel told the convention gathering. He cited legislation protecting landowners' rights over water resources, protection from eminent-domain land seizures and bills designed to "do away with the Trans-Texas Corridor" as prime accomplishments and legislation supported by the Farm Bureau. "We have a great relationship with the Farm Bureau," he said. "I appreciate y'all's involvement in the legislative process." The elder Kuempel's district included Gonzales County for most of his 30-plus years in office.
Kuempel said later the next Legislature is primed to deal with many of the same problems of the most recent one, an anticipated budget deficit and school funding foremost among the issues. "We deferred about $2 billion in school funding, so that's going to come back," he said. "Before we ever get started, we're going to be looking at $14 billion. I hate to say it, but that 'T' word (taxes) may come up."
During Tuesday's convention, local Bureau President Allen Fink presented donation checks to the Waelder and Ottine volunteer fire departments, citing their tireless work in helping to protect homes and property of rural residents throughout this summer of drought, especially during the recent outbreak of wildfires. Attendees also had a chance to meet with Seguin Municipal Judge Kevin Kolb, who is campaigning for the Republican nomination for the 25th Judicial District seat being vacated by retiring Judge Dwight Peschel.
County members also approved several Farm Bureau legislative policy proposals, including one which supports Country of Origin Labeling for cattle imported from Mexico and Canada, no matter how long the cattle have been in the U.S. Members also approved resolutions opposing over-regulation by the federal Environmental Protection Agency in support of endangered species; opposing property tax exemptions for land bought for water production purposes; and calling for the elimination of benefits for members of Congress who resign or are convicted of crimes while in office.

WISD trustees add three more teachers
By CEDRIC IGLEHART
region@gonzalescannon.com

WAELDER — Due to an increase in elementary-age students, the Waelder Independent School District's Board of Trustees decided to bring more educators into the fold. The board unanimously agreed to the hiring of three new teachers: Melinda McCormick, kindergarten; Jessica Helmer, 2nd grade; and Julie Shaw, 4th grade.
The motion was made by Chris Mindieta and seconded by Delores Martinez.
Faith Pope told the board that by next month the documentation to be filed with the Texas Education Agency (TEA) in regard to the district's academic rating will be done. TEA, which gave the district a rating of academically unacceptable, requires all such districts to submit a plan outlining the steps they plan to take in order to reach a level of academic acceptability. Waelder ISD has already begun implementation of a district-wide tutorial program. Over 200 students attended the first session on Tuesday, and the five who failed to show will face consequences that could include lunch-hour sessions or Saturday school attendance.
In another agenda item, the board announced a public hearing to be held on Oct. 3 at 5:45 p.m. The meeting, which will explain the district's School Improvement Program, will be immediately followed by a special called meeting.
In other business, the board:
* Announced its intention to attend the Texas Association of School Administrators/Texas Association of School Boards Conference in Austin, which will run from Sept. 30-Oct. 2.
* Announced the Family Night at the Park event will be held on Sept. 29 at 5 p.m.
* Read a letter from the Texas Association of School Business Officials stating WISD business officer Susan Richardson successfully completed her certification courses and is now a Certified Texas School Business Specialist.

Places of Worship
The Gonzales Cannon, Page B2

"This know also, that in the last days perilous times shall come. For men shall be lovers of their own selves, covetous, boasters, proud, blasphemers, disobedient to parents, unthankful, unholy..." (2 Timothy 3:1-2)

Gonzales Family Church (Assembly of God), 320 St. Andrew
Assemblies of God
Church of Christ (Iglesia de Cristo), 201 E. Second St., Nixon
First United Methodist, 410 N. Franklin, Nixon
Flatonia United Methodist, 403 E. North Main, Flatonia
Jesus Holy Ghost Temple, 1906 Hickston, Gonzales
Harris Chapel United Methodist, S. Liberty St., Nixon
First Assembly of God, 509 E. 3rd St., Nixon
Lighthouse Church of Our Lord, 1805 Weimar, Gonzales
Church of Christ, E. 3rd & Texas, Nixon
New Life Assembly of God, corner of Church St. & Jessie Smith St., Gonzales
Community Church of God, 1020 St. Louis, Gonzales
Churches of God
New Life Temple for Jesus Christ, Belmont, corner of Hwy 466 & Hwy 80
Harwood Methodist Church, North 2nd and North Gonzales, Harwood
Baha'i Faith
Baha'i Faith, 621 St. George St., Gonzales
Baptist
Gonzales Memorial Church of God in Christ, 1113 Hastings, Gonzales
Clark Baptist Church, F.M. 794, Gonzales
Hwy. 87, Smiley
New Way Church of God in Christ, 514 St. Andrew, Gonzales
Henson Chapel United Methodist, 1113 St. Andrew, Gonzales
River of Life Christian Fellowship, 207 Steele St., Smiley, 830-587-6500
Two Rivers Bible Church, 1600 Sarah DeWitt Dr., Ste. 210, Gonzales
Monthalia United Methodist, CR 112 off 97
County Baptist Church
Iglesia Bautista Memorial, Hwy 97, Waelder
Shiner Baptist Church, Avenue F and 15th Street, Shiner
Eastside Baptist Church, Seydler Street, Gonzales
Hwy. 87, Smiley
Episcopal
Church of the Messiah (Episcopal), 721 S. Louis, Gonzales, (830) 672-3407
Smiley United Methodist, 1 blk S. of Hwy 87
Inter-Denominational
Faith Family Church, 1812 Cartwheel Dr., Gonzales
Pentecostal
Leesville Baptist Church, E. of Hwy 80 on CR 121
Union Lea Baptist Church, St. Andrew St., Gonzales
Waelder United Methodist, 2 blks from Hwy 90 & 97
Efeso Iglesia Bautista, 403 N. Texas, Nixon
First Baptist Church, 422 St. Paul, Gonzales
First Baptist Church, Hwy 108 N, Smiley
Memorial Heights Baptist Church, 1330 College, Gonzales
100 Capes, Gonzales
Hwy. 97, Bebe
Union Valley Baptist Church, FM 1681 NW of Nixon
La Os del Evangelio Mission Capilla del Pueblo, W. Central at 87, Nixon
Evangelical Faith Temple, Hwy 80 (N. Nixon Ave.), Nixon
Webster Chapel A.M.E., 1027 Church St., Gonzales
First Baptist Church, 406 N. Ave. E, Waelder
Mount Pilgrim Baptist Church
Oak Valley Baptist Church
Old Moulton Baptist Church, 2287 FM 1680, Moulton
Catholic
St. James Catholic Church, 417 N. College, Gonzales
St. John St., Gonzales
Full Gospel
Camp Valley Full Gospel, 7 mi. N of Nixon on Hwy 80
Agape Ministries
Living Waters Church
Non-Denominational
Fellowship Holy Temple of Jesus Christ No. 2, 1515 Dallas, Gonzales
512 St. James, Gonzales
Temple Bethel Pentecostal, 1104 S. Paul, Gonzales
Sacred Heart Catholic Church
St. Joseph Catholic Church, 207 S. Washington, Nixon
Full Gospel Church, 1426 Fisher, Gonzales
605 Saint Joseph St., Gonzales
Life Changing Church of Gonzales, 3.3 miles north on 183, right on CR 235, right on CR 236
Lutheran
First Evangelical Lutheran, 1206 St. Joseph, Gonzales
Bread of Life Ministries, 613 St. Joseph, Gonzales
Greater Palestine Baptist Church, 1121 N. College, Gonzales
Primitive Baptist Church
S of 90-A (sign on Hwy 80)
Greater Rising Star Baptist Church
Providence Missionary Baptist Church, 1020 St. Andrew, Gonzales
St. Patrick Catholic Church in Waelder, 613 Highway 90 East, Waelder
Hwy 87, Smiley
Abiding Word Church, LCMS, 1310 St. Louis
Cowboy Church of Gonzales County, J.B. Wells Showbarn
El Centro Cristiano "Agua Viva" of Waelder, Sun. worship 10:30 a.m., 6 p.m.
Presbyterian
Pilgrim Presbyterian Church, CR 210 off FM 1116
St. Phillip Catholic Church, 3rd Ave. S of Hwy 87, Nixon
Stratton Primitive Baptist, FM 1447, 9 miles east of Cuero
Hwy 80, north of Belmont
SE 2nd St., Waelder
Harwood Baptist Church, north of Post Office
St. James Baptist Church
Saint Paul Baptist Church
Christian
First Christian Church (Disciples of Christ), 712 Crockett, Luling
Methodist
Belmont United Methodist, Hwy. 90-A
Dewville United Methodist, west of FM 1117 on CR 121
Presbyterian Church of Gonzales, 414 St. Louis, Gonzales
Emmanuel Fellowship, 1817 St. Lawrence St., Gonzales
Iglesia Bautista Macedonia, 201 S. Congress, Nixon
Messianic Judaism
Congregation Adat HaDerech, meets on Saturdays and Holy Days, 672-5953
Churches of Christ
Church of Christ, 1323 Seydler St., Gonzales
First United Methodist, 426 St. Paul, Gonzales
Encouraging Word Christian Fellowship, Hwy. 80 in Leesville

Rodrigue Body Shop, P.O. Box 810, 1839 St. Lawrence St., Gonzales, TX 78629. Phone 830-672-6715, fax 830-672-6717, email rbs@gvec.net. Free estimates; all materials hauled.
Family Dentistry of Gonzales, gentle quality care, 606 St. Louis, Gonzales, TX 78629. Office 830-672-8664, fax 830-672-8665.
Kitchen Pride Mushroom Farms, County Road 348, Gonzales, TX. 830-540-4516.
Logan Insurance Agency (home, auto, farm, commercial, bonds), 516 St. Paul, PO Box 100, Gonzales, Texas 78629. Jim Logan, Travis Treasner. (830) 672-6518, fax (830) 672-6368, cell (512) 376-0773.
Saturn Sales & Service, James Miller, 4421 Hwy. 97E, Gonzales.
Ehrig Construction Company, sub-contractor specializing in site work: foundation pads, road work, demolition, stock tanks, brush clearing. David Ehrig 830-832-6063, Bubba Ehrig 830-832-5094; 830-540-4285, 830-540-4422; office 830-437-2873.
Ilene B. Gohmert, Certified Public Accountant. 830-672-5030, 830-672-2483 (fax), 409 St. George St., Gonzales.
Luxury Motors, 113 US Hwy. 90A E, Gonzales, TX 78629. 830-672-7500.
Farmers Insurance Group, "Gets You Back Where You Belong!" Gieser Insurance Agency, 941 St. Joseph, Gonzales, TX 78629. 830-203-5325, toll free (800) 358-5298. Lisa G. Gaspard, Agency Manager, TDI #001113854; Leticia M. Cenotti, Agency Producer, TDI #001243345.
The Heights of Gonzales, 701 North Sarah DeWitt, Gonzales, TX 78629, 830-672-4530.
Community Health Centers of South Central Texas, Inc. "Making a difference one life at a time since 1966." Most insurances accepted; we welcome Medicare and Medicaid. (No one is turned away for inability to pay.) Hours: Mon., Wed., Thurs., Fri. 8 a.m.-5 p.m.; Tues. 8 a.m.-8 p.m.; Sun. 12 p.m.-4 p.m.; closed Sat. 228 St. George Street, P.O.
Box 1890, Gonzales, Texas 78629. 830-672-6865 or 830-672-2065. Clinics at 921 St. Peter St. & 1214 St. Louis. Brandi Vinklarek, Director, (830) 672-2065.
"Train a child in the way he should go: and when he is old he will not depart from it." (Proverbs 22:6) Ph. 830.672.6511.
Buffington Funeral Home, 424 St. Peter St., Gonzales, TX 78629. Wayne Scroggins, Funeral Director. Phone (830) 672-3322, fax (830) 672-9208, email wayne.scroggins@sci-us.com.
Sale every Saturday at 10 a.m., with live webcast. 520 N. Ave. C, P.O. Box 64, Shiner, TX 77984. Phone (361) 594-3352, fax (361) 594-3127, cell 361-258-1303.
P.O. Box 565, Gonzales, TX 78629. Dave Shelton, mobile 830-857-5394; Mike Brzozowski, mobile 830-857-3900. Office 830-672-2845, fax 830-672-6087.
Dry fertilizer custom application & soil testing. Steve Ehrig, P.O. Box 1826, Gonzales, TX 78629, 830-263-1233; Morgan Mills, 830-857-4086. TACLB6030C/M-37285.
Reyna's Taco Hut, 1801 Sarah DeWitt Dr., Gonzales, TX, next to the Courthouse Annex. Open for breakfast, lunch and dinner, Mon.-Sat. 5 a.m.-9 p.m., Sun. 5 a.m.-3 p.m. Authentic Mexican food including caldo and menudo. Home of the "Silverado." 830-672-2551.
Signs for the Times: old/new business/vehicle lettering, magnetics, banners, metal, wood, special events, stickers, etc. Free estimates, 15 yrs. experience. Call for appt.: Steve & Cheryl Turner, 830-857-0270 / 830-522-4723.
Holiday Finance Corporation, 506 St. Paul St., Gonzales, TX 78629. Serving Gonzales & surrounding counties.
Tony's Concrete Finishing & Metal Building Erection: craftsmanship you can finally afford. Family owned with over 20 yrs. experience. House foundations, stained concrete, driveways, sidewalks, dirt work; all your concrete needs. No one beats our price; free estimates; insured. Tony Fitzsimmons, Owner. Cell 830-857-0488, office 830-672-1821, (830) 672-6556.

Workshop participants focus on child advocacy
By KEY GARNER, Cannon Correspondent
The Gonzales Cannon, Page B3

"I believe you" ... praise a child's courage to report sexual abuse ... be supportive and nonjudgmental, it's not their fault ... be alert to situations where an adult and a child are one-on-one in private settings. All of these recommendations and elements of encouragement advocating for children were part of the Stewards of Children interactive workshop produced and published by Darkness to Light. The workshop was presented to almost 100 attendees Tuesday night at the Two Rivers Bible Church in Gonzales, hosted by Norma's House and SHAC (School Health Advisory Council). The facilitator was Carolyn Morrow, a former executive director of Norma's House, who currently heads SHAC.
One in four girls and one in six boys will be sexually molested before they are eighteen years of age. With these epidemic numbers, it can be said that for some adults the problem is a lack of information, for others passive acceptance, and for others a deliberate overlooking of the existence of sexual abuse among children. Thirty to forty percent of children who are sexually abused are abused by family members. Sixty percent are abused by people the family trusts, and more than ninety percent know their abusers. Less than ten percent of abusers are strangers.
The workshop challenged those attending to talk about sexual abuse of children, watch for signs, and then act on them by reporting suspicions.
Dignitaries present were Mary Ann Martinez, victim advocate from the district attorney's office; Paul Watkins, county attorney; Dennis Richter, chief deputy for the sheriff's department; and Doug Mundine, school resource officer from the police department. Also in attendance was Norma's House namesake, Norma Ehrig. Sponsors were T-Rex Therapy Services, Christian Kids Day Care and Preschool, HEB, and Tiger Tote, providing an elaborate buffet and door prizes. All attendees earned continuing education credit.

Busy representing Gonzales County
Kersey, Finch exchange vows

Jennifer Maureen Kersey and John Allan Finch were united in marriage on June 4, 2011 at 4:00 p.m. at Queen of Angels Chapel in Spicewood, Texas. Father Wade Russell of College Station officiated at the double ring ceremony. Jennifer is the daughter of Richard and Patty Kersey of Lake Jackson, and John is the son of Larry and Priscilla Finch. Grandparents of the couple are Paul and Janet Varga of Houston, Leon and Rosemary Netardus and Mrs. Anne Finch of Gonzales.

The altar was flanked by clear cylinder vases of light green and white hydrangeas, pink and white peonies and light pink spray roses. The bride, escorted by her father, wore an ivory lace over light gold Maggie Sottero dress. It was a strapless fitted A-line gown with dipped neckline and corset closure with a gathered tulle skirt. She carried a bouquet of green hypericum berry, pink and white peonies, and hot pink spray roses.

Serving as bridal attendants were matron of honor Clacie Ciaccio, maid of honor Cynthia Saenz, and bridesmaids Sydney Brown, Jessica Swope and Nikki Montanez, all close friends of the bride. They wore latte-colored silk dupioni cocktail-length dresses from the Purely Alfred Angelo collection and carried bouquets of light pink and white peonies, hot pink spray roses and green hypericum berry.

Best men were John Fischer and Bryan Scheu, college friends of the groom. Groomsmen included Matt Thiele, Ryan Mills and Jared Moore, all close friends of the groom. Ushers for the wedding were Patrick Kersey, Chris Kersey and Jonathan Kersey, brothers of the bride, and Jacob Flynt, cousin of the groom. Handing out programs before the ceremony was Emily Flynt, cousin of the groom.

The wedding reception was held at Spicewood Vineyards immediately following the wedding ceremony. Guests were served appetizers which included antipasto and assorted cheese with strawberry lemonade, tea, wine or beer. When the wedding party arrived, guests were served family style, seated at round tables covered with sage green tablecloths. The menu consisted of slow smoked brisket, barbecued chicken, Elgin sausage, brown butter green beans, green chile macaroni and cheese and buttermilk biscuits with honey butter. For dessert guests were served wedding cake and Blue Bell ice cream with hot fudge, sprinkles, nuts, whipped cream and cherries.

The bride's cake was a four-tier round strawberries and cream cake with white buttercream icing and a light pink fondant lace band on each tier. It was also decorated with light pink dusted sugar garden roses on each tier. The groom's cake was a chocolate confection shaped and decorated like a yellow catfish, depicting the groom's favorite hobby.

The bride and groom danced to "Always and Forever" by Cory Morrow. DJ Floyd Banks from Complete Music and Video provided music for the evening. After toasts made by the best men and the matron and maid of honor, the wedding party and guests spent the remainder of the evening dancing and visiting. A highlight of the evening was the Aggie War Hymn with all Aggies on the dance floor. Guests danced to "Goodnight Irene" at the conclusion of the evening. Members of the house party included Lauren Kersey, Robin Conner, Katy Sedlar, Sarah Finch and Emily Flynt.

The couple spent their honeymoon in Oahu, Hawaii and is now making their home in College Station. (Photo: Mr. and Mrs. John Allan Finch)

Advocates for children: The Stewards of Children workshop Tuesday focused on advocating for child victims of sexual abuse. (Photo by Key Garner)

The Gonzales County Court has been continuing to participate in numerous parades throughout the area. On Sunday, July 31, they won 2nd place at the Moulton Town & Country Parade. They then traveled to the Annual Schulenburg Festival on Sunday, August 7, winning 3rd place in the float division, and then participated in the 46th Annual Pleasanton Cowboy Homecoming Celebration on Saturday, August 20. The court consists of Miss Gonzales County Katie Jo Staton, Junior Miss Abby Garcia, Little Miss Madison Pirkle, and Little Mister Craig Tuch. (Courtesy Photos)

Woman's Study Club begins its 86th year

The Woman's Study Club of Gonzales began its 86th year with a noon salad luncheon and meeting in the home of Mrs. Vicki Frenzel. After lunch, President Jean Ollom presided over the business meeting and updated members on programs for the coming club year. The library and treasurer's reports were given and new members were proposed. The highlight of the luncheon was each member sharing her summer adventures, interlaced with much humor. Many visited children and grandchildren and attended family weddings. Travels included nearby and faraway cities of Texas; the states of South Carolina, New Mexico, Colorado, New York and Alaska; and destinations abroad such as the Canadian Rockies and Argentina. The Study Club's next meeting will be held on October 12th at the Pilgrim Presbyterian Church with Mrs. Patti Nance as hostess.

Let Us Help You Celebrate! Crystal Neitsch & Michael Ehrig, October 22, 2011. Sissy Ackman & Tom Johnson, September 16, 2011. Grazie, Yellow Box, The Hearty Gourmet, jewelry & more. 830-672-GIFT (4438). 813 St. Joseph St., Gonzales, TX. Hours: Wed.-Sat. 10-5.

Hair cuts, hilites, facials, massages for Come & Take It: HAIR IT IS & CO., 830-672-3904, 1402 St. Louis, Gonzales, TX.

FREE Classifieds. The Gonzales Cannon, Thursday, September 22, 2011, Page B4.

To place your ad: CALL The Gonzales Cannon at 830-672-7100 (fax 830-672-7111), weekdays from 8 a.m. to 5 p.m. VISIT us from 8 a.m. to 5 p.m. weekdays at 618 Saint Paul Street, Gonzales. MAIL to The Gonzales Cannon, Attention: Classifieds, P.O. Box E, Gonzales, TX 78629. Free classified ads. COMMERCIAL ACCOUNTS (liner and display ads): call 830-672-7100. DEADLINES: classified line & display ads for Friday due noon Tuesday. HOW MUCH IS AN AD?
Non-Commercial Rates: FREE. *Merchandise less than $20,000. *One free ad per classification. Business-related classified ads (excluding non-profit orgs.): 25 cents per word, 35 cents per word in BOLD; minimum $5 charge. *All Help Wanted line ads will be charged, effective now. AD & PHOTO PACKAGE*: 1-week ad with photo, $20.00. *Excludes rentals and real estate; some restrictions may apply; please call for details. PAYMENT OPTIONS: cash, check or credit cards. BILLING INFORMATION: for information about your account, or about what's eligible, call 830-672-7100.

LOST & FOUND
Found: Wedding ring left at WalMart. Call (830) 445-6597 and describe.
--------------------------
Lost: 5 donkeys. 1 black, 4 white. I-10, 304, Hensling Lane area. 830-437-2952.

NOTICES
Gonzales Learning Center seeking donations of caps and gowns. Call 830-672-8291 for information.
--------------------------
Job Corps is currently enrolling students aged 16-24 in over 20 vocational trades at no cost! Will help students get a driver's license, GED or high school diploma, and college training if qualified. For more info call 512-665-7327.
--------------------------
The Heights of Gonzales Activity Department is looking for a fridge/freezer to hold supplies for event refreshments. If you would like to donate or know of one that is reasonably priced, contact Gwen Koncaba, 830-672-4530.

HELP WANTED
Immediate opening. Food Safety Compliance. Must be computer literate & have HACCP & food safety knowledge. Bilingual preferred. Benefits include: vacation, sick leave, hosp. ins., dental, vision, 401k, retirement. Apply in person at: Cal-Maine Foods, Inc., 748 CR 422, Waelder, Texas 78959, or fax or email resume with references to: FAX (830) 540-3996; EMAIL maguero@cmfoods.com.
--------------------------
Ranch hand, mostly cattle, but a variety of other work. Must have own transportation. 830-437-5772.
--------------------------
Part-time position available for Weekend RN Supervisor. Long Term Care experience required. Please apply in person at The Heights of Gonzales Nursing and Rehabilitation Center, 701 N. Sarah DeWitt Drive, Gonzales, TX.
--------------------------
Part-time position available for MDS Coordinator. Must be a Licensed Vocational Nurse with knowledge of MDS in Long Term Care. Please apply in person at The Heights of Gonzales Nursing and Rehabilitation Center, 701 N. Sarah DeWitt Drive, Gonzales, TX.
--------------------------
Full-time position with benefits available for Housekeeping/Building Supervisor. Management experience required. Please apply in person at The Heights of Gonzales Nursing and Rehabilitation Center, 701 N. Sarah DeWitt Drive, Gonzales, TX.
--------------------------
CDL DRIVERS WANTED. J.M. Oilfield Service, a family-oriented company, is seeking professional & reliable Class A CDL employees. Requirements: 2 years tanker experience, and must be willing to get HazMat endorsement ASAP. Call 830-672-8000.
--------------------------
AVON Representatives Wanted! Great earning opportunities! Buy or sell! Call 830-672-2271, Independent Sales Rep.
--------------------------
The Vaz Clinic, 1103 N. Sarah DeWitt, 672-2424, needs a Certified Medical Assistant, Spanish preferred. Apply within or fax resume to 866-622-2180.

GARAGE SALES
228 Ponton St. Saturday, 7:30-2:30. Furniture, adult & children's clothing, misc. items.
--------------------------
3 Family Garage Sale. 140 W. Wallace, Saturday, 8-1.
--------------------------
Yard Sale: Sat., Sept. 24. 713 Wells St. 7:00 a.m. to 11:00 a.m. Lots of everything.
Call 672-7100 to place your garage sale ads free!

MISC. FOR SALE
Power Box asphalt paving machine and roller. Good condition. $9,900. Call after 5 p.m. 361-594-3668.
--------------------------

LEGAL NOTICES

TEXAS COMMISSION ON ENVIRONMENTAL QUALITY
NOTICE OF APPLICATION FOR AN AIR QUALITY STANDARD PERMIT FOR PERMANENT ROCK AND CONCRETE CRUSHERS
PROPOSED AIR QUALITY REGISTRATION NUMBER 98074

APPLICATION. O&G Rocks, 709 North Gonzales Street, Cuero, Texas 77954-2840 has applied to the Texas Commission on Environmental Quality (TCEQ) for an Air Quality Standard Permit, Registration Number 98074, which would authorize construction of a permanent rock and concrete crusher. The facility is proposed to be located near Smiley, Gonzales County, Texas 78159. The following driving directions are provided: from the intersection of Highway 87 and Farm-to-Market Road 1116 south of Smiley, travel 6.3 miles north on Farm-to-Market Road 1116, then take a right and travel 0.5 miles east on County Road 301; the gate entrance to the site is located on the right. This application was submitted to the TCEQ on August 15, 2011. The executive director has determined the application was technically complete on August 24, 2011.

PUBLIC COMMENT. Written public comments about this application may be submitted at any time during the public comment period. You may submit public comments either in writing to the Texas Commission on Environmental Quality, Office of the Chief Clerk, MC-105, P.O. Box 13087, Austin, Texas 78711-3087, or electronically at www.tceq.texas.gov/about/comments.html. If you choose to communicate with the TCEQ electronically, please be aware that your email address, like your physical mailing address, will become part of the agency's public record. The deadline to submit public comments is 30 days after newspaper notice is published.

RESPONSE TO COMMENTS. A written response to all relevant comments will be prepared by the executive director after the comment period closes.
The response, along with the executive director's decision on the application, will be mailed to everyone who submitted public comments and requested to be added to the mailing list. The response to comments will be posted in the permit file for viewing. The executive director shall approve or deny the application not later than 30 days after the end of the public comment period, considering all comments received within the comment period, and base this decision on whether the application meets the requirements of the standard permit.

CENTRAL/REGIONAL OFFICE. The application will be available for viewing and copying at the TCEQ Central Office located at 12100 Park 35 Circle, Austin, Texas, and the TCEQ Corpus Christi Regional Office located at NRC Bldg Ste 1200, 6300 Ocean Dr, Unit 5839, Corpus Christi, Texas 78412-5839, during the hours of 8:00 a.m. to 5:00 p.m., Monday through Friday, beginning the first day of publication of this notice.

INFORMATION. For more information about this permit application or the permitting process, please call the TCEQ Office of Public Assistance, toll free at 1-800-687-4040. Si desea información en Español, puede llamar al 1-800-687-4040. General information regarding the TCEQ can be found at our Web site. Further information may also be obtained from O&G Rocks, 709 North Gonzales Street, Cuero, Texas 77954-2840, or by calling Mr. Joe Adams at (361) 275-3424.

Notice Issuance Date: August 25, 2011

LEGAL NOTICES

PUBLIC ADVERTISEMENT FOR ENGINEERING SERVICES
The City of Smiley is soliciting proposals from qualified engineers/engineering firms (registered to practice in the State of Texas) to prepare all preliminary and final design plans and specifications, and to conduct all necessary interim and final inspections. Proposals must be received by the City no later than 5:00 p.m. on October 3, 2011 to be considered.
The City of Smiley reserves the right to negotiate with any and all engineers/engineering firms that submit proposals, as per the Texas Professional Services Procurement Act and the Uniform Grant and Contract Management Standards. The City of Smiley is an Affirmative Action/Equal Opportunity Employer.

PUBLIC ADVERTISEMENT FOR MANAGEMENT SERVICES
The City of Smiley is soliciting proposals from a qualified management consultant/firm to carry out several aspects of overall program management-0189. Proposals must be received by the City no later than 5:00 p.m. on October 3, 2011 to be considered. The City of Smiley reserves the right to negotiate with any and all management firms that submit proposals, per the Texas Professional Services Procurement Act and the Uniform Grant and Contract Management Standards. The City of Smiley is an Affirmative Action/Equal Opportunity Employer.

MISC. FOR SALE
Small computer desk, wood, $40.00 obo. Large playpen, $25.00. Both in great condition. 830-203-9159.
--------------------------
For Sale: pickup bed utility trailer, $125. (830) 377-8814.
--------------------------
ATTENTION TRUCKERS. Cobra 25, NW Ltd, Classic CB, Igloo Ref. Cooler, Wave Box, Portable Microwave. $50.00 each. 361-596-4502 or 361-401-0556.
--------------------------
6 oak restaurant booths w/copper inlaid tops. Large round folding tables w/6 armchairs. Contact Tommy, 830-351-1263.
--------------------------
Excellent condition. 20" push mower, weed eater, $125/both. 361-741-2604, Yoakum.
--------------------------
Maytag washing machine. $150.00. Call 361-208-3565.
--------------------------
FOR SALE: 35mm Minolta SLR film camera, 3 lenses, strobe, filters, tripod, case. $75.00. Call 830-822-6857.
--------------------------
For sale: Float tube for fishing, like new. $50 obo. Also electric trolling motor, make offer. 857-5720.
--------------------------
Baby bed for sale. $60. Call 254-931-5712.
--------------------------
Electric hospital bed, $300. 12-function exercise bicycle, $65. Prices negotiable. 830-582-1120, Nixon.
--------------------------
Fresh shelled peas. Cream, purple hulled & black eyed. Also unshelled peas. Sold by the bushel. 2001 Water St., Gonzales.
--------------------------
Proform treadmill, Model 380CS. Programmable, includes built-in fan, speaker for iPod radio. Like-new condition. $350/obo. Contact Liz, 830-263-2103.
--------------------------
Radio control airplane parts/kits. If I don't have it, I can get it. Lockhart, TX. Call 979-393-8642.
--------------------------
For Sale: Calf table/chute with self-catch gate. $950. 830-437-5747.
--------------------------
Whirlpool heavy duty gas dryer. Good condition. $75. Can be seen at 511 Church St. 830-857-4993.
--------------------------
Fresh produce. Watermelons, cantaloupes, tomatoes, squash, cucumbers, onions, peppers & peas. 2001 Water Street, Gonzales. 512-227-6950.
--------------------------
For Sale: 3 pt. chipper/shredder, never been used, $600. Also windmill seeder, $250. 830-540-4971.
--------------------------
For Sale: Thomas Playmate with Color Glo chord organ. Good condition. All instruction books included. Call Sue, 672-2192.
--------------------------
Utility trailer. All wired for lights. Current tag. $575. 512-917-4078.
--------------------------
Hats from the makers of Koozie-Norwood. 48 @ $192.08 plus transportation charges. 4-color heat transfer. Colors: red, yellow, pink, green, bone, khaki, orange, black, navy and royal. That's only $4.00 a hat. DBK Advertising. 830-437-5142 or 830-857-0876.
--------------------------
Prayer shawl, 38x72, handmade, $75.00. Animal or bird cage, utility wire, 14x18, $60.00. 512-917-4078.
--------------------------
FREE tri-hull fiberglass boat, 16 ft. Needs work & no leaks. Call for information. 830-540-3574.
--------------------------
Fullsize mattress & box springs, $100. Queensize mattress and box springs, $175. Both in excellent condition & sanitized. 830-672-3728.
--------------------------
2 young ladies' black jackets, size 14. One is leather. 672-8034.
--------------------------
Old Reader's Digests for sale. Call 830-672-3362.
--------------------------
Autographed picture of Muhammad Ali/Cassius Clay (60's), Certificate of Authenticity (11x16), $1,400. Yellow Lab stud service. (806) 577-3962.
--------------------------
Beautiful handmade "orange poinsettia" pottery bowl. Large. Great gift. $35. Call (512) 917-4078.
--------------------------
Pecans for sale. This year's crop. Shelled, halved. $10/1 lb. bag. 512-417-3032.
--------------------------
Culligan water softener and rust remover, old cars, elect. water heater, 2001 Fiber truck bed w/key, hay balers, Bar-B-Q pipe. 830-437-5759.
--------------------------
2 pipe BBQ pits for sale. Ozarka water cooler with bottle. Call 361-208-3565.
--------------------------
128 used letter-size hanging file folders, most have colored tabs, excellent condition. $30 cash for all or $7 per 25. 830-672-1106.
--------------------------
Computer, printer & desk, all $400. Stamina #4755 exercise machine. Like-new condition. $100/obo. 672-2267.
--------------------------
4 tires, LT245/75R17, in good condition. $100 obo. 830-672-2075.
--------------------------
Metal bench, $150; organ, $50; school desk & books of all kinds. Just out of Moulton on 532. Call 361-596-4403.
--------------------------
Tanning bed for sale. 1996, 24SF. $300. Children's wardrobe, good condition. $300. 672-7127.
--------------------------
Beautiful vintage watercolor painting, landscape & water. 12x19. $375. Antique, very ornate picture frame. 16x20, $295. Call 512-917-4078.
--------------------------
Dalhart Windberg original oil painting, landscape, $3,800. (512) 917-4078.
--------------------------
For Sale: Picnic tables built with treated 2x6 lumber with bolts and screws. No nails. 4, 6, and 8 foot sizes available. For more details call 830-540-4776 or 830-857-3273. Delivery available.
--------------------------
Deer hunters: For sale, feeder and feeder parts, cameras, etc. 830-857-5720.
--------------------------
For Sale: Used 2x4's. Call 263-1181 for information.
--------------------------
Radio-controlled "R/L" model airplane kits. Kits are complete. Engine and radio sold separately. Kits range from $5.00 to $15.00. Call for details, 512-431-0823.
--------------------------
Like new, 26" men's 21-spd., $50 obo. Call Theresa at 830-203-5212.
--------------------------
2000 Buick Century, large capacity Estate clothes dryer, kingsize mattress & standard box spring. 857-8090.
--------------------------
2 teenagers' formals/party dresses. 1 white w/spaghetti straps, with rhinestones. 1 beige/golden color, spaghetti straps, gold rhinestones. Call 672-8034 or come by 1822 St. Louis.
--------------------------
Heavy vinyl tarps. 15'x50'. UV proof, tuff boogers. $50 each. 830-660-2813.

LEGAL NOTICES

NOTICE OF A PUBLIC HEARING
NOTICE IS HEREBY GIVEN that the Zoning Board of Adjustments of the City of Gonzales will hold a Public Hearing on October 3, 2011 at 5:30 p.m. in City Council chambers at City Hall to consider the requests at the addresses below:

Address – Property Owner/Applicant:
1606 St. Michael – Herb Karnau – set back variance for new house
1313 Ewing St. – Mary Barnes/Darryl K. Russell – horse permit app.
1313 Ewing St. – Mary Barnes/Billy Jones – horse permit app.
616 Seydler St. – Barry Miller/Alfred Brown – horse permit app.
2020 Zint St. – John DuBose/Bertha Erskin-Eddie Hunt – horse permit app.
2027 Church St. – Ryan Wilkerson – horse permit app.
1400 Church St. – Armando Izaguirre – horse permit app.
1204 St. Louis St. – L. Hernandez/Rogelio Perez – end 6 month probation period, Recycling Center

All interested parties are encouraged to attend. Please visit the City website or City Hall to view the Agenda.

REQUEST FOR SEALED PROPOSALS
Gonzales County Appraisal District is soliciting proposals for the lease of mass appraisal software and hardware for the years 2012 and 2013. Specifications are available at the Gonzales County Appraisal District, 928 St. Paul Street, Gonzales, Texas 78629. For more information, contact John Liford at (830) 672-2879. All proposals must be sealed, addressed to "Gonzales County Appraisal District, Hardware and Software Proposal", signed by an authorized representative of the vendor and must be received prior to, or on, the date and time specified. Proposals may be hand delivered to Gonzales County Appraisal District, 928 St. Paul Street, Gonzales, Texas or mailed to P.O. Box 867, Gonzales, Texas 78629. Late proposals will not be accepted. The deadline for submitting proposals is 5:00 P.M., Thursday, October 20, 2011. Proposals will be opened at 5:30 P.M. on Thursday, October 20, 2011 at the appraisal district office located at 928 St. Paul Street, Gonzales, Texas. The contract may be awarded on Thursday, October 20, 2011 during the regular called meeting of the Board of Directors, which begins at 5:30 p.m. The district reserves the right to accept or reject any and all proposals. The submitted proposals may be evaluated based on the following factors: price, cost and time of conversion, lost time due to training, ease of operation, and the responsibility and reputation of the vendor. The contract may be awarded to the lowest responsible bidder or to the bidder who provides goods or services at the best value for the district.

GONZALES ECONOMIC DEVELOPMENT CORPORATION, GONZALES, TEXAS
IMPROVEMENTS TO GADC INDUSTRIAL PARK SUBDIVISION
ADVERTISEMENT FOR BIDS

Separate sealed bids addressed to the Gonzales Economic Development Corporation (GEDC) (OWNER), clearly labeled IMPROVEMENTS TO GADC INDUSTRIAL PARK SUBDIVISION, will be received at Gonzales City Hall, 820 St. Joseph Street, Gonzales, Texas 78629, until 2:00 o'clock P.M. on October 6, 2011, and then publicly opened and read aloud immediately. This project entails the construction of approximately 500 linear feet of street with curb & gutter, installation and removal of temporary erosion controls, site re-vegetation, and pavement repairs. The Contract Documents, consisting of Advertisement for Bids, Information for Bidders, Bid Proposal, Bid Bond, Agreement, Performance and Payment Bonds, General Conditions, Special Conditions, Notice of Award, Notice to Proceed, Technical Specifications and Plans, together with any Addenda, are available at Doucet & Associates, Inc. (830-672-1205), 427 St. George Street, Suite 304, Gonzales, Texas 78629, or at Doucet & Associates, Inc. (512-583-2600), 7401 B Hwy 71 West, Suite 160, Austin, TX 78735. Plans, Specifications, and Contract Documents may be examined and purchased for a non-refundable fee of $30.00. Each bid shall be accompanied by a cashier's check or certified check upon a national or state bank in an amount not less than five percent (5%) of the total actual bid price, payable without recourse to the Gonzales Economic Development Corporation (GEDC), for a period not to exceed thirty (30) days from the date of the opening of Bids, for the purpose of reviewing the Bids and investigating the qualifications of Bidders prior to awarding of the Contract. There will be no pre-bid conference. Contractors shall make their own individual site inspections and/or investigations to make themselves aware of existing conditions/issues. Failure to make adequate observations and/or ask questions shall not be grounds for requesting additional work or services. Questions shall be forwarded to J. Keith Schauer, P.E., 427 St. George Street, Suite 304, Gonzales, Texas 78629, (830) 672-1205, by 5:00 o'clock September 30, 2011.
HELP WANTED
Accolade Homecare, a regional faith-based homecare provider, is looking for knowledgeable, energetic and compassionate Registered Nurses who delight in serving others. Our office is located in Yoakum and we are seeking full-time and PRN Registered Nurses to manage care for our patients in the Yoakum and Victoria areas. Accolade Homecare offers competitive salaries, mileage reimbursement, generous PTO benefits, excellent health and life insurance options, a 401k program, and an excellent work environment. To learn more about this opportunity, please contact Dot Heller at 361-401-1209 or you may email your resume to dorothy.heller@fms-regional.com.
--------------------------
Production Employees, J Bar B Foods. Needed for J Bar B Foods at our Weimar and Waelder facilities, to perform a variety of job duties ranging from: operating mixing, stuffing and cooking machinery; placing and removing product from racks; washing items used in the production of our products; and inspecting and packing the finished products. Qualified candidate will have the ability to work in a COLD environment and follow instructions and directions. The ability to interact cordially with our employees to accomplish common tasks is essential to this position. Excellent benefits offered. MUST be available to work overtime and weekends. Please send resume and salary requirements to kdeagen@jbfoods.com. If interested, please apply in person at J Bar B Foods, 1078 Hwy 90 W, Weimar, TX or at 100 Main Street, Waelder, TX.

HAY FOR SALE
Hay for sale. 120 large round bales of coastal. Heavily fertilized. $70.00. 830-582-1057.
--------------------------
Heavily fertilized, horse quality, coastal square & round bales. Bebe, Tx. 210-326-6053.
HELP WANTED
Kitchen Pride Mushroom Farms Inc. is now hiring full-time for Irrigation, Production, Packing, Harvesting, Maintenance and Night Sanitation. We offer competitive wages along with 401K, vacation and a life insurance plan. Apply in person at Kitchen Pride Mushroom Farms Inc., County Road 348, Gonzales, Texas. 830-540-4516. An EOE Employer.
--------------------------
CDL DRIVERS NEEDED. Bobtail truck driver, day & night positions available. Requirements: Class A CDL with HazMat/Tanker endorsements; must be at least 25 years of age. Insurance, 401K and vacation included. Applications available at Schmidt & Sons, Inc., 2510 Church St., Gonzales, Texas 78629. (830) 672-2018, James @ ext. 107.

Call 672-7100 to subscribe to The Gonzales Cannon!

FARM EQPMT.
Dozer BD26, Mitsubishi, 40hp, good condition. Sell $9,800 or trade for larger. Call after 5 p.m. 361-594-3668.
--------------------------
For Sale: 4-bale hay hauler. $1,000. (830) 437-2826.
--------------------------
For Sale: Case 970 tractor, new rear tires. $5,000. (830) 377-8814.
--------------------------
John Deere 350 C Dozer, 90% condition overall, and 1988 Wrangler, new motor, Sahara special. Make offer on Jeep and Tractor. Call 857-1781.
--------------------------
$150 (512) 917-4078.
--------------------------
2 wheel trailer. Call Robert at 830-203-0540.

AUTO
299, Box 577.
--------------------------
Simply the best deal on new Chevrolets and GMCs and over 100 used vehicles with financing to fit most credit situations. Grafe Chevrolet GMC, Hallettsville, TX. 800-798-3225 or 361-798-3.
--------------------------
2000 F-250 Powerstroke Ford diesel truck, hunter green, tow ball, bedliner, CLEAN, 182K miles. Power windows, locks. $6,500 cash. (512) 917-4078.

HOMES FOR RENT
2BR/2BA house for rent, w/covered patio, w/electricity. Lots of trees, quiet. No pets, no smoking.
$650/mo + dep., first and last months. Appliances available. Luling area. 210-386-1399.
--------------------------
Home in Seguin for rent. Two bedroom, one bath. Completely updated with all new appliances. $750.00 per month and $750.00 deposit. Call Debbie at 830-445-9583 for details.
--------------------------
House in country for rent. 3/2, nice yard. 361-594-3233 or 830-857-4364.

RECREATION
miles, tires excellent, new battery & new rear tire. $5,200.00 FIRM. Call 830-560-0238.
--------------------------
2 80cc Kawasaki 4-wheelers for sale. $900/each. Call 830-534-4996.
--------------------------
Enduro 55 lb. thrust Minn Kota, used 1 hour. $150. 916 Qualls St., Gonzales.
--------------------------
Boat fenders and life vests. $5 to $10 each. 916 Qualls St., Gonzales.
--------------------------
For Sale: 2007 Honda Shadow, VT 750 C2, 3,902 miles. Like-new condition. $3,000.00. Call after 5:00 p.m. M-F. 830-540-3555.
--------------------------
FOR RENT: 2 RV parking sites, shade trees, all hookups. 5 miles east of Gonzales. $350/mo. Call 263-0292.
--------------------------
5 RV spots for rent. $350/mo. Electric, sewer hookups, water all included in price. Off 90A and Kelly Loop. For information call 830-857-3112.
--------------------------
2003 Dyna SuperGlide Harley, 100 yr. Anniversary Gold Key edition; windshield, backrest, forward controls. Great condition. $7,500. 830-875-2278.
--------------------------
For sale or trade: 2006 Yamaha V-Star 1100 Midnight Custom motorcycle w/helmet & deluxe motorcycle cover. Purchased new July 2007; currently has only 987 miles. Pristine condition, garage kept & mature owner; must see to appreciate. $5,400 or trade for good-condition jon boat, jet ski, or pontoon boat. I can email photos. texashorns@stx.rr.com. 830-672-6033.
--------------------------
Having fun with piano lessons with Shelia Wright, 1622 N. College St. Youth and adults, flexible schedule. (830) 672-2719.

PETS
Adorable longhair Chihuahua puppy, last one, is looking for a new home. Male, 9 wks, purebred, healthy.
--------------------------
Free coonhound mix pups. Two spayed females, 1st shots, wormed, 6 months old. Rescued after abandoned on dirt road. Smart, healthy, gentle, already hunting together. Get along with other dogs. 830-540-4591.
--------------------------
For Sale: Dog carrying cage. Asking $40.00. Call 361-208-3565.
--------------------------
AKC German Shorthair Pointer puppies for sale. Great hunters & family companions. Male, $200; female, $250. 830-203-0470.
--------------------------
Pups for sale. Great Pyrenees (1/8 Anatolian). Call Sammie Gibson at (830) 203-8666.
--------------------------
Professional pet grooming with a Pawsitive experience. Call Stacy Garcia @ 830-540-3344 or 972-464-6312. I do difficult dogs; also Saturdays with appointments.
--------------------------
Rhodesian Ridgeback and Lab mix puppies. With ridges, $50. They are blond, brown and tan. Without ridges, $25. Will be big dogs, around 75-100 lbs. Call Leia Dalton at 830-263-2570.
--------------------------
AKC Bichon puppies. Shots and wormed. Females, $500; males, $450. 830-540-4368; 830-203-8511, cell.

FURNITURE
Bar stools: 2 24" dark w/rattan cane, swivel seats, nice, $35.00 each; 2 24" V-finish ladder back w/woven seats, $15.00 each. 830-263-1702.
--------------------------
Beautiful 6-month-old dark brown all-leather sofa & loveseat, 4 recliners built in. Very comfortable. Need to sell; too large for room. Store will not take back. They are custom made. Paid $4,000, will take $3,000 for them. Call 672-3613.
--------------------------
Cargo-style sofa. $100.00. Call 361-772-5859.
--------------------------
Custom designed Western motif 3-panel decorative screen, 54"x78 1/2", horses, brands & leather look. $395. 512-917-4078.
--------------------------
For sale: antique set of twin beds, antique wardrobe, table with chairs, sofa and two matching chairs. 830-672-7347.
--------------------------
For sale: 3-piece antique loveseat, lamps new and used, mobile chair with batteries. 1827 St. Louis. 830-672-8034.

HOME SERVICES
Little Miss Dawn's Cleaning Services. Residential, RV, janitorial services, carpet cleaning, window cleaning, floor maintenance, laundry & ironing. At reasonable rates. Licensed & bonded. (512) 508-6221.
--------------------------
I want to share my gift of making a room come alive. I can see the room and envision what I can do: clean picture frames, knickknacks, move furniture around, if that's what it takes to make my vision come alive. Guaranteed you will be enchanted. Just give me a try; give Laura's Gift a call. 830-203-5180. Free estimates on site.
--------------------------
You Vacation, I'll..
--------------------------
Private caregiver. 20+ years experience. Hospice certified. Looking to do private duty, cook, clean, drive. 361-772-2011.
--------------------------
Ironing done in my home; can pick up & deliver. References if needed. Call Louise, (830) 582-1120.
--------------------------
Will clean your house. I'm dependable and have references. Call Mary at 830-672-4691.
--------------------------
All-around handyman available. I also build sheds, 16x8 tool shed. Call 830-857-1959.
--------------------------
Building demolition: houses, barns, etc. 830-263-0663 or 830-203-0540.

LAWN & GARDEN
Need help with lawn or pool? Please call Gene Kridler at 830-857-1576.
--------------------------
Lawn care & shredding. Call for free estimates. 830-203-9385.
--------------------------
Lawn mowing service, residential & commercial. Liability ins., free estimates and low cost. No job too large or too small. 830-263-4181.
--------------------------
Will mow yards, reasonable rates. Call for free estimate, 830-857-5147.
--------------------------
Excellent condition. Call 361-218-1880.
-------------------------2004 Fleetwood RV Pecos pop-up. Like new, only pulled from dealer. $4,000. Both units located near Old Moulton. Call 857-0734 or 361-596-7317. -------------------------1990 25ft Dutchman travel trailer for sale. Fifth wheel hitch, queen size bed and couch, rear bathroom with closet, gas stove and microwave, new tires. Gonzales area, $4,000. 830857-4750. -------------------------1976 Ford Eldorado Motorhome. V-8, super clean, good motor & A/C. New refrigerator. $3,700/ obo. 830-437-5659 or 857-6565. -------------------------24 ft. 2006 bought in 2007. Zeppelin Travel Trailer w/ slide out; Lg. corner shower, qn. bed, m/w, stove, refrigerator, sat./cable prep, tires 2-yrsold. $9,800; located near Gonzales. Call 936-203-4378 or 936-594-9809. -------------------------FOR SALE: 25 ft. 5th wheel travel trailer with 5th wheel hitch. Good condition. Microwave, stove, refrigerator, sleeper couch, queen bed. Asking $4,000. Call 830437-2359. -------------------------1996 Pace Arrow. Ready to travel. Good condition. Runs well. 830-6603883. -------------------------2009 38’ Landmark. 3 slide-outs. Like new. King size bed. Great Buy. $39,900. 830-437-5211. REAL ESTATE REAL ESTATE BREITSCHOPF COOPER REALTY Duplex, + 2 M/H set up, Moulton..... $56,000 Ideal family home Rivercrest, Sold 3BR/2BA...$130,000..Reduced......$115,000 1602 Water St.-commercial/rental..$150,000 2342 FM 108, 3 bd.,2 story home...$145,000 4 acs with extra nice redonehome....$155,000 70 acs., wooded, hills, game, tanks ........................................................$420,000 153 acs., FM 2091...........................$795,000 8.7 acs., city limits..........................$120,000 58 acs., trees, potential, edge of town........... ......................................................$12,000/Ac. 4+ Acres, city ..................................$125,000 6 Acres, 183 N., city........................ 
$195,000 Highway 183 N: 1.9 acs., across from new motel (Sale Pending).................................................$65,000 1.4 Acres - US 183 S., 3BR/2BA, MH, office....................................................$150,000 Lot - Live Oak....................................$8,000 Serving Gonzales and Central Texas — Homes, Land, Commercial. You can reach our staff by calling: Shirley Breitschopf, 830-857-4142; Carol Hardcastle, 830-857-3517; Lynnette Cooper, lynnette@gonzalesproperties.com. CHILD CARE Willing to do babysitting at my house. 8-5, M-F. 511 Church St., 830-857-4993. RECREATION Fire Fox Go Cart, 1 seater, very good condition. $375. Call after 5 p.m. 361-594-3668. -------------------------2000 Wellcraft, 14 ft. flatbottom, 8 hp Johnson, trailer. Great shape. Ready to fish. $1,500. 361-594-8247. -------------------------For Sale: Motorcycle trailer, $100. (830) 377-8814. -------------------------2008 Honda Fourtrax with only 250 miles. $3,500 o.b.o. 830-857-5236. -------------------------Harley Sportster, 883 Custom, 2005 model. Hwy. guard bars, detachable windshield, saddle bags, windshield bag. Yellow custom paint, garage kept, excellent condition, never laid down. 9K. RV’s FOR SALE GREAT DEAL! 1997 Kountry Star 34 ft., 5th Wheel. 2 slideouts, upgraded kitchen, ducted A/H, 11 storage compartments, ceiling fans. NADA.com/RV appraised RV at $15,900. Asking $10,000. Great home for oilfield. Located in Rockport, TX. 361-645-1009. -------------------------2004 Wildcat 5th Wheel RV. 28 ft., equipped to sleep 5, w/lrg. slide containing sofa & dinette. Lots of storage. Adapted to pull as gooseneck; to be moved. Reduced, $18,000/obo. Call 830-445-9889. ------------------------- MOBILE HOMES HOMES FOR RENT For Rent: 3/2 house in town. $775/mo, $400/deposit. 830-832-3163. -------------------------2BR/1BA home in Shiner. Contact 361-594-3201 or leave message. -------------------------3BR/2BA home for rent on 318 DeWitt St. Central Air. Big back yard. $850/mo., $500/dep.
Call 830-445-9294. APARTMENTS Efficiency & 1 Bedroom Apartments for the Elderly, 62 or older, with 10% for the Mobility Impaired. Phone: 830-672-2522 or Fax: 830-672-4330. Country Village Square Apartments, (830) 672-2877. Tuesday-Friday, 8 a.m. - 5 p.m. 1800 Waelder Road, Gonzales. REAL ESTATE EXCELLENT Value. Great for Deer Lease, Camping, Travel, Or ??? 2006 Totally Refurbished 28 ft. B-Pull Travel Trailers. Starting at $5,950. Call 979-743-1514. View at. com. Specializing in locating land, homes, and rentals for the oil/gas industry. “Expert & fast construction of office/warehouse/shop.” vGONZALES New home under construction, complete by 9/30/11. Home has 3 bed/2 baths, metal roof, double pane windows, PEX plumbing system, HUGE monster size lot with large trees, great location, 711 St. Francis, Gonzales..........................................$159,500 vTHOMPSONVILLE 2BR/1BA home on 30 ac. Recent new metal roof, remodeled and updated. On CR 240 in Thompsonville (UNDER CONTRACT)...........................................................................................$199,500 vWAELDER 97.44 acres, 4BR ranch house, great house, oil/gas income, Ranching/Investment............................................$750,000 vGONZALES 28 acres, 2 story, 3BR, 2 bath custom built home..............................................................................................$375,000 vTHOMPSONVILLE 10 ac. fronting CR 240........$4,900.00/ac. vRED ROCK 181 acres......................................................$895,000 vGONZALES 2.25 acres fronting Oil Patch Lane. Raw land includes metal shed and fencing............................................................$50,000 vGONZALES One acre fronting Oil Patch Lane with water, phone and elec. ready for hook-up....................................................$50,000 vGONZALES 7.62 acres w/access to Sarah DeWitt. Bank foreclosure, great investment (SOLD).....................................$42,000 vWAELDER Poultry Farm.
4 breeder hen houses, 50 acres, mobile home.........................................................................$1,250,000 HOMES AUTO For Sale: 1981 Chevy dually, 10’ dump bed, $1,800. 1986 Chevy dually, welding bed, $1,800. 1970 Ford gravel truck, new brakes, $1,000. 1965 Chevy 1/2 ton pickup, flat bed, $600. Call (830) 377-8814. ------------------------- Look no further... You’ll find it in the classified section of The Gonzales Cannon! FARM & RANCH Got items to sell? $$ Line ads are FREE!!! Help Wanted line ads only $5.00 each time, up to 20 words. Classified border ads at great prices! All classified display ads will be put on the website at no additional charge! For quotes & to place your ad, call Sanya today at 830-672-7100. E-mail: subscriptions@gonzalescannon.com ACREAGE COMMERCIAL 618 St. Paul, Gonzales, Texas 78629. 672 CR 447 • Waelder, TX 78959, 830-788-7777. Thursday, September 22, 2011 PETS Turn your favorite pet photo into a work of art! Artist Brenda Shannon, pastel or acrylic. Great gift idea. (512) 917-4078. -------------------------5 Cockatiels. 2 years old. Yellow and gray. $50 each. Call 830-534-5930. LIVESTOCK Needs a strong rider. Gentle, calm disposition. $850/firm. Call 361-596-4954. -------------------------Black Limousin & Black Angus Bulls. Also Heifers. Gentle. Increase your weaning weights. Delivery available. 979-26358304. LIVESTOCK CLASSIFIEDS REAL ESTATE LAND -------------------------Abundant wildlife, great hunting, pond, nice homesite. $4,500/acre. Call 713-203-2814 for information. The Gonzales Cannon WANTED 830-822-5076. Page B7 MISC. SERVICES Mobile Massage is now serving Gonzales & Luling. Specializing in therapeutic massage for pain in lower back, neck, knees, etc. Also corporate chair massage. 13 years experience. LMT Steve Turner, Lic. # MT021213. Call 830-857-0270. “Let me help getting you mobile.” -------------------------Brush Busters. Bobcat w/tree cutter attachment, land clearing, mesquite spraying, fence building, misc.
odd end jobs. Reasonable rates. Call James at 512-738-0848. -------------------------Electrical wiring, troubleshooting & repairs, new construction, additions, meter loops, ceiling fans, metal buildings, panel upgrades, etc. 830-437-5747. -------------------------Photographer - Professional, affordable, and convenient. Specializing in families, children. MISC. SERVICES LIVESTOCK -------------------------Baby Guineas. $2.00 each, your choice. Multiple colors. 830-540-4063. Leave number, will return call. -------------------------For Sale: Guinea eggs for setting. Call 830-672-7384. -------------------------For Sale: Calf table/chute with self-catch gate. $950. 830-437-5747. -------------------------For Sale: Sorrel Gelding, 10 yrs. old. Big, strong, sound ranch horse. Very good looking. REAL ESTATE IN: Outdoor Living, Fully Concealed Appliances, Bathroom Suites. OUT: Formal Living Rooms, High Ceilings, McMansions. What’s In, What’s Out. ------------------------- STORAGE SPACE WANTED Looking for a good,. -------------------------Want to buy used electric wheelchair, 5 yrs. old or approx., Jet 3 Ultra. 830-437-2232. -------------------------Wanted: Any make rifle, caliber 22-250. Call 830-857-1781. -------------------------I want to buy a used shower stall & kitchen cabinets. 830-437-5659. -------------------------WANTED: Old, broken and unwanted costume and vintage jewelry, chain necklaces/belts and loose beads. I am a crafter who loves beading and making jewelry, and can’t. -------------------------Needed: I need to rent a 2 or 3 bedroom apartment or house in the Gonzales or Luling area. Please call MISC. SERVICES Buy loose gemstones and allow us to custom design your upcoming gift. Over 1,000 cts. to choose from. Call 979-743-5840. -------------------------Hello. Need someone to fill in for an absent employee for a day or two? Maybe I can help. I’m 54, female, co-owner of an auto shop 16 yrs, Dental Asst.
3 yrs, Photographer, newspaper & aerial, weddings, etc. Great with the public & full of common sense. 24 hr. prior notice. Will be glad to drop by before hire. Laura Gift, 830-203-5180. -------------------------JCK Services. Tree shearing, brush stacking, stump treatment, small brush grubbing. Call Jeff, (830) 263-1016, or Wayne, (830) 857-3611. -------------------------Welding, fabrication and repairs. Call 830-437-5747. ------------------------- LAND For Sale: 37 acres land. North of Waelder, TX. FMR 1296. Contact info: 830-237-9227. -------------------------6+ Acres for Sale or Lease. Build to suit. End of Oil Patch Lane.’x130’ on Church St. Call 830-423-2103. -------------------------25 acres for sale. I-10 & 304 area. REAL ESTATE 511 Williams. Updated, 2BR/1BA, central A/C & heat. Insulated. Wood floors throughout, kitchen & shower hard tile. Nancy Stobaugh, Realtor, 512-297-8500. Sale or Lease. -------------------------Brick Home for Sale. 4BR/3BA, 1513 St. Michael Street, on about 1 acre. Lots of trees. 830-857-5231 or 830-857-5236. ------------------------- Clearwater Real Estate Services, 830-672-2300. Put Knowledge on Your Side. PUBLISHER’S. Call The Gonzales Cannon to place your FREE Garage Sale Ads here. 830-672-7100 or fax to FREE!! TexSCAN Week of September 18, 2011 BUSINESS OPPORTUNITIES THINK CHRISTMAS - START now! Own a Red Hot, Dollar, Dollar Plus, Mailbox or Discount Party Store from $51,900 worldwide. 100% turnkey; 1-800-518-3064. CABLE/SATELLITE AT&T U-VERSE for just $29.99/month! Save when you bundle Internet+Phone+TV and get up to $300 back! (Select plans.) Limited time, call now! 1-877-577-4394 DRIVERS $5,000 SIGN-ON Bonus! Great pay, tons of South Texas work. Frac sand hauling; must have tractor, pneumatic trailer and blower. 1-888-880-5918 DRIVER - $2,000 Sign-on bonus! Start a new career. 100% paid CDL training! No experience required. CRST Expedited. 1-800-326-2778. DRIVER - GOOD MILES!
Regional truck drivers start at 37¢ cpm with 1+ year(s) experience. Home every week. Affordable family benefits. Call 1-888-362-8608 or visit. EOE. DRIVERS - OWNER OPERATORS and fleet drivers, Texas and Oklahoma, with CDL-A. $3,000 sign-on bonus! $1.28 per mile. Return run to Texas every 6-8 days. Call 1-800-765-3952. EXPERIENCED FLATBED DRIVERS: Regional opportunities now open with plenty of freight and great pay. 1-800-277-0212. FAMILY COMPANY LOOKING for Class A flatbed drivers with 1 year experience. Should live within 30 miles of the I-20 corridor between Sweetwater and Dallas. Top pay, benefits. Home 40/52 weekends. 1-877-724-4554. TEXAS STAR EXPRESS now hiring company drivers, owner operators, lease purchase, 2012 drivers, refresher course drivers. CDL Class A required. 1-800-888-0203. TOP PAY on excellent runs! Regional runs, steady miles, frequent hometime, new equipment. CDL-A, 6 months experience required. EEOE/AAP; 1-866-322-4039. Drive4Marten.com YOU GOT THE drive, we have the direction. OTR drivers, APU equipped, Pre-Pass, EZ-pass, pets/passenger policy. Newer equipment. 100% no touch. 1-800-528-7825. FINANCIAL $500 LOAN SERVICE; no credit refused, fast and secure. Easy on the budget. Payments spread out over three months. Toll free: 1-855-626-4373. LoanHere.com JOB TRAINING PAID CDL TRAINING! No experience needed. Stevens Transport will sponsor the cost of your CDL training. Earn up to $40K first year and $70K third year. Excellent benefits! EOE, 1-800-333-8595. EDUCATION ATTEND COLLEGE ONLINE from home. Medical, business, paralegal, accounting, criminal justice. Job placement assistance, computer available, financial aid if qualified. Call 1-888-205-8920. HIGH SCHOOL DIPLOMA: graduate in 4 weeks! Free brochure! Call now! 1-866-562-3650, ext. 55. AIRLINES ARE HIRING: Train for a high paying aviation career. FAA approved program. Financial aid if qualified, job placement assistance. Call Aviation Institute of Maintenance, 1-877-523-4531. REAL ESTATE ABSOLUTELY THE BEST VIEW, Lake Medina/Bandera. 1/4 acre tract, central W/S/E, RV/motor home/house OK, only $830 down, $235 month (12.91%/10yr). Guaranteed financing; for more information call 1-830-460-8354. AFFORDABLE RESORT LIVING on Lake Fork. RV and manufactured housing OK! Guaranteed financing with 10% down. Lots starting as low as $6,900. Call Josh, 1-903-878-7265. HUNT WEST TEXAS, near Sanderson, Terrell County. Mule deer, 192.65 acres at $265/acre. Whitetail, 157.07 acres at $295/acre. Owner financed/TX Vet, 5% down. 1-210-734-4009. 10.1 ACRES, SOUTH Texas brush country, north of San Diego. Deer, hogs and quail. Private roads, locked gate; $29,500, long term owner financing. Several to choose from. 1-866-286-0199. 676 ACRES, Reeves County, 15 miles north, Pecos River frontage. Call Jack, 214-755-6224. $106 MONTH BUYS land for RV, MH or cabin. Gated entry, $690 down ($6,900/10.91%/7yr), 90 days same as cash. Guaranteed financing, 1-936-377-3235. Run Your Ad In TexSCAN! Statewide Ad: $500 (301 Newspapers, 942,418 Circulation). North Region Only: $230 (98 Newspapers, 263,811 Circulation). South Region Only: $230 (101 Newspapers, 366,726 Circulation). West Region Only: $230 (102 Newspapers, 311,881 Circulation). To Order: Call this newspaper direct, or call Texas Press Service at 1-800-749-4793 today! Deadline - Tues. at 5 p.m. Extend your advertising reach with TexSCAN, your Statewide Classified Ad Network. Place your garage sale ads FREE of charge in The Gonzales Cannon. Call or visit Sanya for details. 618 St. Paul, Gonzales, TX 78629. Ph: 830-672-7100, Fax: 830-672-7111. subscriptions@gonzalescannon.com Page B8 Belmont VFD annual fund-raiser The Belmont Volunteer Fire Dept. held its annual fund-raising barbecue and auction on Saturday.
This year’s event held an extra poignancy for many because the department has been busy battling so many wildfires this summer. As always, the food was outstanding — above, Dale DeCola checks the fall-off-the-bones chicken — the music was toe-tappin’ and the event served as a meet-and-greet for political candidates (such as Seguin Municipal Judge Kevin Kolb, bottom left, who is campaigning for the 25th District seat). Auction items included an all-terrain vehicle and what may be a county record for kolaches — one batch reportedly brought nearly $900 at auction after being sold, re-donated and sold again. In what may be becoming an annual tradition for the event, it rained — but you’re not about to hear anyone complaining. (Photos by Dave Mundy) Seguin Chevrolet, 509 W. IH 10 - Seguin, TX 78155. (830) 303-4381 - (877) 309-0314. “WE NEVER FORGET PRICE MATTERS!” SeguinChevrolet.com Facebook.com/SeguinChevy SAVE $9,000 ON 2011 TEXAS EDITIONS — 0% financing for up to 60 months PLUS $1,000 bonus cash, OR choose rebates! $17,988 +TT&L, Stk G1226. $33,988 +TT&L, Stk 112071. 2011 Texas Edition Silverado Crew Cab, Stk #11440. 2011 Texas Edition Silverado Ext Cab, Stk #11376. 2011 Chevy HHR LT, Certified, Leather, Sunroof, Stk 111941. 2010 Chevy 2500 Crew Cab, Certified, 4x4 Z71, Gooseneck - ready to work! 2008 GMC Acadia, Certified, Leather, 3rd Row Seat. ‘10 Chrysler Sebring Limited, Leather, 32k miles. ‘94 Chevy Cheyenne 1500 - Reg Cab, V6, Nice! Stk 113661............$5,888 ‘06 Ford Freestar Minivan - Tan, Stk 112712......................................$9,488 ‘07 Toyota Prius Hybrid - Stk 11324C.................................................. SOLD!!! ‘07 Mazda 6 - Stk G1202......................................................................................... SOLD!!! ‘07 Chevy Silverado LS - Reg Cab, V8, Gray, Stk G1165.......................... $14,488 ‘08 Chevy Colorado LS - Reg Cab, White, Stk G1166...............................
$14,788 ‘10 Chevy HHR LT - Certified, Stk A1205...........................................................$15,588 ’10 Ford Focus - Stk G1218.....................................................................................$15,588 ‘10 Chevy HHR LT - Certified, Stk A1225........................................................... $15,888 ‘08 Mazda 3 - Stk G1201........................................................................................... SOLD!!! ‘10 Dodge Caliber - Stk G1199........................................................................... $15,888 ‘07 Saturn Aura XR - Leather, Sunroof, Stk G1167....................................... $15,888 ‘07 Saturn Aura XE - Sunroof, Silver, Stk G1169..................................$16,388 ‘09 Toyota Corolla S - Stk G1200........................................................$16,388 ‘08 Mercury Mariner - Premium Package, Leather, Sunroof, Stk A1223.$17,588 ’11 Dodge Nitro - Maroon, Stk G1079....................................................................$18,988 ‘10 Chevy Equinox - Certified, Stk G1079....................................................... $22,488 ’08 GMC Yukon - Certified, Stk G1222............................................................... $26,888 Seguin Chevrolet is pleased to announce our 172-point initial inspection. 2 years/30,000 miles of free oil changes, multi-point inspections and tire rotation. 12 month bumper-to-bumper warranty - remainder of 5 year/100,000 mile limited powertrain warranty. Plus 24/7 roadside assistance with courtesy transportation. OnStar and XM Satellite Radio trial programs. * All prices plus TT&L. 0% financing up to 60 months on select vehicles and with approved credit, plus get $1000 bonus cash OR choose rebates. *Savings based on MSRP. Crew Cab Stk #11440 MSRP $38,971 - 1995 Pkg Svgs - 4505 Consumer Rebates - 750 USAA Disc - 1750 SC Disc. Sales Price $29,971 plus TT&L.
Ext Cab Stk #11376 MSRP $33,918 - $1995 Pkg Svgs - 4505 Consumer Rebates - 750 USAA Disc - 1750 SC Disc. Sales Price $24,918. Must be a member of USAA to receive; if not, savings reduced by $750. Certain restrictions apply - see store for details. Stk G1190 $26,988 +TT&L. $17,388 +TT&L. Sports page sponsored by: Holiday Finance Corporation. We have the loan for you! Serving Texas for over 40 years! 830-672-6556 • 1-888-562-6588 • 506 St. Paul, Gonzales, TX 78629. Keep up with all the local news at our web site: gonzalescannon.com The contenders, pretenders as district begins With the majority of our area teams wrapping up the non-district portion of their schedules, I figured I would take the opportunity to assess each team as they begin district play. There are always two trains of thought in regards to creating a non-district slate of games. Some coaches like to load up on lesser opponents in order to allow their teams the chance to enter district play with a head full of steam. Others like to fill out this part of the schedule with as many challenging opponents as possible so that they will be battle-ready for league play. Bulldogs challenge Apaches By MARK LUBE While every district game is a big game, Friday’s Homecoming game against Yoakum may be an especially big one for the Apaches and the Bulldogs alike. The Apaches, who sit at 4-0 on the season, welcome in a 3-1 Yoakum team at 7:30 p.m. at Apache Field. The game is the 28-3A opener for both teams. “Yoakum is a good football team,” Gonzales head coach Ricky Lock said. “This game will be two good ball clubs playing each other,” Bulldog head coach Brent Kornegay said. “The team that makes the least amount of mistakes will come out the winner.” Gonzales will have its usual candidates to carry the ball in Cecil Johnson, Donald Cartwright and Jon Anthony Casares, with Landon Lock, Cory Espinosa and DJ Gonzales getting some carries as well.
Quarterback Matt Hillman will have Casares, Cartwright, Espinosa and tight end Cameron Smith as targets for his passes when the Apaches drop back to pass. Friday Night Lights Football roundup, See Pages C3-C4. Sports, Section C. sportseditor@gonzalescannon.com Lock said Gonzales will build on what they accomplished at Columbus. “We will need to score when we have opportunities, get first downs, move the chains and protect the football,” he said. Kornegay said the Yoakum defense will have to keep the Apache running game bottled up, especially watching Johnson, Cartwright and Casares. “Our defense will have to play hard and let the chips land where they fall,” he said. Lock said Yoakum makes a lot of rotations on defense. “They have a lot of kids who play on defense,” he said. “They have forced a lot of turnovers and are much improved.” Leading tacklers for the ‘Dogs are Blake McCracken, Devante Price, Timmy Blakeney, Rico APACHES, Page C8 Gridiron Gab Cedric Iglehart, Regional Editor Gonzales QB honored; Apaches rank No. 24 Gonzales quarterback Matt Hillman was nominated for the Marine Corps Elite Warrior of the Week for the week of Sept. 5, the Marine Corps Recruiting Station San Antonio announced. Hillman went 7-for-7 for 171 yards and three touchdown passes in the Apaches’ 45-7 win over Austin Lanier on Sept. 8. “I feel honored to be nominated for this award,” Hillman said. “This is a big state and there are a lot of players. I feel good to be nominated out of many, many players.” Hillman is a junior at Gonzales High School and is the son of Wayne and Tammy Hillman. • The Apache foot- Neither strategy is wrong or right, because I’ve seen them both work in terms of getting a team into the playoffs. The bottom line is good coaches always use the non-district games as opportunities to correct errors in execution, finalize rotations and develop depth.
This column won’t include Flatonia, Hallettsville Sacred Heart, Luling, Shiner and Shiner St. Paul because they don’t begin district play this week. However, I will do the same for these schools in the immediate future. So without further ado, I offer up my list of predictions and possibilities of probable playoff positions for area schools: Cuero For the first time in a long while, the Gobblers are off to an 0-4 start. Even though they have had to play without superback Trent Jackson, they also still had to face a killer non-district schedule with perennial powerhouses Wimberley and Liberty Hill, and currently undefeated Bellville and Port Lavaca Calhoun. With such formidable games under their belts, it would be easy to rationalize their record as a matter of just playing better foes. But I’m really concerned about the ability of their skill players to get the job done. Cuero has only three offensive touchdowns all year and is averaging just 6.5 points per contest. There are some good experienced players manning the trenches in Caleb Harvey, Tommy Longoria, Randy Sierra and Javon Thomas, so it would be foolish to write off the Gobblers in district play. But at the same time, with district looming, the clock is running out for Cuero to get back to being Cuero. The Gobblers are too proud to lay down for anyone, but it’s going to be hard for them to outscore more talented offenses like Sam Houston, Yoakum and Gonzales. It’s quite likely the winner of their game at Yoakum on Oct. 14 will get that coveted third spot for postseason play, but right now I’d give that nod to the Bulldogs. Either way, it’s going to be a down year by Cuero’s lofty standards. Predicted final record: 3-7 overall, 3-3 district. Gonzales For the second year in a row, the Apaches have steam-rolled their way to a perfect non-district 4-0 mark.
Unlike last year though, Gonzales has pushed their way to perfection with an overpowering running game behind a veteran offensive line highlighted by Cody Jurek, Zac Perez-Clack, Donnie Grauke, J.T. Miller and Damien Airhart. The skill players aren’t as explosive as they were a year ago, but there’s still plenty of firepower in the Apache arsenal. Cecil Johnson goes into district Smith with a catch: Gonzales receiver Cameron Smith covers up the ball as Columbus defenders close in during last week’s game. (Photo by Mark Lube) ball team has also made their way into the Dave Campbell’s Texas Football and TexasFootball.com rankings for the week of Sept. 19. The Apaches were ranked No. 24 in the Class 3A rankings, above Lubbock Estacado. Fellow 28-3A school San Antonio Sam Houston is occupying No. 21. Matt Hillman Road success buoys Luling By CEDRIC IGLEHART region@gonzalescannon.com Football Roundup By MARK LUBE and CEDRIC IGLEHART sportseditor@gonzalescannon.com LULING — All throughout the year, Luling head coach Michael Waldie has stressed the importance of winning road games to his team. It seems as if his preachings registered, because the Eagles are coming off their second straight road win, pounding Karnes City last week 41-19. “We have three tough road games in district,” said Waldie. “The only way for us to be a factor in district is to win two of the three. If you don’t do that, then you’re going to be asking for help down the road. We set the same goal for our non-district games and we were able to do that. That was the first team goal we accomplished and that made me excited.” Perhaps almost as fulfilling for Waldie as the actual victory was the caliber of opponent it came against. “Karnes City is coming off a nine-win season last year and they’ve been to the playoffs eight of the last 10 years,” he said. “They’re the type of program that we need to start beating if we’re going to take the next step.
It was exciting for us and good for the progression of our program.” Luling has another such development opportunity Friday when they face fourth-year program Fischer Canyon Lake. The Hawks (4-0) are ranked 22nd in Class 3A by MaxPreps.com and have two big wins in their last two games, beating Sealy (38-34) and Travis (52-21). “They’re very well-coached,” Waldie said. “Coach (Matt) Monzingo has done a fantastic job starting that program up and they’ve come a long way. They’re solid across the board, they’ve got good depth, and good players. They will be another tremendous challenge for us.” Canyon Lake is explosive on offense and their most dangerous weapon is running back Zach Henshaw, who has run for 937 yards and 10 touchdowns. He had 254 yards and three scores in last week’s win and will garner a lot of attention from the Eagle defense, which hopes to contain him. “Honestly, it’s going to come down to tackling and making good fits with our defense,” said Waldie. “We’ve seen good backs all year. Gonzales had three good backs, Navarro had a tailback we thought was excellent and so did Karnes City. In my opinion, in high school football you’re going to face an excellent running back in 90 percent of your weeks. Henshaw is special, but we don’t put him in a different category than anybody else.” The Luling offense is starting to come around and is now averaging 237 yards per game. The Eagles have scored over 40 points in their last two games. EAGLES, Page C8 ‘Streak’ over, St. Paul gets back to business The streak is over. The St. Paul Cardinals had their 17-game run come to an end last week against Flatonia, 21-17. Now, the Cardinals will get back to business and get ready for Bryan St. Joseph in a Saturday contest at 7 p.m. at Comanche Stadium. St. Paul will need rushing yards from its usual candidates of Brett Hodges, Martin Kennedy and Adam Hollenbach, and a stellar passing game from quarterback Dakota Kresta to targets such as Justin Natal, Mitchell McElroy and Cole Hybner. Bryan St. Joseph has hit a rough patch early this season, with losses in all of its first four games. Defensive players the Cardinals will be looking out for are end Spencer Grey, linebacker Lucas Lipscomb and end Chris Frieda. IGLEHART, Page C2 Nixon-Smiley at Dilley The Mustangs will look to eradicate a two-game losing streak. But it will not be easy as they travel to Dilley for a 7:30 p.m. contest on Friday. The game will be the District 14-2A DII opener for Nixon-Smiley. “It will be a tough game for us,” Mustang head coach Carlton McKinney said. “We have had a difficult two weeks, but I think our kids will be up for this game.” He said the team is searching for another running back to step up and fill the hole left by the injury to Joe Medina. Dilley runs the spread offense with equal emphasis on the run and pass. They also have a sizeable offensive line. “They will run a lot of zone reads with quarterback Will Urban, who is their leading rusher,” McKinney said. “Our defense will need to contain him and use our quickness to offset their big line.” The offensive line will be led by tackle Riley Matthews, who is an all-district pick from last year. Defensively, the Wolves run a split defense. McKinney said they will load up to stifle ROUNDUP, Page C8 Page C2
Waelder was able to pull back a couple of points on some Flatonia miscues. Flatonia went ahead 17-4 as Waelder had trouble playing the serves of Leanna Dunk. The Gonzales Cannon Thursday, September 22, 2011 sportseditor@gonzalescannon.com WAELDER — Waelder volleyball coach Marisa Clement wants her players to continue building confidence. The Lady Wildcats hosted the Flatonia Lady Bulldogs Tuesday evening at Waelder ISD Gymnasium for the District 28-1ADII opener for both schools. Flatonia was successful with a 3-0 (25-8, 25-8, 2510) sweep. “I know our girls can play better than they did,” Clement said. “It was our first district game and now they will be getting used to seeing good competition. It was also the first time I have coached against Flatonia and now see how good they are.” Clement said another thing that will help the Lady ‘Cats is for them to put into games what they have learned and done in practice. Volleyball Roundup From coaches’ reports sportseditor@gonzalescannon.com Behind the service of Matilda Vela, the Lady ‘Cats pulled to within 17-7 before Flatonia went on an 8-1 run to close out the opening set. In the second set, early kills by Abi Scacherl, Abigail Rodriguez and Dunk helped the Lady ‘Dogs out to a 17-3 advantage and eventually cruised to the second-set win with the Lady ‘Cats registering a couple of points. In the third set, Waelder took an early 4-1 lead as Hailey Rincon punished Flatonia with her serves, including an ace. A kill from Courtney Mica gave the ‘Lady Dogs a 6-5 advantage which shot up to 15-7 off Waelder miscues and a pair of aces from Dunk. The Lady ‘Cats eventually got within 18-10 before a 7-0 run by Flatonia with a trio of aces from Bruns and a kill from Pavlicek closed out the Waelder’s Hailey Rincon (4) passes the ball up to teammate Alex Benitez (21) set. during Tuesday’s district match. 
Passing it on (Photo by Mark Lube)

Yoakum sweeps Cuero; Shiner slaps Goliad

Yoakum swept Cuero (25-18, 25-18, 25-22) on Tuesday to improve to 2-1 in district. For the Lady Bulldogs, Leslie Seidenberger had 13 kills; Ryan Hagan and Callie Witte had 13 digs; Camille Desmet had 35 assists; Ashtyn Henkes had six blocks and Witte had two aces. Abby Sheppard had six kills and 15 digs for Cuero; Tiffani Shellenbarger had nine assists; Emily Valenta had three blocks and Brandi Phillips, Ashley Grahmann, Sheppard and Valenta had one ace each. Yoakum won the junior varsity match 25-19 and 25-21, and was the winner in the freshman game, 25-19, 10-25, 25-13.
The Shiner Lady Comanches won a non-district match on Friday against Goliad (25-22, 25-23, 29-27). Ryah Michalec had 12 points, three aces and 18 digs; Kaylyn Benes had 12 points; LaNeisha Hunt had 12 kills; Cassie Stafford had 25 assists and Emmalie Berkvosky had four blocks. Shiner continued district play Tuesday with a 3-0 (25-10, 25-11, 25-10) win over Prairie Lea. Benes had 16 points and six aces; Hunt had nine kills and 11 digs and Stafford had 21 assists.
• The St. Paul Lady Cardinals defeated Victoria Faith Academy on Sept. 13, 25-18, 25-22, 23-25, 25-20, to improve to 4-0 in district and 11-4 overall. Marrisa Ynclan had 23 kills, six aces and five blocks; Kourtney Knesek had 37 assists and Kali Kocian had 13 digs. In a 3-1 win over Flatonia, Ynclan had 13 kills, three blocks, one ace and 10 digs; Knesek had 22 assists and Alexa Schaefer had one ace.
• Sacred Heart earned its fifth district win by sweeping Austin San Juan Catholic School 25-12, 25-6, 25-10 on Thursday. Adrienne Klimitchek had five kills; Robyn Pavlicek had five kills; Shelby McElroy had 15 assists; Jenna Brown had nine aces and Kelsie Buchanan had five digs. “San Juan lost a lot of seniors and was a young team,” head coach Wanda Orsak said. “My goal for us was to play at a high level and finish this week with a 5-1 district record.” The Indianettes swept San Marcos Baptist Academy 25-11, 25-5, 25-19 on Monday. McElroy had 16 aces and 14 assists; Klimitchek had six kills and three blocks and Pavlicek had six kills. “We played tough at times in this match and at other times we were content to sit back and wait for their mistakes,” Orsak said. “We have to learn to earn our points and not wait for the other team to give us a point.” Sacred Heart junior varsity won 25-9, 25-17.
• The Hallettsville Lady Brahmas defeated Brazosport 3-0 (26-24, 25-18, 25-13) on Tuesday to improve their district record to 4-1. Cheyenne Dowdy had 11 kills; Ali Patek had four aces; Cassidy Targac had four blocks; Lauren Jones had 26 assists and Katie Wagner had 12 digs. Brazosport won the JV match, 25-12, 25-20.
• The Flatonia Lady Bulldogs split games against Prairie Lea and Shiner St. Paul last week. On Sept. 13, Flatonia defeated Prairie Lea 3-0 (25-8, 25-11, 25-10) for a district win. Leanna Dunk had nine kills; Alex Bruns had 19 assists; Courtney Mica had three digs and Kaci Pavlicek had seven aces. On Friday, Shiner St. Paul won 3-1 (25-22, 15-25, 25-19, 25-18). Dunk had 13 kills and four blocks; Bruns had 27 assists; Mica had 17 digs, and Abigail Rodriguez and Abigail Schacherl each scored an ace.
• Gonzales fell to Pleasanton 3-1 to fall to 1-2 in 28-3A play.
• Nixon-Smiley fell to Universal City-Randolph in three sets (19-25, 17-25, 17-25) to fall to 1-2 in district. Kelby Henderson had eight kills; Jessica Flores had 10 assists; Jennifer Flores had eight digs and D’Laine Palacio had two aces.

Mustangs, Eagles fare well in UTSA cross country
sportseditor@gonzalescannon.com

Several area teams took part in the Ricardo Romo meet Saturday at University of Texas-San Antonio.
The Nixon-Smiley and Luling boys both placed ninth in the Boys 2A 5K and the Gold Boys 5K with 191 and 278 points, respectively. The Eagles top finisher was Jose Campos with a No. 25 finish in 16 minutes, 27.0 seconds; Danny Castillo was No. 39 in 16:35.7; Michael Barnett was No. 41 in 16:39.9; Brian Guerrero was No. 106 in 17:29.8; Will Frazier was No. 108 with a time of 17:32; Fabian Guerrero crossed the line at 17:43.5 for No. 132 and Arturo Rodriguez was No. 205 in 19:07.4.
For the Mustangs, Baltizer Tovar finished No. 10 in 18:08.5; Victor Coronado was No. 12 in 18:13.3; Robbie Mejia was No. 31 in 19:19.6; Luis F. Vasquez was No. 74 in 21:26.4 and Raul Tovar followed him in 21:27.2; Jose Vasquez was No. 93 in 22:35.2; Cain Perales was No. 95 in 22:37.2 and Luis Vasquez was No. 105 in 23:06.8.
Luling’s Lady Eagles came in ninth in the Golden Girls 5K with 242 points, paced by Carley Glass’ seventh-place finish in 19:24.3. Maira Salinas placed No. 21 in 20:32.1; Kristaly Munoz was No. 55 in 21:35.7; Hanna Clark was No. 81 in 22:36 and Maria Castillo was No. 92 in 24:21.
The Lady Apaches finished the Girls 3A two-mile race in No. 20 with 626 points. Contessa Baird was No. 55 in 13:42.2; Kimberly DeLeon was No. 112 in 14:31.4; Dora Rodriguez was No. 149 in 15:09.2; Alejandra DeLeon was No. 171 in 15:30.2; Katy Guerra was No. 189 in 15:52.1; Brittany Pakebusch placed 194 in 16:02.0 and Juana Sanchez was No. 201 in 16:32.6. Alexander Villafranca of Cuero was No. 46 in 13:32.6 while teammate Luzy Flipse was No. 69 in 13:57.9 and Sarah Southern was No. 76 in 14:01.0.
In the Boys 3A 5K, Cuero runner Jordan Venor was No. 113 in 19:51.1.
The Yoakum girls cross country team edged out Lockhart by nine points, 27-36, to take first in the Moulton cross country meet on Sept. 10 in the Girls 3A-5A Varsity race. The Lady Apaches were third with 65 points and Columbus was fourth with 77.
Host Bobkatz held a 15-point cushion for the title in the Boys 1A-2A Varsity Division, defeating second-place Industrial, 38-53. Nixon-Smiley came in third at 68 followed by Victoria St. Joseph (105), Stockdale (131), Fayetteville (156), Ganado (171) and St. Paul (195).
The Luling Eagles squeezed by Lockhart, 29-31, for first place in the Boys 3A-5A Race. Gonzales came in third (99) and Columbus in fourth (109). Lockhart won the Boys Junior Varsity race with 24 points with Yoakum second with 58, and Industrial won the JV Girls with 29 points; Yoakum was second with 58 and Moulton came in fourth with 95.

IGLEHART: A few teams with post-season aspirations
Continued from page C1

play as the league’s top rusher with 634 yards and nine scores, Jon Anthony Casares has three touchdowns on just eight catches, and Matt Hillman is emerging as an effective field general after missing most of last year due to injury. The Apaches missed the playoffs in 2010 because of a close loss at Sam Houston and a surprising loss at La Vernia, where they may have been caught looking ahead to the season finale with Cuero. Don’t expect to see such a loss of focus out of Gonzales this season, but do expect revenge to be a motivating factor when they host Sam Houston in two weeks in what I believe will be a game where the winner goes on to claim the district championship. Predicted final record: 10-0 overall, 6-0 district.
Hallettsville
What do we know about the Brahmas beyond their 1-3 record? Well, for starters, this team is not as bad as their mark suggests. Coach Tommy Psencik has his program heading in the right direction, but it was a tall order expecting them to be any better than .500 at this point after opening the season at Ganado and hosting Refugio and 3A Yoakum. The Brahmas can move the football with Carson Schindler, Braden Kahanek, Teidrick Smith and Trevor McGee. The problem is they’ve had difficulty finding the end zone this year, averaging 14 points per game. That’s not going to get it done in a district that includes Edna, Hempstead and Rice Consolidated. Predicted final record: 3-7 overall, 2-4 district.
Nixon-Smiley
The Mustangs got off to a promising start with wins at Flatonia and over Bloomington, but have regressed in recent weeks with back-to-back losses to Yorktown and Sacred Heart. It will be interesting to see how Nixon-Smiley will divvy up the carries while former 1,000-yard rusher Joe Medina sits out with a leg injury, but it appears sophomore Jared Van Auken is in line to get his fair share along with Alex Hernandez, Miguel Hernandez and Jaime Moreno. The Mustangs will need to return to the form that saw them field the district’s second-best rushing attack in 2010 in order to compete for a postseason slot. Karnes City is in a rebuilding year, Stockdale is scrappy but not scary, and San Antonio Brooks Academy doesn’t appear to be much more than an also-ran. If the Mustangs are going to win district they have to come out virtually unscathed after their three opening games with Dilley, Poth and Three Rivers, who have a combined record of 11-1. That might prove to be a knot that’s too tough to untangle, but it’s not beyond reason to think they can knock out at least one of the three and convert the district race into a mad scramble. Predicted final record: 5-5 overall, 3-3 district.
Yoakum
The Bulldogs are the pleasant surprise of this season, but I’m trying to determine whether they are the real deal or just overachievers who are doing it with smoke and mirrors. Off to a 3-1 start, Yoakum has scored impressive wins against Edna, at Columbus and at Hallettsville. The only loss was a 6-0 defensive struggle with LaGrange.
From a statistical standpoint the Bulldogs don’t look like anything special, but Andrew Jimenez and Blake McCracken are both averaging over seven yards per carry. Their defense is ranked in the middle among league leaders, but they have proved their mettle on the field. It should be interesting to see how well they will perform against high-powered offenses like Gonzales and Sam Houston, but at any rate, they seem to have enough talent to carry them past Cuero, La Vernia, Pleasanton and Poteet. Pencil them in for the playoffs. Predicted final record: 7-3 overall, 4-2 district.
Lights Out: By CEDRIC IGLEHART, region@gonzalescannon.com

Friday Night Lights
The Gonzales Cannon Thursday, September 22, 2011 Page C3
Gonzales gets the plays when it needs them, improves mark to 4-0
Apaches erupt to thwart Columbus
By MARK LUBE
sportseditor@gonzalescannon.com

COLUMBUS — Gonzales may not have passed the test with flying colors but still took care of business Friday night at Columbus Memorial Stadium against the Cardinals, 41-27.
“I’d say the effort was there and we found ways to get into the end zone,” Gonzales head coach Ricky Lock said. He said the passing game went better than expected, the defense played well, and the offense continued running the ball effectively behind a very hard-working line. “You throw the ball well and it opens up the running game and vice versa,” he said.
Lock said one of the standouts on offense was tight end Cameron Smith, who hauled in two catches for 23 yards and caught a two-point conversion pass. According to Lock, Smith had not competed in football in nearly three years. “He grew up a lot tonight,” Lock said. “Smith blocked well.”
Lock also mentioned the second-half rushing performance of Cecil Johnson, the overall performance of Jon Anthony Casares and the receiving of Donald Cartwright, who ran the ball as well, along with the blocking of Landon Lock and Hunter Noack. On defense, Lock mentioned Casares and Lopez as having good performances.
Casares aided the defensive effort with two picks, including picking off Darius Stevens on a wide-receiver reverse pass in the second quarter. “The coaches had been on me all week in practice about that trick play,” he said. “I read it right and got it at the right time.”
Lopez said the defense did well but had a couple of mistakes. “We thought it would be a really tough job to contain their speed but it was not as bad as we thought,” he said.
“The secondary started out slow. We got scored on the first (Columbus offensive) play but overall we played a heck of a game,” Casares said.
Gonzales moved the ball on the opening possession but penalties stalled it. Columbus muffed the punt and the Apaches had a first down at the Cardinal 32. Matt Hillman went to Cartwright for the score with the extra point no good. It took Columbus one play to answer — a 55-yard pass from starting quarterback Seth Vickers to Stevens. The point after was blocked and the score was tied 6-6.
Hillman was picked off on the Apaches’ next two possessions. But Columbus made miscues after getting those turnovers — failing to convert a fake-punt play and having a field goal blocked.
Casares picking off Stevens’ reverse pass with 1:31 left in the half set up an Apache touchdown drive starting at the Columbus 35. Hillman converted a third down with a 9-yard pass to Smith and later found Espinosa in the corner of the end zone, with Espinosa barely managing to secure the ball, for the 13-6 Gonzales lead at the break.
Columbus returned the opening kickoff of the second half for a touchdown and Gonzales responded with a 67-yard drive, with Cartwright gaining 45 yards on a wildcat direct snap and Lopez scoring from the 1 to give Gonzales a 21-12 lead.
The Cardinals drove 57 yards in just a couple of minutes and Stevens hauled in a 10-yard score to cut the Apache lead to 21-19.
The Apaches then ate up the remaining time in the third with a 67-yard march, relying on the running game and finishing off the drive with a Cartwright 11-yard touchdown catch. Johnson scored twice, on a 27-yard burst and a 7-yard scamper, for a 41-19 Gonzales lead.
With 2:29 left in the game, Stevens took over at quarterback. On a third and six from the Apache 42, Stevens took it up the middle for 41 yards and scored two plays later. His two-point conversion brought the game to within 41-27. The onside kick failed for Columbus and Gonzales closed out its non-district schedule.
Lock said the Cardinals were the toughest opponent the Tribe has faced in its four non-district contests. “Once we got adjusted to the speed, we were okay,” he said. Last year, Gonzales trailed the Cardinals 26-6 at halftime and rallied in the second half for the win.
He said Columbus had a lot of big-play potential. “They have a couple of guys who can really play football,” Lock said. “They will either win district or come in second.”
“We went 7-3 last year and I felt like there was a lot of room for improvement,” Lock said. “You cannot win every game by a hundred. We just scored 41 on the best football team we have faced all year.”
The Apaches open 28-3A play Friday by hosting their district rivals, the Yoakum Bulldogs. Yoakum is 3-1 for the year, including upsetting Edna 21-13 in its most recent outing.

Turning the corner: Gonzales running back Cecil Johnson (12) picks up a block from teammate Landon Lock (23) as he turns the corner during first-half action Friday. (Photo by Mark Lube)

Gonzales 41, Columbus 27
Gonzales 6 7 8 20—41
Columbus 6 0 13 8—27
G-Donald Cartwright 32 pass from Matt Hillman (pass failed)
C-Darius Stevens 55 pass from Seth Vickers (kick blocked)
G-Cory Espinosa 22 pass from Hillman (Hillman kick)
C-Royce Caldwell 80 kickoff return (kick failed)
G-Zack Lopez 1 run (Cameron Smith pass from Hillman)
C-Stevens 10 pass from Vickers (Jacob Christen kick)
G-Cartwright 11 pass from Hillman (pass failed)
G-Cecil Johnson 27 run (Hillman kick)
G-Johnson 7 run (Hillman kick)
C-Stevens 1 run (Stevens run)
Team stats (Gonzales, Columbus): First downs 18, 8; Rushes-yards 45-207, 28-108; Passing 14-21-2, 5-10-2; Passing yards 172, 103; Punts-average 2-32, 1-33; Fumbles-lost 2-0, 3-1; Penalties-yards 8-53, 3-45.
Individual stats
Rushing — Gonzales: Cecil Johnson 21-112, Donald Cartwright 13-91, Jon Anthony Casares 3-11, Landon Lock 1-6, Zack Lopez 3-2, Matt Hillman 4-(-15). Columbus: Darius Stevens 4-42, Royce Caldwell 13-37, Taylor Long 6-26, Kyle Appelt 2-4, Phillip Leyendecker 1-3, Seth Vickers 2-(-4).
Passing — Gonzales: Hillman 14-21-2, 172 yards. Columbus: Vickers 5-8-1, 103; Stevens 0-2-1, 0.
Receiving — Gonzales: Corey Espinosa 6-70, Cartwright 4-52, Cameron Smith 2-23, Casares 1-26, Hillman 1-1. Columbus: Stevens 4-99, Laird Toliver 1-4.

Oft-maligned defense helps ‘Dogs end St. Paul win streak

Flatonia’s special teams gave the Bulldogs a jump start in the second quarter. On fourth and six from the Cardinal 46, Miguel Grifaldo blocked a punt and it was recovered by Aaron Manzano at the St. Paul 20. Alternating runs by Griffin and Mica resulted in a seven-yard Mica scoring run and Flatonia was on the board for the first time. Will Bruns made the extra point kick to cut the lead to 14-7 at the 8:33 mark of the second quarter.
St. Paul turned the ball over on downs on their next possession and Flatonia responded by mounting a 12-play drive that went all the way down to the Cardinal 14. However, their scoring attempt was frittered away when Colby Mica fumbled a quarterback sneak on fourth and one, and Brett Hodges recovered it at the Cardinal 18.
The Cardinals then ran a beautifully executed two-minute drive, highlighted by a 26-yard strike from Kresta to Hodges, and Hybner booted a 25-yard field goal with one second left to allow St. Paul to take a 17-7 advantage into the intermission.
The Bulldogs went three and out on the opening possession of the second half, but caught another break when St. Paul was facing a fourth and five on their first possession of the third quarter. A low kick on the punt ricocheted off a Flatonia player and was recovered by Christihan Rodriguez at the Cardinal 35. On the third play of the drive, Colby Mica scampered 45 yards for an apparent score but it was called back due to a holding penalty. Three plays later, they turned the ball over on downs after an incomplete pass.
The Cardinals went three and out, but the punt attempt by Martin Kennedy went awry when he was pressured and tried to run for the first down. He lost five yards on the play and Flatonia took over at the Cardinal 25. Three plays later, Colby Mica snuck it in from the one and Bruns made another extra point kick to narrow the lead to 17-14 midway through the third quarter.
The Bulldogs forced another three and out, and after eight straight running plays Mitchell Mica found the end zone from four yards out. Bruns’ kick made it 21-17 early in the final quarter.
The Cardinals went on to run seven plays before turning the ball over on downs again. Eight plays later Flatonia was preparing to punt on fourth and 12, but St. Paul was flagged for a roughing-the-kicker penalty that kept the drive alive. They drove the ball down to the Cardinal 16 before failing to convert on a fourth and three run. St. Paul benefitted from a pass interference call on first down to put the ball at their 29. After two incomplete passes, Kresta was sacked on third down by Manzano with 1:39 left to play. Kresta tried to connect with Kennedy on fourth and long but Griffin had tight coverage and the ball went back to Flatonia, who ran out the rest of the clock with three straight kneel-downs.
It was supposed to be a night of celebration in Shiner, but instead the theme centered around redemption. The much-maligned Flatonia defense underwent a halftime rejuvenation to pitch a second-half shutout, overcome a 10-point deficit and spoil Homecoming for St. Paul with a 21-17 win. After giving up 830 yards in their first three games, the Bulldogs (2-2) allowed 261 total yards to St. Paul and held the Cardinals to just two first downs in the game’s final two quarters.
“We didn’t play very well in the first half,” said Flatonia head coach Chris Freytag. “At halftime there was some doubt in the schemes we were running, but we decided to stick with it and just play better. Defensively we played lights out in the second half.”
“The bottom line is our defense is getting better. We tackled terribly in the first half, but we really improved in the second half. St. Paul is a good football team and Coach Johnston does a great job, but our kids caught on fire when we needed to. We needed this win in the worst way.”
If the play of the Bulldog defense was the game’s top story, then a close second was the play of their offensive line. Without the services of last year’s leading rusher Andres Melendez, Flatonia ran for 232 yards behind the gutsy efforts of sophomores Mitchell Mica and Dalton Griffin. Mica ran 21 times for a game-high 109 yards and two touchdowns, while Griffin contributed 48 yards.
“Mitchell had been fumbling the ball and we weren’t sure if we were going to give him the ball tonight,” said Freytag. “He played the game of his life tonight. We got him back on track and that makes us a much better ball club. Their skill guys were as good as our skill guys, but up front we dominated them and that was the difference in the game.”
The Cardinals’ passing attack from the spread was lacking. Plagued by spats of inaccuracy all night, Dakota Kresta completed only 12 of his 32 passes for 138 yards. There were also several drops by St. Paul receivers, including three by the sure-handed Justin Natal, who also dropped one in the end zone.
“We could not complete passes tonight,” said St. Paul head coach Paul Johnston. “We missed a lot of open receivers deep. Flatonia did a good job of taking some stuff away from us, but we just didn’t execute. You’ve got to give credit to Flatonia, but by the same token we beat ourselves with the mental mistakes. I made some bad calls and mistakes that I have to correct, so you can put a lot of this on me.”
St. Paul (3-1) had opened the game hitting on all cylinders. The Cardinals marched down the field with an 11-play drive that culminated in a one-yard plunge by Adam Hollenbach. Cole Hybner’s extra point kick was good and St. Paul held the early 7-0 lead. Flatonia tried to answer on the ensuing drive, but two costly penalties thwarted their scoring attempt and they were forced to boot a 44-yard punt that gave the Cardinals possession at their own 37. St. Paul wasted no time getting back on the scoreboard, as Hollenbach hauled in a screen pass on the drive’s fifth play and took it 37 yards for his second score of the game. Hybner’s kick was good and the Cardinals were up 14-0 midway through the opening frame. Flatonia punted away again on its next possession before the blocked punt got the Bulldogs on the board.

Flatonia 21, St. Paul 17
Score By Quarters
Flatonia 0 7 7 7—21
St. Paul 14 3 0 0—17
Scoring Summary
S - Adam Hollenbach 1 run (Cole Hybner kick)
S - Hollenbach 37 pass from Dakota Kresta (Hybner kick)
F - Mitchell Mica 7 run (Will Bruns kick)
S - Hybner 25 field goal
F - Colby Mica 1 run (Bruns kick)
F - M. Mica 4 run (Bruns kick)
Team Statistics (Flatonia, St. Paul): First downs 13, 12; Rushes-yds 54-232, 26-123; Passing yds 0, 138; Passes 0-10-0, 12-33-0; Punts-avg 3-38, 1-23; Penalties-yds 6-60, 2-29; Fumbles-lost 1-1, 1-0.
Individual Statistics
RUSHING - Flatonia: Mitchell Mica 21-109, Colby Mica 10-51, Dalton Griffin 19-48, Zane Ponder 2-18, Daniel Flores 2-6. St. Paul: Martin Kennedy 5-47, Adam Hollenbach 7-39, Dakota Kresta 11-30, Justin Natal 1-6, Brett Hodges 2-1.
PASSING - Flatonia: Colby Mica 0-10-0. St. Paul: Dakota Kresta 12-32-0, 138 yards.

Friday Night Lights
The Gonzales Cannon Thursday, September 22, 2011 Page C4
Big plays help Indians hurdle Mustangs
By DAVE MUNDY
manager@gonzalescannon.com

HALLETTSVILLE — There’s something to be said for controlling the ball and running the clock in football — but when the opportunities present themselves for a big play, well, never pass up the chance.
The Sacred Heart Indians used three big-play strikes to open up a 21-0 halftime lead, then got a big kick return for a touchdown to take the wind out of Nixon-Smiley’s sails en route to a 35-20 non-district victory Friday at Brahma Stadium.
“We knew coming in we might have to throw the ball a little, and I was very pleased at the way we threw the ball in the first half,” said Sacred Heart head coach Pat Henke. His quarterback, Jared Krischke, hit five of six passes for 119 yards and two scores in the first half and the Indians got 85 more yards on Matt Holub’s quick-opener rumble as they opened a 21-0 lead.
The biggest play for the 4-0 Indians, however, was yet to come. The Mustangs got their groove back to start the second half and launched two long scoring drives to make it a 21-12 game, but on the kickoff after the second score, Sacred Heart’s Sterling Hrncir slipped through two arm tackles and found a seam around the left side to go 83 yards for a back-breaker score.
“I thought that kick return was the big play of the game,” Henke said. “They’d gotten some momentum.”
The Indians, known for being a grind-it-out club, went to the air to sail to the first-half lead. On Sacred Heart’s second play from scrimmage, Krischke laid a pass out for Hrncir in the flat and the speedy senior stepped out of traffic and turned on the burners to race 63 yards for a score.
Both defenses kept their opponents’ running games in check through most of the first half, with Holub’s run accounting for nearly half the Indians’ total for the game. It came midway through the second period following a Mustang punt, and the 200-pound fullback turned a quick opener into an 85-yard ramble for a 14-0 lead.
An interception by Leightin Pilat put the Indians back in business three plays later at the Nixon-Smiley 38-yard line. Sacred Heart took seven plays to go 17 yards, surviving two fumbles, before Krischke found Cade Brewer open on the sidelines and threaded a pass between two defenders for a 21-yard scoring play.
The Mustangs weren’t quite ready to call it a night, however. Nixon-Smiley took the second-half kickoff and promptly marched 69 yards in a time-consuming 15 plays, with fullback Jared Van Auken pulling the lion’s share of the work on the march and Jaime Moreno converting a key fourth down with a 14-yard scramble. Van Auken punched it in from a yard away to cut the Indians’ lead to 21-6.
Following an exchange of punts, the Indians looked to be on the move again, but Alex Hernandez stepped in front of a Krischke pass to put the Mustangs back in business. Nixon-Smiley promptly moved 48 yards in nine plays, with Hernandez slashing in from three yards out to make it a 21-12 ball game.
Hrncir’s kick return short-circuited the comeback moments later, however, and the Indians later mounted another 55-yard scoring drive that ended with a 7-yard TD run by Hrncir.
The Mustangs still weren’t finished, however. N-S followed the Hrncir run with a 70-yard, 10-play drive which ended with Moreno sneaking over from a yard out as time expired for a “So what?” touchdown.

In the open: Sacred Heart’s Sterling Hrncir hurdles Nixon-Smiley defender Miguel Hernandez during first-half action Friday. (Photo by Dave Mundy)

Sacred Heart 35, Nixon-Smiley 20
Nixon-Smiley 0 0 6 15 -- 20
Sacred Heart 6 15 7 7 -- 35
Scoring Plays
SH--Sterling Hrncir 63 pass from Jared Krischke (pass failed)
SH--Matt Holub 85 run (Krischke run)
SH--Cade Brewer 21 pass from Krischke (Colton Brown kick)
NS--Jared Van Auken 1 run (kick failed)
NS--Alex Hernandez 3 run (run failed)
SH--Hrncir 83 kickoff return (Brown kick)
SH--Hrncir 7 run (Brown kick)
NS--Jaime Moreno 1 run (A. Hernandez run)
Team Statistics (N-S, SH): First downs 15, 11; Rushes-yards 53-221, 31-206; Passing yards 51, 119; Passes 4-16-1, 5-8-1; Punts-avg 7-30.4, 3-28.7; Fumbles-lost 2-0, 6-2; Penalties-yards 6-60, 3-15.
Individual Statistics
RUSHING: Nixon-Smiley, Jared Van Auken 25-106, Alex Hernandez 15-50, Miguel Hernandez 8-46, Jaime Moreno 5-19. Sacred Heart, Matt Holub 8-109, Jared Krischke 7-49, Sterling Hrncir 13-47, Cole Bludau 1-1.
PASSING: Nixon-Smiley, Jaime Moreno 4-16-1-51. Sacred Heart, Jared Krischke 5-8-1-119.
RECEIVING: Nixon-Smiley, Miguel Hernandez 2-23, Damien Perez 1-20, Robbie Mejia 1-8. Sacred Heart, Cade Brewer 4-56, Sterling Hrncir 1-63.

Football Roundup
From coaches’ reports
sportseditor@gonzalescannon.com
‘Cats deny Shiner; Brahmas slip by Sharks

Late heroics helped Weimar edge Shiner and propelled Hallettsville to its first win of the season in Friday football action, while Yoakum and Luling settled scores with some old nemeses and Cuero ran into a buzzsaw of Sandcrabs.
Payton Wells snagged a 15-yard touchdown pass from Seth Helmcamp with just 1:32 remaining to enable the Weimar Wildcats to slip past the Shiner Comanches by a 19-14 score, while a TD pass from Braden Kahanek to Dalton Harrington with 2:58 left in the game gave the Brahmas a 21-17 victory over the Palacios Sharks.
Elsewhere Friday, a swarming Yoakum defense forced three turnovers as the Bulldogs upended Edna 21-13, while Luling pinned a 41-19 pasting on Karnes City.
In Cuero, meanwhile, the state-ranked Port Lavaca Calhoun Sandcrabs pounded out more than 400 yards rushing in racing past the Gobblers by a 35-0 score.

Weimar 19, Shiner 14
WEIMAR -- The Wildcats’ late TD foiled a comeback bid by the Comanches, who had just taken a 14-13 lead.
Down 13-0 at halftime, Shiner closed the gap on a TD pass in the third period from Jacob Stafford to Trevian Flowers that covered 16 yards. Shiner took the lead with 4:29 remaining when Stafford hit Evel Jones with a 25-yard scoring pass, and Luke Blaschke nailed the conversion kick.
The two TD tosses by Stafford were his only two completions of the night, however, as the Wildcats limited Shiner to just 191 total yards. Jones led all rushers with 82 yards on 20 carries.
The Wildcats had taken their first-half lead on a 58-yard punt return by Josiah Jarmon and a 13-yard run by B.J. Jones. Weimar totaled just 179 yards of offense, and the two teams combined for eight turnovers.
Shiner 0 0 7 7--14
Weimar 6 7 0 6--19
Scoring summary
WEI -- Josiah Jarmon 58-yard punt return (kick failed)
WEI -- B.J. Jones 13-yard run (Noe Rosales kick)
SHI -- Trevian Flowers 16-yard pass from Jacob Stafford (Luke Blaschke kick)
SHI -- Evel Jones 25-yard pass from Jacob Stafford (Luke Blaschke kick)
WEI -- Payton Wells 15-yard pass from Seth Helmcamp (run failed)
Team Stats (Shiner, Weimar): First downs 12, 12; Rushes-yds 48-150, 37-130; Pass yards 41, 49; Passes 2-8-2, 5-11-0; Punts-avg 2-40, 5-31.6; Fumbles-lost 8-4, 4-2; Penalties-yards 6-55, 6-34.
Individual Statistics
RUSHING: Shiner, Evel Jones 20-82, Jacob Stafford 17-47, Marlon Wallace 11-21. Weimar, Josiah Jarmon 6-54, B.J. Jones 12-53, D’Quanne Rhodes 7-23, Jalen Almeida 1-4, Delexus Gordon 2-(-4).
PASSING: Shiner, Jacob Stafford 2-8-2-41. Weimar, Seth Helmcamp 5-11-0-49.
RECEIVING: Shiner, Evel Jones 1-25, Trevian Flowers 1-16. Weimar, Josiah Jarmon 3-25, Payton Wells 1-15, Alex Delgado 1-9.

Calhoun 35, Cuero 0
CUERO -- Joseph Bargas rushed for 209 yards and scored three times as Calhoun manhandled Cuero by a 35-0 score.
Bargas had scoring runs of 8, 66 and 21 yards as Calhoun pounded out 423 rushing yards on the Cuero defense. Jeremy Loya added an 87-yard TD run and Brandon Griffith had a third-quarter 12-yard scoring run for the Sandcrabs.
The Gobblers, off to an uncharacteristic 0-4 start, managed just 111 total yards offensively and turned the ball over three times.
Calhoun 7 7 14 7--35
Cuero 0 0 0 0-- 0
Scoring Plays
CA-Joseph Bargas 8 run (Victor Rodriguez kick)
CA-Jeremy Loya 87 run (Rodriguez kick)
CA-Brandon Griffith 12 run (Rodriguez kick)
CA-Bargas 66 run (Rodriguez kick)
CA-Bargas 21 run (Rodriguez kick)
Team Statistics (Calhoun, Cuero): First downs 18, 8; Yards rushing 50-423, 31-82; Yards passing 0, 29; Passes 0-3-1, 5-10-2; Punts 2-46, 6-36.3; Fumbles-lost 2-1, 1-1; Penalties-yards 6-50, 2-10.
Individual Statistics
RUSHING -- Cuero, V. Davis 7-25, C. Davis 6-28, L. Balfanz 4-3, S. Schoenfeld 12-23, D. Hopkins 1-5, S. Solis 1-(-2). Calhoun, J. Williams 4-30, J. Loya 3-99, J. Bargas 25-209, A. Garza 2-8, D. Cantu 2-11, B. Griffith 13-67.
PASSING -- Cuero, S. Schoenfeld 5-10-2, 29 yards. Calhoun, J. Bargas 0-3-1, 0 yards.
RECEIVING -- Cuero, R. Gray 2-9, R. Riemenscheider 2-19, C. Davis 1-1.

Yoakum 21, Edna 13
YOAKUM -- The Bulldog defense bent to allow the Edna Cowboys 378 total yards, but also forced five fumbles, and the Cowboys had to deal with a sea of penalty flags as Yoakum matched last year’s win total with its third win of the season by a 21-13 score.
The Bulldogs took a 7-0 halftime lead on a 53-yard run by Andrew Jimenez, and extended their lead to 14-0 on a 51-yard scoring pass from Jeff Harrison to Ryan Kvinta. Edna rallied to close the gap on an 80-yard TD play from De’Quan Cantu to Darius Callies.
The Bulldogs extended their lead early in the fourth on another TD pass, this one a 5-yarder from Harrison to Kyle Mikulik. Edna came back to score with 54 seconds remaining in the game on a pass from Cantu to Austin Kelley.
Each team fumbled five times, losing three, but the Cowboys drew 14 penalties for a total of 95 yards.
Edna 0 0 7 6--13
Yoakum 7 0 7 7--21
Scoring summary
YOA -- Andrew Jimenez 53-yard run (Jeff Harrison kick)
YOA -- Ryan Kvinta 51-yard pass from Jeff Harrison (Harrison kick)
EDN -- Darius Callies 80-yard pass from De’Quan Cantu (Jesse Martinez kick)
YOA -- Kyle Mikulik 5-yard pass from Jeff Harrison (Harrison kick)
EDN -- Austin Kelley 5-yard pass from De’Quan Cantu (kick failed)
Team Stats (Edna, Yoakum): First downs 13, 7; Rushes-yds 34-171, 40-136; Passing yards 207, 122; Passes 10-18-0, 8-11-0; Punts 5-24.6, 4-32; Fumbles-lost 5-3, 5-3; Penalties-yards 14-95, 5-55.
Individual Statistics
RUSHING: Edna, Devin Parks 12-93, Dominique Gosson 15-68, De’Quan Cantu 7-10. Yoakum, Andrew Jimenez 4-70, Reagan Jacobs 12-32, Keith Ratley 5-24, Rico Moya 4-20, Kyle Mikulik 2-8, Devante Price 2-8, Timmy Blakeney 1-(-1), Jeff Harrison 9-(-25).
PASSING: Edna, De’Quan Cantu 10-18-0-207. Yoakum, Jeff Harrison 8-11-0-136.
RECEIVING: Edna, Darius Callies 3-99, Davin Parks 2-69, Domonique Gosson 2-32, Austin Kelley 2-6, Xavier Redland 1-1. Yoakum, Kyle Mikulik 3-23, Ryan Kvinta 1-51, Keith Ratley 1-26, T.J. Hights 2-9.

Hallettsville 21, Palacios 17
PALACIOS -- A defensive battle turned into a free-for-all, see-saw fourth quarter before the Brahmas pulled it out to bag their first win of the season.
The Sharks had taken a 3-0 halftime lead on a short field goal by Jesus Hernandez before Hallettsville came back to move in front 7-3 on a TD pass from Braden Kahanek to Justin Reeves. Palacios went back in front early in the fourth period on a TD pass from Anthony Garcia to Dylan Brune, but the Brahmas answered less than a minute later when Kahanek scooted 56 yards for the go-ahead score.
The Sharks rallied to take a 17-14 lead when Garcia hit Jacob Nguyen with a 74-yard scoring pass before Hallettsville won it on Kahanek’s pass to Harrington.
Hallettsville 0 0 7 14 --21
Palacios 0 3 0 14 --17
Scoring Summary
PAL - 18 field goal by Jesus Hernandez
HAL - Justin Reeves pass from Braden Kahanek (Ruben Danz kick)
PAL - Dylan Brune pass from Anthony Garcia (Hernandez kick)
HAL - Braden Kahanek 56 run (Danz kick)
PAL - Jacob Nguyen 74 pass from Anthony Garcia (Hernandez kick)
HAL - Dalton Harrington pass from Kahanek (Danz kick)
Team Statistics (Palacios, Hallettsville): First downs 12, 15; Yards rushing 35-116, 26-222; Yards passing 174, 175; Passes 7-14-2, 6-14-2; Punts 4-154, 3-62; Fumbles-lost 2-1, 3-2; Penalties-yards N/A.
Individual Statistics
RUSHING -- Palacios, A. Garcia 18-57, S. Garcia 15-57, A. Nguyen 1-2, D. Aparicio 1-0.
PASSING -- Palacios, A. Garcia 7-14-2, 174 yards.
RECEIVING -- Palacios, Z. Garcia 3-21, J. Nguyen 2-110.

Luling 41, Karnes City 19
KARNES CITY — The Eagles’ Billy Medford scored four touchdowns, including two in a 29-second span in the third period, as Luling erupted to bury the Karnes City Badgers 41-19.
Luling claimed a 14-6 halftime lead by sandwiching a 7-yard scoring run by Brendon Cubit and a 5-yard run by Medford around a 21-yard TD pass by the Badgers’ Kenneth Glenn to Philip Vaughan.
Medford broke loose for a 56-yard scoring run at the 9:52 mark of the third, and 29 seconds later scooped up a Karnes City fumble and returned it 44 yards for a score as well. The Eagles added another defensive score later in the period when John Palomo returned another fumble 10 yards for a TD.
Medford got his fourth score of the game on a 7-yard run midway through the final period. He ended the evening with 144 yards on 17 carries as the Eagles piled up 329 total yards.
Luling 8 6 20 7—41
Karnes City 6 0 7 6—19
Scoring summary
LUL -- Brendon Cubit 7-yard run (Billy Medford run), 10:00, 1st
KAR -- Philip Vaughan 21-yard pass from Kenneth Glenn (kick failed), 06:32, 1st
LUL -- Billy Medford 5-yard run (kick failed), 01:01, 2nd
LUL -- Medford 56-yard run (kick failed), 09:52, 3rd
LUL -- Medford 44-yard fumble recovery (Brett Eckles kick), 09:23, 3rd
KAR -- Kevon Shelton 2-yard run (Wally Gonzales kick), 07:43, 3rd
LUL -- John Palomo 10-yard fumble recovery (Eckles kick), 02:12, 3rd
LUL -- Medford 7-yard run (Eckles kick), 10:02, 4th
KAR -- Kenneth Glenn 53-yard run (kick failed), 05:20, 4th
Team Stats (Luling, Karnes City): First downs 15, 6; Rushes-yds 35-213, 35-195; Passing yards 113, 21; Passes 12-17-0, 1-9-0; Punts-avg 2-29, 4-25; Fumbles-lost 3-1, 4-3; Penalties-yards 7-45, 2-5.
Individual Statistics
RUSHING: Luling, Billy Medford 17-144, Brendon Cubit 14-71, Quinton Grant 2-4, Trayden Staton 2-(-6). Karnes City, Kenneth Glenn 11-81, Kevon Shelton 13-46, Dontrell Lyons 4-56, Garrett Liska 3-2, Allen Cordaway 1-1, Daniel Rosales 1-4, Nick Adams 2-5.
PASSING: Luling, Billy Medford 8-9-0-61, Trayden Staton 4-8-0-50. Karnes City, Kenneth Glenn 1-8-0-21, Garrett Liska 0-1-0-0.
RECEIVING: Luling, Billy Medford 1-(-3), Vince Garcia 7-46, Joreges Munoz 1-8, Josh Alvarez 1-6, Keeton Coe 1-15, Ty Anderson 1-41. Karnes City, Phillip Vaughan 1-21.

Thursday, September 22, 2011 The Gonzales Cannon Page C5
Sub-Varsity Football Roundup

The Apaches scored the winning touchdown with three minutes remaining, then had to hold Columbus inside the 35-yard line on four downs to preserve the 20-18 victory. The Apaches’ first score was on a run by Marvin Lewis from 5 yards out. The second score was on a run by Morgan Martinez from 10 yards out. The winning drive started at the minus-40 yard line.
The big play of the drive was a pass completion to Darnell Arnic on the near sideline. The winning touchdown was a keeper around the left side by Morgan Martinez. The defense was led by Darrin Hernandez, Sky Walker, Taylor Walker, Eduardo Angel, and August Bordovsky. The Apaches travel to Yoakum on Thursday.

• On Sept. 6, the Gonzales 7th grade B team Apaches defeated the Luling Eagles 8-0. The entire defensive squad played a great game. Defensive leaders were Joshua Gomez with several great tackles and Elandreus Thorne with the game-changing interception. Offensively, Gabriel Camarillo scored for the Apaches on a 60-yard touchdown run, and Matthew Grauke added the two-point conversion.

• The Apaches' 7th grade A team conquered their Luling counterparts 14-0. The Apache defense held the Eagles offense scoreless the entire game with strong performances from Wayne Fowler and the whole defensive line. Early in the game, Apache quarterback Kameron Glass connected with Dawson Hull for a 40-yard touchdown pass. In the second half, Aaron Hunt reached the end zone with a 35-yard run and followed that up with a successful two-point conversion play.

• The Gonzales 8th grade B team defeated the Luling Eagles 28-0. Mike Mendez had a 23-yard interception return and a 65-yard touchdown run on the other side of the ball. Ryan Benes followed up a 65-yard march with a 5-yard touchdown run. He also scored a two-point conversion. Another offensive standout was Mason Matejcek, with 65- and 60-yard touchdown scampers. Matejcek also added to the Apache point total with a two-point conversion. An impressive Apache defense kept the Eagle offense from reaching the end zone throughout the game. Defensive players of the game were Marcos Sampayo, Ericka Hernandez, and Ruben Gonzales.

• The Apache 8th grade A team held the Eagle A team to a scoreless tie. Strong defensive play from both teams kept the offenses in check.

• The Gonzales junior varsity team defeated Austin Lanier, 24-12, on Sept. 9. Quarterback Morgan Martinez completed passes to five different receivers, with August Bordovsky scoring on a 40-yard reception, Troy Hernandez rushed from 15 yards out, Francisco Diaz rushed from 35 yards out, and Morgan Martinez rushed from 20 yards out. The last touchdown was set up by a Trey Lester interception. Darrin Hernandez, Eduardo Angel, Levi Snider, and Taylor Walker led the Apache defensive effort. The Apaches are 3-0 and host the Columbus Cardinals today.

• The Gonzales freshman team improved to two wins out of three with a 46-0 win over Lockhart on Sept. 9. The Apaches got two rushing touchdowns in the first quarter from Brando Juarez (8 yards) and Allen Beene (14 yards). The Apaches scored three times in the second with a 7-yard pass from Grayson Meredith to Nathan Burek, a 3-yard run by Beene and Juarez reeling in a 25-yard pass from Meredith to lead 34-0 at halftime. Gonzales got a 28-yard touchdown run by Damien Vella in the third and Nathaniel Montgomery scored on a 12-yard run in the fourth. Tyshawn Erskin and Beene each made a two-point conversion play.

• The Shiner Comanches junior varsity shut out Schulenburg 14-0 Sept. 8 with both scores coming in the first half of play. Marcus Coleman scored both touchdowns — a 42-yard run in the first quarter and a 27-yard pass from Tyler Patek in the second. Hunter Mraz added both extra points. The defense had their first shutout of the year, and Meredith threw for 80 yards in the game.

Reeling it in: Gonzales' Darrance James (20) looks to get his hands on a pass that is deflected by a Columbus defender during last week's freshman football action. (Photo by Mark Lube)
Scramble winners Sept. 7: Taking first place in the weekly Wednesday Scramble at Independence Park Golf Course Sept. 7 was the team of Joseph Milburn, Mike Turk, Phil McCaskill, Roy Staton and Kerry Lowry. Taking second was the team of Clinton Hicks, Bill Kessler, Glenda Kessler, Zachary Outlaw and Raul Contreras. (Courtesy Photo)

Scramble winners Sept. 14: Taking first place in the weekly Wednesday Scramble at Independence Park Golf Course Sept. 14 was the team of Dave Wilson, Will Snyder, Landon Allen, Wiley Bluhm and Clay Harris. Taking second was the team of Aaron Burek, Ricky Bazan, Mario, Wayne Berger and Lance Behlen. (Courtesy Photo)

Sarah DeWitt Relay For Life Kick-Off Party: Saturday, September 24, 10 am to 1 pm, Victoria College Gonzales Center, 1800 Sarah DeWitt. Everyone is invited to attend! Hero of Hope Monica Flores will be speaking at 10:30 am about her caregiver experience for her infant son, who was diagnosed in utero with neuroblastoma. Hot dog lunch provided, along with special activities for the young and young-at-heart. Team captains can pick up information and learn details about the Relay, which will be held March 23-24, 2012 at J.B. Wells Show Barn. For more information contact Arline Rinehart, 830-672-2077, or Patty Stewart, 830-672-7581.

Gonzales Co. buck contest begins Oct. 1: The 2011-12 Gonzales County Buck Contest will kick off Oct. 1 and includes archery, youth and regular hunting seasons. Entry fee is $20 for adults and $10 for youth. The overall grand prize is a $750 certificate. Prizes in the adult divisions are: 1st place, shoulder mount whitetail buck; 2nd place, $200 certificate; 3rd place, $100 certificate. Youth division prizes are: 1st place, $400 certificate; 2nd place, $200 certificate; 3rd place, $100 certificate. Other prizes are a $100 certificate for Spike Kill (longest unbranched antler); a $100 certificate for Bow Hunt Kill (best score bow harvest); and a $100 certificate for Oldest Hunter (best score harvested by a hunter 65 and older as of Oct. 31, 2011). The deer must be a whitetail buck harvested on a property that has a current membership in a WMA in Gonzales County during the current hunting season. For more information, contact Gonzales_buck@yahoo.com.

Beat the experts
Out-Guess our panel of "experts" to win a weekly cash prize! Records: last week, season.
The Cannon (11-4, 39-21): Gonzales, Yoakum, La Vernia, Nixon-Smiley, Hallettsville, Shiner, Flatonia, St. Paul, Sacred Heart, Houston, Oklahoma, Texas, Texas Tech, Lions, Texans.
Mark Lube, The Cannon (8-7, 40-20): Gonzales, Yoakum, Cuero, Nixon-Smiley, Rice Cons., Shiner, Flatonia, St. Paul, Sacred Heart, Houston, Oklahoma, Texas, Texas Tech, Lions, Texans.
Cedric Iglehart, The Vaz Clinic (11-4, 44-16): Gonzales, Pleasanton, Cuero, Nixon-Smiley, Rice Cons., Shiner, Burton, Regents, Louise, Houston, Ball State, Texas, Kansas, Cowboys, Texans.
Dr. Garth Vaz, Johnson Oil (7-8, 35-25): Gonzales, Yoakum, Cuero, Poth, Rice Cons., Ben Bolt, Burton, St. Paul, Sacred Heart, Houston, Oklahoma, Texas, Texas Tech, Lions, Texans.
Randy Harkey, Glenn Glass, D&G Automotive (10-5, 41-19): Gonzales, Pleasanton, Cuero, Poth, Hallettsville, Shiner, Burton, St. Paul, Sacred Heart, Houston, Oklahoma, Texas, Texas Tech, Cowboys, Texans.
Stan Ledbetter, Apache Cleaners (11-4, 48-12); Bret Hill, Caraway Ford (11-4, 46-14); Andrew Jahns, Gonz. Livestock (10-5, 41-19); Christina Rodriguez, Sleep Inn (11-4, 41-19); Gerard Nunez, Sonic (9-6, 35-25):
Gonzales, Yoakum, Cuero, Nixon-Smiley, Hallettsville, Shiner, Flatonia, St. Paul, Sacred Heart, Houston, Oklahoma, Texas, Texas Tech, Cowboys, Texans.

Week 6 Games
Gonzales at Poteet, Pleasanton at Yoakum, La Vernia at Cuero, Poth at Nixon-Smiley, Hallettsville at Rice Cons.
Shiner at Ben Bolt, Flatonia at Burton, St. Paul at Austin Regents, Louise at Sacred Heart, Houston at UTEP, Ball St. at Oklahoma, Texas at Iowa St., Texas Tech at Kansas, Lions at Cowboys, Steelers at Texans.
Gonzales, Yoakum, Cuero, Poth, Hallettsville, Shiner, Flatonia, St. Paul, Sacred Heart, Houston, Oklahoma, Texas, Texas Tech, Cowboys, Texans.
Gonzales, Yoakum, Cuero, Poth, Rice Cons., Shiner, Burton, St. Paul, Sacred Heart, Houston, Oklahoma, Texas, Texas Tech, Lions, Steelers.
Gonzales, Yoakum, Cuero, Poth, Rice Cons., Ben Bolt, Burton, St. Paul, Sacred Heart, Houston, Oklahoma, Texas, Texas Tech, Cowboys, Steelers.
Gonzales, Yoakum, La Vernia, Nixon-Smiley, Hallettsville, Shiner, Burton, St. Paul, Sacred Heart, Houston, Oklahoma, Texas, Texas Tech, Cowboys, Texans.
Follow The Winners! Sept.
15 Winners: 1st Place, $25, David Janota; 2nd Place, $15, Robert Lee; 3rd Place, $10, Joseph C. Rivera. Winners will be announced in our Sept. 29 edition!

The Gonzales Cannon's Regional Football Scoreboard

GONZALES APACHES Record: 4-0 A 26 at CC Miller W, 42-6 S 02 Luling W, 35-0 S 08 at Austin Lanier W, 45-7 S 16 at Columbus W, 41-27 S 23 Yoakum* S 30 at Poteet* O 07 Sam Houston* O 14 Open O 21 at Pleasanton* O 28 La Vernia* N 04 at Cuero*
YOAKUM BULLDOGS Record: 3-1 A 26 at Columbus W, 19-16 S 02 La Grange L, 0-6 S 09 at Hallettsville W, 29-14 S 16 Edna W, 21-13 S 23 at Gonzales* S 30 Pleasanton* O 07 at La Vernia* O 14 Cuero* O 21 Open O 28 at Poteet* N 04 Sam Houston*
CUERO GOBBLERS Record: 0-4 A 26 at Wimberley L, 6-34 S 02 at Liberty Hill L, 7-14 S 08 at Bellville L, 13-21 S 16 Calhoun L, 0-35 S 23 at Pleasanton* S 30 La Vernia* O 07 Open* O 14 at Yoakum* O 21 Poteet* O 28 at Sam Houston* N 04 at Gonzales*
POTEET AGGIES Record: 0-4 A 26 at SA CentCath. L, 13-21 S 02 Dilley L, 20-39 S 08 at Carrizo Spr.
L, 21-27 S 16 Waco Robinson L, 21-49 S 24 at Sam Houston* S 30 Gonzales* O 07 at Pleasanton* O 14 La Vernia* O 21 at Cuero* O 28 Yoakum* N 04 Open* LA VERNIA BEARS Record: 0-3 A 26 Open S 02 Canyon Lake L, 23-34 S 09 at Giddings L, 7-45 S 16 at Wimberley L, 6-49 S 23 Bandera S 30 at Cuero* O 07 Yoakum* O 14 at Poteet* O 21 Sam Houston* O 28 at Gonzales* N 04 Pleasanton* PLEASANTON EAGLES Record: 1-3 A 26 SA Edison W, 41-8 S 02 at SA Jefferson L, 32-44 S 09 SA Lanier L, 12-17 S 16 at Aransas Pass L, 33-34 S 23 Cuero* S 30 at Yoakum* O 07 Poteet* O 13 at Sam Houston* O 21 Gonzales* O 28 Open N 04 at La Vernia* SAM HOUSTON HURRICANES Record: 4-0 A 26 SABrackenridge W,45-13 S 02 at SA Southside W, 58-7 S 09 SA Brennan W, 26-13 S 17 at SA Edison W, 48-10 S 24 Poteet* S 30 Open* O 07 at Gonzales* O 13 Pleasanton* O 21 at La Vernia* O 29 Cuero* N 04 at Yoakum* NIXON-SMILEY MUSTANGS Record: 2-2 A 26 at Flatonia W, 36-33 S 02 Bloomington W, 33-3 S 09 at Yorktown L, 13-20 S 16 at Sacred Heart L, 20-35 S 23 at Dilley* S 30 Poth* O 07 at Three Rivers* O 14 Karnes City* O 21 SA Brooks* O 28 Stockdale* N 04 Open DILLEY WOLVES Record: 4-0 A 26 Charlotte W, 62-13 S 02 at Poteet W, 39-20 S 09 La Pryor W, 37-0 S 16 at Cotulla W, 42-13 S 23 Nixon-Smiley* S 30 Open O 07 at Poth* O 14 Three Rivers* O 21 at Karnes City* O 28 SA Brooks* N 04 at Stockdale* POTH PIRATES Record: 4-0 A 26 Marion W, 8-7 S 02 at Falls City W, 49-6 S 09 at George West W, 22-21 S 16 Natalia W, 49-13 S 23 Stockdale* S 30 at Nixon-Smiley* O 07 Dilley* O 14 Open O 21 at Three Rivers* O 28 Karnes City* N 04 at SA Brooks* STOCKDALE BRAHMAS Record: 2-2 A 26 Falls City W, 34-14 S 02 Jourdanton L, 22-28 S 09 at St. 
Paul L, 34-38 S 16 Odem W, 49-28 S 23 at Poth* S 30 Three Rivers* O 07 at Karnes City* O 14 SA Brooks* O 21 Open O 28 at Nixon-Smiley* N 04 Dilley*
THREE RIVERS BULLDOGS Record: 3-1 A 26 George West W, 21-20 S 02 Natalia W, 48-22 S 09 at Jourdanton L, 14-55 S 16 at Kenedy W, 35-21 S 23 SA Brooks* S 30 at Stockdale* O 07 Nixon-Smiley* O 14 at Dilley* O 21 Poth* O 28 Open N 04 at Karnes City*
KARNES CITY BADGERS Record: 1-3 A 26 at Kenedy L, 12-13 S 02 at Marion L, 0-21 S 09 at UC Randolph W, 10-7 S 16 Luling L, 19-41 S 23 Open S 30 SA Brooks* O 07 Stockdale* O 14 at Nixon-Smiley* O 21 Dilley* O 28 at Poth* N 04 Three Rivers*
SA BROOKS TIGERS Record: 1-3 A 26 at Runge L, 8-14 S 02 at SM Baptist L, 6-35 S 09 Center Point L, 7-28 S 16 at SA St. Gerard W, 34-0 S 23 at Three Rivers* S 30 at Karnes City* O 07 Open O 14 at Stockdale* O 21 at Nixon-Smiley* O 28 at Dilley* N 04 Poth*
LULING EAGLES Record: 2-2 A 26 Navarro L, 22-43 S 02 at Gonzales L, 0-35 S 09 at Woodsboro W, 48-0 S 16 at Karnes City W, 41-19 S 23 Canyon Lake S 30 Open O 07 at Lago Vista* O 14 at Comfort* O 21 Ingram Moore* O 28 at Marion* N 04 Blanco*
LAGO VISTA VIKINGS Record: 3-1 A 26 La Pryor W, 52-0 S 02 at E.
Memorial W, 45-0 S 09 at Wac.Robinson L, 54-61 S 16 Austin Reagan W, 65-0 S 23 at San Saba S 30 Open O 07 Luling* O 14 at Ingram Moore* O 21 Marion* O 28 at Blanco* N 04 Comfort* INGRAM MOORE WARRIORS Record: 0-4 A 26 at Natalia L, 21-28 S 02 UC Randolph L, 14-55 S 09 at Harper L, 26-45 S 16 at Crystal City L, 34-42 S 23 Mason S 30 Open O 07 at Blanco* O 14 Lago Vista* O 21 at Luling* O 28 at Comfort* N 04 Marion* MARION BULLDOGS Record: 3-1 A 26 at Poth L, 7-8 S 02 Karnes City W, 21-0 S 09 at SA Cole W, 48-0 S 16 UC Randolph W, 53-0 S 23 Open S 30 Goldthwaite O 07 Comfort* O 14 Blanco* O 21 at Lago Vista* O 28 Luling* N 04 at Ingram Moore* COMFORT BOBCATS Record: 3-1 A 26 Lytle W, 49-6 S 02 Skdmore-TynanW, 28-14 S 09 Mason L, 24-48 S 16 at F’ricksburg W, 31-24 S 23 Boerne S 30 Open O 07 at Marion* O 14 Luling* O 21 at Blanco* O 28 Ingram Moore* N 04 at Lago Vista* BLANCO PANTHERS Record: 2-2 A 26 at Canyon Lake L, 7-14 S 02 at Lexington L, 7-32 S 09 Somerset W, 36-7 S 16 at SA Christian W, 36-34 S 23 Sonora S 30 Open O 07 Ingram Moore* O 14 at Marion* O 21 Comfort* O 28 Lago Vista* N 04 at Luling* HALLETTSVILLE BRAHMAS Record: 1-3 A 26 at Ganado L, 7-32 S 02 Refugio L, 21-64 S 09 Yoakum L, 14-29 S 16 at Palacios W, 21-17 S 23 Edna* S 30 at Rice Cons.* O 07 Hempstead* O 14 Open O 21 at Van Vleck* O 28 Hitchcock* N 04 at Industrial* EDNA COWBOYS Record: 3-1 A 26 Needville W, 42-7 S 02 George Ranch W, 34-18 S 09 Boling W, 48-7 S 16 Yoakum L, 13-21 S 23 at Hallettsville* S 30 Van Vleck* O 07 at Hitchcock* O 14 Industrial* O 21 Open O 28 at Rice Cons.* N 04 Hempstead* VAN VLECK LEOPARDS Record: 1-2 A 26 Schulenburg L, 8-55 S 02 at Louise W, 22-16 S 09 at Weimar L, 8-27 S 16 Open S 23 Industrial* S 30 at Edna* O 07 Rice Cons.* O 14 at Hempstead* O 21 Hallettsville* O 28 Open N 04 at Hitchcock* HITCHCOCK BULLDOGS Record: 2-2 A 26 at Clear FallsJV L28-33 S 02 Danbury W, 19-0 S 09 Tomball Luth. 
L, 21-28 S 16 Lutheran South W, 47-7 S 23 Open S 30 at Industrial* O 07 Edna* O 14 at Rice Cons.* O 21 Hempstead* O 28 at Hallettsville* N 04 Van Vleck* HEMPSTEAD BOBCATS Record: 3-0 A 26 Open S 02 at Stafford W, 24-23 S 09 at Austin Reagan W, 77-0 S 16 Brookshire Royal W, 28-0 S 23 at Rice Cons.* S 30 Hou. St. John’s O 07 at Hallettsville* O 14 Van Vleck* O 21 at Hitchcock* O 28 Industrial* N 04 at Edna* RICE CONS. RAIDERS Record: 2-1 A 26 at Refugio L, 7-26 S 02 Somerset W, 62-22 S 09 at Columbus W, 22-14 S 16 Open S 23 Hempstead* S 30 Hallettsville* O 07 at Van Vleck* O 14 Hitchcock* O 21 at Industrial* O 28 Edna* N 04 Open INDUSTRIAL COBRAS Record: 3-1 A 26 at Shiner W, 20-7 S 02 Ganado L, 13-33 S 09 at Tidehaven W, 42-6 S 16 Somerville W, 33-7 S 23 at Van Vleck* S 30 Hitchcock* O 07 Open O 14 at Edna* O 21 Rice Cons.* O 28 at Hempstead* N 04 Hallettsville* SHINER COMANCHES Record: 1-3 A 26 Industrial L, 7-20 S 02 Brazos W, 53-10 S 09 at Schulenburg L, 14-21 S 16 at Weimar L, 14-19 S 23 Navarro S 30 at Ben Bolt O 07 Ganado* O 14 at Yorktown* O 21 Flatonia* O 28 Open N 04 at Louise* FLATONIA BULLDOGS Record: 2-2 A 26 Nixon-Smiley L, 33-36 S 02 Sacred Heart L, 27-33 S 09 Bloomington W, 26-2 S 16 at St. Paul W, 21-17 S 23 at Thrall S 30 at Burton O 07 Yorktown* O 14 Open O 21 at Shiner* O 28 Louise* N 04 at Ganado* YORKTOWN WILDCATS Record: 2-2 A 26 at Sacred Heart L, 19-22 S 02 at Agua Dulce W, 55-0 S 09 Nixon-Smiley W, 20-13 S 16 at Falls City L, 37-40 S 23 Kenedy S 30 Open O 07 at Flatonia* O 14 Shiner* O 21 at Louise* O 28 Ganado* N 04 at SA Cornerstone LOUISE HORNETS Record: 1-3 A 26 at Danbury L, 13-35 S 02 Van Vleck L, 16-22 S 09 at Burton L, 8-49 S 16 Woodsboro W, 48-0 S 23 San Mar. 
Baptist S 30 at Sacred Heart O 07 Open O 14 at Ganado* O 21 Yorktown* O 28 at Flatonia* N 04 Shiner*
GANADO INDIANS Record: 4-0 A 26 Hallettsville W, 32-7 S 02 at Industrial W, 33-13 S 09 at East Bernard W, 22-21 S 16 Tidehaven W, 42-0 S 23 George Ranch S 30 Palacios O 07 at Shiner* O 14 Louise* O 21 Open O 28 at Yorktown* N 04 Flatonia*
ST. PAUL CARDINALS Record: 3-1 A 26 at Pettus W, 28-24 S 02 at Cornerstone W, 59-0 S 09 Stockdale W, 38-34 S 16 Flatonia L, 17-21 S 24 Bryan St. Joseph S 30 at Austin Regents O 08 Brazos Christian* O 14 Open O 21 at St. Gerard* O 29 at Sacred Heart* N 04 St. Dominic Savio*
SACRED HEART INDIANS Record: 4-0 A 26 Yorktown W, 22-19 S 02 at Flatonia W, 33-27 S 09 at Faith West W, 33-19 S 16 Nixon-Smiley W, 35-20 S 23 at Hyde Park S 30 Louise O 08 Bryan St. Joseph O 14 at SA St. Gerard* O 21 St. Dominic Savio* O 29 St. Paul* N 04 at Brazos Christian*
SA ST. GERARD ROYALS Record: 0-4 A 26 at Nuec. Canyon L, 6-56 S 02 CP Summit L, 19-66 S 09 SA Cornerstone L, 12-13 S 16 SA Brooks L, 0-34 S 23 D'Hanis S 30 at Sabinal O 07 at St. Dominic Savio* O 14 Sacred Heart* O 21 St. Paul* O 28 at Brazos Christian* N 04 Schertz John Paul II
ST. DOMINIC SAVIO Record: 1-3 A 26 at C. Tex. Christ. W, 20-13 S 01 Texas Sch. Deaf L, 0-13 S 09 San Marc. Baptist L, 0-42 S 16 Texas Christian L, 20-27 S 23 Open S 30 Somerville O 07 SA St. Gerard* O 14 Brazos Christian* O 21 at Sacred Heart* O 28 Dallas Homeschool N 04 at St. Paul*
BRAZOS CHRISTIAN EAGLES Record: 3-1 A 26 Cypress Christ. W, 12-10 S 02 Snook L, 7-27 S 09 at Tx. Sch. Deaf W, 54-29 S 16 St. Joseph W, 39-6 S 23 Woodlands Christ. S 30 at Faith West O 08 at St. Paul* O 14 at St. Dominic Savio* O 21 Open O 28 SA St. Gerard* N 04 Sacred Heart*
DALLAS COWBOYS Record: 1-1 S 11 at NY Jets L, 24-27 S 18 at San Francisco W, 27-24 S 26 Washington O 02 Detroit O 16 at New England O 23 St. Louis O 30 at Philadelphia N 06 Seattle N 13 Buffalo N 20 at Washington N 24 Miami D 04 at Arizona D 11 NY Giants D 17 at Tampa Bay D 24 Philadelphia J 01 at NY Giants
HOUSTON TEXANS Record: 2-0 S 11 Indianapolis W, 34-7 S 18 at Miami W, 23-13 S 25 at New Orleans O 02 Pittsburgh O 09 Oakland O 16 at Baltimore O 23 at Tennessee O 30 Jacksonville N 06 Cleveland N 13 at Tampa Bay N 27 at Jacksonville D 04 Atlanta D 11 at Cincinnati D 18 Carolina D 22 at Indianapolis J 01 Tennessee

APACHES: 'Dogs looking for upset (Continued from page C1)
ROUNDUP: Comanches hunt win (Continued from page C1)
EAGLES: Offense gets untracked (Continued from page C1)

Moya, Kody Perez and Rex Kutzer. Yoakum's ground game will be led by running backs Andrew Jimenez, Moya, Kyle Mikulik and McCracken. Jeff Harrison is Yoakum's starter at quarterback and will try to link up with McCracken or Fred Thompson when Yoakum looks to pass. Lock said Yoakum will run the Slot-T or Veer formation when handing the ball off to McCracken, and Gonzales' defensive players will have to be prepared for that scheme. "Our offense will continue to work to get better," Kornegay said. "We will need to not turn the ball over because you cannot afford to make those kinds of mistakes against a quality team like Gonzales." Kornegay said Yoakum had a successful non-district season and credits the players' attitudes and work ethic. "The kids are believing in what we do," he said. "And that is bringing us success. They work hard and I just cannot say enough about their efforts." Lock said the Apaches have gotten better throughout non-district. "You never work out all of the kinks before district," he said.
"We made a lot of progress and it has to do with the efforts of our players." Gonzales displays an intense, positive attitude. "We expect to get after every team we play and we talked about being successful. You have to have that kind of mindset or you will not be very successful," he said. "In our non-district games, we have found our strengths and weaknesses and try to correct them. Non-district is also getting people in the right places on the field." Lock said players at several positions have stepped up their efforts. Next week, Gonzales will travel to Poteet while Yoakum hosts Pleasanton.

the run game. "I anticipate Dilley moving eight to nine players in the box," he said. "We will need to execute well and hope our misdirection can throw them off." Key Wolves players are safety Ryan Autrey, nose guard Frankie Flores and linebacker Moises Gonzales. The Dilley game is one of the two away district games Nixon will play this year, leaving four games in district to be played before the Mustangs' own crowd. "We look to play well on the road and use our home-field advantage to do well in district," he said.

and Billy Medford had his best game of the season last week. The versatile senior went 8-for-9 with 61 yards passing and rushed for 144 yards and three touchdowns. He also returned a fumble for another score. The increase in offensive productivity is also a testament to the development of Luling's other weapons, including backup sophomore QB Trayden Staton. Five different Eagles caught passes against Karnes City. "We've found our identity on offense," Waldie said. "We know we're going to move Billy around and we feel like Trayden can come in and fill a need at quarterback if we need him to. Our key is execution and staying patient on offense.
It's part of our schematics and it's coming together for us so far." "It's a tough game this week, but we're going to show up and hit them in the mouth, then see what happens. Win, lose or draw, we're going to be at war. Then it's just a matter of finding out how we can get better because after that, it's for real with district play starting."

Navarro at Shiner

The Shiner Comanches have not tasted a win since a 53-10 win over Brazos nearly three weeks ago, which means they are hungry for a victory. Shiner gets the chance to line up against the Navarro Panthers. Navarro sticks with the ground game with a Slot-T style offense, utilizing running backs Eric Schieler and Evan DeLeon as its main threats. The Panthers will throw the ball some with quarterback Chris Sestak. The Comanches will need to have solid tackling to slow down the Navarro running game and be on their toes for when the Panthers do decide to go to the air. Shiner will need its offense to have sustained drives and no turnovers to keep the Panther offense on the sideline. Evel Jones, Marlon Wallace and Jacob Stafford will be counted on to pace the Shiner offense. Key defensive players for Navarro are defensive back Greg Bowles, end Zane Conlin and linebacker DeLeon.

The Flatonia Bulldogs will be seeking their second road win in as many weeks when they travel to play tonight at Thrall. Momentum is on the side of the Bulldogs, who came back to beat Shiner St. Paul 21-17 last Friday to spoil the Cardinals' Homecoming and snap their 17-game winning streak after trailing by 10 at halftime. "The offensive line really played well in the second half and our backs ran the ball well," said Flatonia head coach Chris Freytag. "We started playing defense like I know we're capable of playing and we stayed on assignment a lot better. I was very pleased with the second half; we really played almost perfect." Thrall has opened the season by losing three of their first four games, including last week's 27-20 loss at Granger.
The Tigers' offense revolves around the play of quarterback Kollen Scruggs, their leading passer and rusher. While they move the ball fairly well, the Tigers are only averaging 14.25 points per game. "They like to spread you out kind of like St. Paul," Freytag said. "They throw the ball and they have some good receivers, a good quarterback and they've been in nearly every game this year."

Flatonia at Thrall

The Brahmas got the first-win monkey off their back with a 21-17 win over the Palacios Sharks last week. They welcome the Edna Cowboys to Hallettsville Memorial Stadium to open District 14-2A DII play. The Cowboys opened the year 3-0 but fell last week to the Yoakum Bulldogs, 21-13. Hallettsville head coach Tommy Psencik said the Brahmas must continue to improve their turnover ratio. "Against Palacios, we only had three turnovers but they almost killed our chances," he said. "We need to control the ball and keep it out of the hands of Edna's speedy offense." The Brahmas' defense will need to swarm the ball and gang tackle, getting participation from every defender. Edna runs a typical spread offense, using inside/out zone plays, counter and power running mixed in with bubble and laser screen passes. The Cowboy defense generally lines up in a 4-3 set and uses man-to-man coverage in the secondary. Psencik said a victory for the Brahmas would benefit not only the football team but the school and the community as a whole. Key Cowboy players are OT Mac Long, CB Anthony Stevens, QB De'Quan Cantu, WR Darius Callies and RB Devin Parks.

Edna at Hallettsville

The senior-laden Sacred Heart Indian team is off to a great start, 4-0 after wins against Yorktown, Flatonia, Katy Faith West and Nixon-Smiley. Sacred Heart continues non-district as they will face Austin Hyde Park Baptist of TAPPS DII-District 3 on Friday at 7:30 p.m. in Austin. Hyde has an explosive offense, on the ground or through the air. "We need to keep the ball away from their offense," Indian head coach Pat Henke said.
"They have two receivers who are over 6 feet tall with 4.6 or 4.7 speeds in the 40 who average around 30 yards a catch." Austin Hyde will pound the ball out of the Power I formation. Henke said it is important for Sacred Heart to stop the big play, which Hyde will depend on. On defense, Hyde will come out with an eight-man front to try to stuff the run. "We need to control the line of scrimmage with our running attack," Henke said. "We need to cut out turnovers. We have been winning games but are turning the ball over too much."

Sacred Heart at Hyde Park

The Cuero Gobblers have played quite a non-district schedule, going up against the likes of Wimberly, Liberty Hill, Bellville and Port Lavaca Calhoun. "All of our non-district teams were very physical," Cuero head coach Rick Owens said. "They have just lost two games between the four of them. They are all pretty good teams." The Gobblers will be used to physical ball clubs after their non-district schedule, but it is tough to be 0-4 on the season. "On the downside, our confidence is affected (after losing four games)," Owens said. "And we are a little banged up with some nagging injuries." Cuero plays its first District 28-3A game Friday at 7:30 p.m. in Pleasanton. Owens said the Eagles like to put the ball in the air. "Pleasanton will throw about 65 percent of the time," he said. "They have a good left-handed quarterback in Luke Walters and several good receivers such as Zack Jackson, Albert Mares and Justin Llamas. Our defense will have to pressure Walters and try to disrupt the timing of the receiver routes."

Cuero at Pleasanton
The Arts
The Gonzales Cannon, Thursday, September 22, 2011

Kirk, local talent head music lineup for CATI

Cannon News Services
newseditor@gonzalescannon.com

The area in and around Gonzales County is rapidly being recognized as a hotbed of Texas music talent, and the entertainment lineup for this year's Come and Take It Festival Sept. 30-Oct. 2 has a distinct local flavor. Among the local talent scheduled to be showcased during the weekend is Shiner's Mark Winston Kirk, who has overcome some bashfulness and now ranks among the top regional music talents.

Mark began singing and songwriting in the privacy of his room at the age of 18. He had an old Homer piano whose keys he had to bang in order to hear himself as he wrote and sang after school when no one else was around. One day his mom came home early and heard him singing from the kitchen. When she entered his room, he immediately clammed up and refused to sing for her. After 30 minutes of pleading, Mark gave in and sang. She instantly recognized his talent and potential and enlisted Mark's father, who got him a spot on The Texas Opry. Scared to death, having never performed before a crowd, Mark sang his all-time favorite song, "Statue of a Fool," and received a standing ovation. He never looked back.

Mark put a band together and began booking them in Texas clubs, including Cheatham Street Warehouse in San Marcos, Billy Bob's in Fort Worth, The Broken Spoke in Austin and The Bluebonnet Palace in Schertz. For 10 years, he has grown and networked, playing clubs and halls all over the country, including the National Finals Rodeo in Las Vegas, and opening for artists such as George Strait, Brooks & Dunn, Garth Brooks, Merle Haggard, Hank Williams Jr., David Allan Coe, Marshall Tucker Band, Reba McEntire, Tracy Lawrence, Joe Diffie, Mark Chesnutt, Pam Tillis, Confederate Railroad and many others.

After living and breathing Nashville for many years, Mark has developed a style like no other. Over the last decade he has written over 110 songs and performed thousands of shows. He has written with many of his musical influences as well as his peers, and you can only imagine the overwhelming intensity he puts into his music until you've actually heard it for yourself. In the last few years, Kirk has captured heart after heart and crowd after crowd with his infectious smile and personality. His unique voice, clean edge and slight vibrato are becoming a recognized trademark in the country music industry. The warmth of his powerful, emotional vocals, expert showmanship and creative lyrics make it easy to see that here is a singer of uncommon ability. And lest you think he's one-dimensional, here's an interesting tidbit: Mark has an associate's degree in culinary arts and has won numerous awards for his cooking ability, as well.

Kirk will headline the entertainment lineup at the Biergarten during the Come and Take It Festival from 10:30 p.m.-12:30 a.m. on Saturday, Oct. 1. Another Shiner artist, Los Kolaches, will open the show at 6:30 p.m., followed by Gonzales favorites The Pale Horses at 8:30. Friday's lineup runs from 6 p.m.-midnight and includes The Situations, Seguin's Max Castillo & Conjunto Lumbre and Leesville's Clint Martin. The Shiner Hobo Band will be back again this year from 1-5 p.m. on Sunday.

Shiner's Mark Winston Kirk will perform at the Biergarten Saturday, Oct. 1.

'Santa Claus' is coming to town
Renowned artist Lynn Haney set to appear here

Cannon News Services
newseditor@gonzalescannon.com

Santa Claus is Coming to Town… well, not really Santa, but certainly the Spirit of Santa Claus will be in Gonzales over the Come and Take It Weekend. World-renowned artist, Texas native son, and Santa-smith Lynn Haney will make his only personal appearance in the entire state of Texas this year in Gonzales during the Come and Take It festivities. Haney, who is celebrating 25 years of artistry and of the creation of collectible Santas, will visit with collectors and personalize their purchases at Laurel Ridge Inn, Antiques, and Christmas Store, located at 827 Saint Joseph, Gonzales, Texas. Appearance times are Friday evening, Sept. 30 from 4-8 p.m., and Saturday afternoon, Oct. 1, starting at noon immediately following the Come and Take It Parade.

Haney chose Gonzales and Laurel Ridge for his anniversary celebration visit because Laurel Ridge has been one of his longest standing and biggest fans throughout his long and illustrious career. Haney's clients include retailing legends Neiman Marcus, Horchow, Gumps, and of course, Gonzales' own Laurel Ridge. An impressive collection of Haney's early creations will be on display in the newly renovated rooms of Laurel Ridge's second floor Inn. Several years ago, Haney invited Laurel Ridge founder Barbara Crozier to become part of his design team and they started designing Santas exclusively for Laurel Ridge. Each year Crozier and Haney design a Santa that is available only at Laurel Ridge in Gonzales. This year's creation, Sleighbells in the Snow, is the most limited collection in all of Haney's creations, and is the third in the White Christmas series exclusive to Laurel Ridge. Be sure to come and visit with Haney, come and preview the Inn rooms now available at Laurel Ridge, and come and take in all the fun and excitement going on in Gonzales this Come and Take It weekend, Sept. 30-Oct. 2! Register to win a two-night stay in the Inn and a Lynn Haney 25th anniversary ornament!

Santa-smith Lynn Haney

Gonzales Cannon Music Calendar

Thursday, September 22
Thursday Night Acoustic Jam, Ole Moulton Bank, Moulton, 6:30 p.m.-midnight, call 361-596-7499 for info

Saturday, Sept. 24
Mike Ryan at Scooter's Dancehall, Moulton. Tickets $10.
Chad McBride & The Drifters at Yoakum Gin & Feed, 6 p.m. Tickets $8.
Scotty Decker & Family at Pardners Dancehall, Gonzales. No cover charge.

Sunday, Sept. 25
Wildfire Benefit at Scooter's Dancehall, Moulton, feat. The Pale Horses, Trevor Cole Band, Broke 60 and Surprise Special Guest, doors open 1 p.m.

Thursday, September 29
Thursday Night Acoustic Jam, Ole Moulton Bank, Moulton, 6:30 p.m.-midnight, call 361-596-7499 for info

Friday, September 30
The Situations, Max Castillo and Conjunto Lumbre and Clint Martin at the Biergarten at the Come and Take It Festival, Gonzales, 6 p.m.-midnight. No admission charge.

Saturday, October 1
Scottie Decker & Family at the Biergarten at the Come and Take It Festival, Gonzales, noon-5:30 p.m. No admission charge.
Los Kolaches at the Biergarten at the Come and Take It Festival, Gonzales, 6:30-8 p.m. No admission charge.
Pale Horses at the Biergarten at the Come and Take It Festival, Gonzales, 8:30-10 p.m. No admission charge.
Mark Winston Kirk at the Biergarten at the Come and Take It Festival, Gonzales, 10:30 p.m.-12:30 a.m. No admission charge.

Sunday, October 2
Shiner Hobo Band at the Biergarten at the Come and Take It Festival, Gonzales, 1-5 p.m. No admission charge.
Granger Smith at Scooter's Dancehall, Moulton. Tickets $12.
Sons of Magnolia at Yoakum Gin & Feed, Yoakum.

Friday, October 7
Curtis Grimes at Scooter's Dancehall, Moulton. Tickets $8.

Saturday, October 8
The O'Neal Brothers Band at Leesville Country Fair, Methodist Church Grounds, Leesville. Events begin at 10 a.m.

Friday, October 14
Zack Edwards at Scooter's Dancehall, Moulton. Tickets $8.

Saturday, October 15
Jarrod Bingham at Yoakum Gin & Feed, Yoakum.

Saturday, October 22
Bri Bagwell at Yoakum Gin & Feed, Yoakum.

Saturday, Nov. 5
Scott Taylor at Yoakum Gin & Feed, Yoakum.

Saturday, Nov. 12
Nightrider at Yoakum Gin & Feed, Yoakum.

Saturday, Nov. 19
Jake Kellen at Yoakum Gin & Feed, Yoakum.

Musicians and Venues: To add or update events, contact us via e-mail to manager@gonzalescannon.com.

HAPPY FALL Y'ALL SCARECROW CONTEST
Decorate Gonzales for FALL

Main Street will once again sponsor a scarecrow contest and would like to dress up the town for our Come and Take It Celebration on September 30-October 2, 2011. Scarecrows should be up by September 30, 2011, and judging will be held on October 5, 2011 after 5 p.m. Applications are on the City website, at City Hall, or fill out the form in The Gonzales Cannon.
If you should have any questions, please contact the Main Street Office at 672-2815. Dress up a scarecrow and let your imagination go wild. The possibilities are endless. Any business or individual can enter. Application deadline is September 28, 2011. Prizes donated by The Gonzales Cannon Newspaper: 1st Place - 1/4 pg. ad; 2nd Place - 1 year subscription; 3rd Place - 3x5 ad. Entry forms (name, address, contact person, phone number and e-mail address) should be sent to: Gonzales Main Street, P.O. Box 547, Gonzales, Texas 78629. Remember, the display deadline is September 30, 2011.

Shiner Catholic School Fall Festival

October 2, 2011, at the KC Hall (formerly American Legion) in Shiner. BBQ dinner with trimmings, $7.50 a plate, 11:00 a.m.-1:00 p.m., with drive-thru available starting at 10:30 a.m. No pre-sale tickets sold. Live auction 12:00-4:00 p.m. Cake walk, games, moon walk and concessions start at 11:00 a.m. The St. Paul Battle of the Classes will be underway after the live auction. Great food, fun and fellowship for the whole family.

A Gonzales pioneer, in his own words
George W. Davis left his story for his descendants

EDITOR'S NOTE: This is another in a series of articles written by lineage research teams with the Daughters of the Republic of Texas, and is presented as the first of two parts. It was authored by Polly Fink, a direct descendant of George W. Davis Sr.

By POLLY FINK
Special to the Cannon

George Washington Davis Sr. was born Oct. 12, 1797 in Philadelphia, Penn. The Davises were natives of Wales and came to New England to the Massachusetts Colony and settled on the Island of Nantucket in the 1700s. In 1800, the family moved to Richmond, Va., where they carried on a shoe manufacturing business and had a retail store. We are fortunate that in his later years, Davis shared his family history with his children. In his own words, here is his story of Texas:

"The thought has lately arose in my mind that someday, if not now, you would like to know something more than you do of my history and to hear an account of your ancestors and relations or from whence you are the origin of your family and with whom you are come.

"If, however, these details should prove uninteresting to you, I will not lose my labor, the employment which the task gives me will be and is some amusement and occupation in my present cheerful loneliness. So, I shall not regret my labors.

"It was determined by my father in the year 1816, when I was 19 years old, that I should study some profession and that profession he decided should be medicine or that he would make a doctor of me. This was not my choice. I preferred the law. He would not consent — his was a voice potent and I had to submit. I thought maybe it was better and easier than cutting and hammering leather and I yielded.

"I was placed under the guidance and tuition of one Doctor Johnson, an able physician, planted in his library for some 8 or 9 months during which time I read diligently and studied hard. I attended medical lectures at the University of Pennsylvania. I said I gradually relaxed my efforts in the study of medicine. I was determined to quit the study which I did after two years labour and devotion. Some would say here was much time and money wasted but I say not so for though I do not love to practice as a physician yet I do love and esteem the medical art and medical knowledge and I respect the practitioners of it when they are learned and able.

"In the month of September, 1818, my father with his family, myself one of them, left Philadelphia to seek a home somewhere in the west. This proved to be Cincinnati, Ohio.

"Here it was that I first met and became acquainted with your mother. And here it was on the 8th day of October 1820 we were married. Immediately after I married I left that town to seek my fortune apart from my relation and went to Louisville and on to Greensburg, Kentucky. I carried on shoe making here, employed all the hands I could get — worked steady and hard myself. In short did a considerable business for such a town. Your mother with her untiring industry did her full part in making a living and spared no exertion to make money by enterprise and industry. Here also I commenced the study of law and devoted every hour I could spare from business to that kind of reading.

"The law profession as I have before said was my favorite profession and I knew that the knowledge would be useful to me even if I did not practice. The confinement of my business now had injured my health very much. I became feeble and dyspeptic. I had long heard of Texas — its rich soil, its fine climate, its beautiful scenery and the advantage of getting large tracts of land there almost for nothing. I had pondered upon all these things for a long time — but the distance was so great, that country so far off — the heavy expense of going there, the risks, hardships, privations, and dangers a family would be exposed to in making the voyage all conspired to detain me for a long time from the undertaking, but now I had excited your mother's enthusiasm on the subject and her good sense led her to see the great advantages that would most probably result from the enterprize. And cheered on by her assistance and smiles of approval I determined to brave all hazard and make one strong daring effort to better a condition and secure a future competence for myself and family. Accordingly I set about making preparation for the journey.

An old daguerreotype photo of George W. Davis Sr., taken around 1860. (Courtesy Photo)

"For six weeks after we started we landed in New Orleans. At last we got on board of the schooner Emblem for Matagorda, Texas, and on the 12th day of February 1831, we landed at Cox's Point on the Lavaca Bay opposite to where Port Lavaca now stands; and about 20 miles from any house on the naked and lonesome bayshore. The roads were so bad wagons could not travel them. So a flat boat was built for passage up the Lavaca River.

"I did not like the country on the bay, the Lavaca River and on the Navidad at all, nothing could have induced me to live there. From the meagre description of the country which I had only been able to obtain and from the idea I had formed of it I started fixed upon the Guadalupe River as having the country on it that would please me best.

"After a few days I met with a man who was going to Gonzales and wanted company and I, anxious to hunt a home soon, agreed to go with him.

"Accordingly, I shouldered my rifle and on foot with a half dollar in my pocket, all the money I had in the world, a little wallet of provisions with the best heart I could muster, leaning on hope alone — set out. Here, now let me pause and look back upon that period of my life and may he draw from it a useful lesson.

"How little had I then to build hope upon — how gloomy the prospect was in reality. And then I — what had I to calculate or how to expect to live in this wilderness. I was alone, unknown and unfriended — unaccustomed and unfit for hard labour — knew nothing about it, had neither skill nor strength for it — was no hunter, and was incapable from near sightedness of ever becoming one. Thrown here where these were prime requisites, the only available qualifications. I could to be sure make shoes and boots, but what use for a boot maker among people who had no leather, where leather could not be got and were well content to wear moccasions."

NEXT: Davis arrives in Gonzales, and fortunes change.

Patriot Dinner

PUBLIC NOTICE
FINDING OF NO SIGNIFICANT IMPACT (FOR THE CONSTRUCTION OF THE ROYCE AND SARAH FARRAR POULTRY FACILITY)

USDA Farm Service Agency has reviewed the application for financial assistance from the Sage Capital Bank, N.A. on behalf of Royce and Sarah Farrar located in Gonzales County. Mr. and Mrs. Farrar propose to build four (4) broiler poultry houses in the approximate size of 54' X 600'. The poultry houses are located on +/-98.20 acres of land owned by Mr. and Mrs. Farrar. The houses (4) will house approximately 49,000 birds per house per flock for a total of 6.5 flocks constituting approximately 1,274,000 broilers, which are owned by Tyson Foods and are processed for human consumption. USDA Farm Service Agency has assessed the potential environmental impacts of these proposed actions and has determined that they will not significantly affect the quality of the human environment or important land resources. Therefore, USDA Farm Service Agency will not prepare an environmental impact statement for this proposed action. Any written comments regarding this determination should be provided within fifteen (15) days of this publication to:
Wayne Lyssy, District Director
USDA, Farm Service Agency
920 St. Joseph Street
Gonzales, TX 78629
USDA Farm Service Agency will make no further decisions regarding this proposed action during this fifteen (15) day period.
Requests to review the USDA Farm Service Agency environmental assessment upon which this determination is based, or to receive a copy of it, should be directed to the above address.

The Republican Women of Yoakum held their first Patriot Dinner on Aug. 30 at the Yoakum Community Center. This very successful event drew approximately 300 guests from DeWitt, Lavaca and surrounding counties. The evening began with a social hour and silent auction, followed by dinner catered by Werner's of Shiner. Guest speaker, Texas Land Commissioner Jerry Patterson, delivered an engaging speech about the Texas Land Office and what he believes to be important issues facing our state and nation. Master of ceremonies was Dr. Donna Campbell, 2010 Republican candidate for U.S. Congress, District 25. Many additional Republican state and local officials were present, including State Senator Glenn Hegar and Representative Lois Kolkhorst, as well as state and local Republican candidates for the 2012 election. Commissioner Patterson announced that he plans to run for the office of Texas Lieutenant Governor in 2014. The Patriot Dinner created an excellent opportunity for concerned citizens to speak with and learn about their elected representatives.

Pictured here from left are Becky Berger, Dori Wyatt, Texas Land Commissioner Jerry Patterson, Brenda Cash, Peggy Mayer and Frances Pohl. (Courtesy Photo)

Gonzales Healthcare Systems requests the pleasure of your company to meet and welcome our new full-time general surgeon, Kathleen Koerner, D.O., M.S. Outpatient Lobby, Thursday, September 22, 2011 at 3:00 p.m.

Great turnout for Belmont VFD fund-raiser

I don't know whose idea it was to have the Sheriff's Department controlling traffic on either side of the road from the Belmont BBQ but it was certainly a great idea. I have never seen so many cars at this BBQ in my life.
Sandi's Country Fried News
Sandi Gandre

There was a long line of people waiting for BBQ plates, but they have the serving down like a fine-tuned guitar and you get your meal fast. Thanks to everyone who came out and helped support the fire department. If we had known that this was what it took to get it to rain, we would have had a BBQ long before now. There were many happy and grinning people at this event because the ground was wet.

We lost our Mrs. Annie Kotwig, or Maw Maw, but she will always live in our hearts. She was a special lady and touched many lives in one way or another. She was one of those ladies that just loved everyone and loved a lot. We know that her family doted on her and will miss her so very much, but will not ever forget her. We send you our deepest sympathy.

We also send sympathy to the family of Roland Barthels. The only one of that family that I know is Bill Barthels, but they have suffered many losses lately and need our comfort. Roland was just our age and seemed to me to live a very interesting life. It always makes me angry and aggravated, to put it mildly, when cancer cuts down a person in the prime of life just when they have so many things going for them. Somehow, someday, a cure is going to be found to knock it out cold. In the meantime, we send you our deepest sympathy.

Now I am going to enlighten you about the Gonzales Apache Marching Band. First and foremost, this takes work. It requires a lot of work from the students and a lot of work from the parents. You arrive early to practice and you stay late to practice. It is tiring and you sometimes don't get much sleep, because in between times is when you must get your homework and everything else done. So when you see a band member, tell them how much you appreciate them. Think how empty it would sound without them at a football game. This band is going to a pre-UIL-certification marching band contest this Saturday in San Antonio at East Central High School. Wish those band members "Good Luck". They need to hear that. In addition to that, do you realize that their moms and dads are out there working in that concession stand to raise money to fund these trips, to buy the food, the drinks, etc.? So while the football players and the football game are important, that Gonzales Band is also important. Good luck guys!!!!

Wyatt Arp said that his band Deep Water caused it to rain. Well, I don't know about that, but he said he had been saying extra prayers. I think everyone had been saying so many extra prayers that they did not care. I don't think that they cared if they got wet or if they walked in water, it just did not matter, there was water falling from the sky. He had his real bass player with him Saturday night, who is Larry Wexler.

Steve celebrated his birthday out at the Belmont Social Club. Jack Finch was all grins because the wife Jayne and his doctors have finally turned him loose so that he can play with his Sunday prayer breakfast friends. I don't blame him. If you love to play and you love the fellowship, then you really miss it, and I hear that he has a new "play toy" to play on. I think this next week we have Best Friends on Friday, September 23rd and maybe Tommy Schlein on September 24th.

Rejoice that Micheane Mercer DeBoord has finally had her baby, who is named Levi Eugene. Levi weighed over seven pounds and was 19 inches long. Proud father is Chris DeBoord, and grandparents are Rhonda and Mike Mercer, and all of the rest of the Mercer Clan however they fit in. Congratulations!!

I couldn't believe it when they announced on KSAT news that Jim Dawson passed away at the age of ninety-one. He was one of our favorite weather forecasters and always drew a new cartoon every day at the end of his weather forecast. I am sure that many of you will remember Jim Dawson.

The Belmont Ladies Club will have their regular meeting at the Belmont Community Center on the fourth Tuesday of September at 2 p.m. Bingo prizes will be furnished by the club.

The Leesville Country Fair is Saturday, October 8th, at 10:00 a.m. at the pavilions by the Leesville Cemetery. This is the annual fund-raiser for the Leesville Cemetery and provides a $1,000.00 scholarship for a local high school student.

Please lift the following people up in your prayers: Joe Kotwig, Kenneth Crumley, Gilby, Roy Wright, Lynn (who is Rudolpho and Edna Garza's daughter) as well as Rudolpho and Edna; Elson Schreiber (a former employee of Johnson Oil); Kathryn, Dena Black, Karen Roecker Mahan, Mary Jane Keith, Bill and Marie Lott, Laddie Studler, William Fink, Doris and Alvin Hewell; Whitney, Chasidy, Mildred O'Neal, Lisa Rodriguez, Joe Keith, Debbie and Bill Read, Aunt Georgie; Lawrence Walshak, Joyce Schellenberg, Pete Kallies, Lillie Lay, Doug Walshak, Louise Rossow, Selma Vickers, Teresa Wilke, Sandi Gandre, Carl and Vida Tindle; Aunt Betty Gandre, Anna Lindemann, Ann and Bubba Bond; Shirley Dozier, Britt Hindman, Sean Weda, Scott Hindman, the family of Roland Barthels, the family of Annie Kotwig, family of John Conlin, the family of Marcia and Spike Pinney and our troops and their families, and RAIN. And we do need lots more. This was just a taste.

We saw Michael Wilson the other day and Will was teasing him about his hair turning grey. I guess we will just have to tease him a little bit more because he had a birthday last week and I forgot it. Sorry, Michael. Just because I forgot it does not mean that you did not get another year older.

Happy Anniversary to Jim and Ellen Wundt. They celebrated their 43rd wedding anniversary at the Belmont Social Club with Jim providing Ellen with a special steak in the shape of a heart.
(Really, Jim had nothing to do with this delicacy. That was Johnny Abrameit's idea, done back there in the kitchen.) It was sort of fun to watch them!!!

I am going to have to figure out what this funny noise is up on this high shelf before these cats break their necks. You know how noises are. I have tried to find what is making this noise, and when I get Will over, the noise quits. We have completely moved boxes and other items and it is still there. Dililah and Samson may develop permanent cricks in their necks trying to look at this shelf to see what is going on. Maybe someday we will figure it out. Have a good week and God Bless.

Mack, Medina to headline 'LiberTEA' event Oct. 22

AUSTIN — "Sheriff Mack" and former gubernatorial candidate Debra Medina are among the featured speakers at the Lone Star LiberTEA Fest, scheduled to start at 11 a.m. Saturday, Oct. 22 at the Nutty Brown Cafe, 12225 Highway 290 West in Austin. Sheriff Richard Mack of Arizona, the sheriff famed for his anti-illegal immigration stance and the author of "County Sheriff: America's Last Hope," is among the headliner speakers for the event. Medina, who finished third in last year's Republican gubernatorial primary, is the founder of We Texans. Other speakers include State Rep. David Simpson, who authored the TSA "Anti-Groping" bill during the 2010 legislative session; Daniel Miller, president of the Texas Nationalist Movement; Steve Baysinger, chairman of the Texas Tenth Amendment Center; George Rodriguez, president of the San Antonio Tea Party; Claver T.
Kamau-Imani, founder of RagingElephants.org; Tim Cox, founder of GOOOH (Get Out of Our House); Jason Rink of the Foundation for a Free Society; congressional candidate Wes Riddle; Phil Pepin, a member of the executive board of the Republican Freedom Coalition; liberty activist Bill Moses; Elena Chitta, who survived 30 years in communist Romania; Ken Hoover of the John Birch Society; and Heather Fazio, director of Texans for Accountable Government. Music will be provided by Kevin Southwick, Holly Tucker and the County Line Band. Tickets are $10 and are available online at or by calling organizer Diana Moses at 830-220-0217. Tickets are two-for-one through Sept. 30. On the day of the event, tickets will be $15. Children under 18 will be admitted free.

Constitutional Amendments
Special Election November 8, 2011

Proposition Number 1 (SJR 14)
SJR 14 would amend the constitution to authorize the legislature to provide the surviving spouse of a 100 percent or totally disabled veteran with an exemption from ad valorem taxation of all or part of the market value of the surviving spouse's residence homestead as long as the surviving spouse has not remarried, the property was the residence homestead of the surviving spouse when the qualifying veteran died, and the property remains the residence homestead of the surviving spouse.
The proposed amendment would appear on the ballot as follows: "The constitutional amendment authorizing the legislature to provide for an exemption from ad valorem taxation of all or part of the market value of the residence homestead of the surviving spouse of a 100 percent or totally disabled veteran."

Proposition Number 2 (SJR 4)
SJR 4 would amend the constitution to authorize the Texas Water Development Board to issue additional general obligation bonds on a continuing basis for one or more accounts of the Texas Water Development Fund II, with the restriction that the total amount of bonds outstanding at any time does not exceed $6 billion.
The proposed amendment would appear on the ballot as follows: "The constitutional amendment…

…students, subject to certain constitutional restrictions, including a restriction as to the maximum principal amount of bonds outstanding at any one time.
The proposed amendment would appear on the ballot as follows: "The constitutional amendment providing for the issuance of general obligation bonds of the State of Texas to finance educational loans to students."

Proposition Number 4 (HJR 63)
HJR 63 would amend the constitution to authorize the legislature to permit a county to issue bonds or notes to finance the development or redevelopment of an unproductive, underdeveloped, or blighted area within the county, and to pledge increases in ad valorem tax revenues imposed on property in the area by the county for repayment of such bonds or notes. The amendment does not provide independent authority for increasing ad valorem tax rates.
The proposed amendment would appear on the ballot as follows: "The constitutional amendment authorizing the legislature to permit a county to issue bonds or notes to finance the development or redevelopment of an unproductive, underdeveloped, or blighted area and to pledge for repayment of the bonds or notes increases in ad valorem taxes imposed by the county on property in the area. The amendment does not provide authority for increasing ad valorem tax rates."

…authorizing the legislature to allow cities or counties to enter into interlocal contracts with other cities or counties without the imposition of a tax or the provision of a sinking fund."

Proposition Number 6 (HJR 109)
HJR 109 would amend the constitution to increase the amount of principal that is available for withdrawal from the permanent school fund each year and would also clarify certain references to that fund in the constitution. Increased access to the principal of the state public education trust fund would be based upon HJR 109 granting the authority to consider alternative market calculations when determining the amount of principal that is available for distribution to the available school fund. HJR 109 would also provide authority to distribute to the available school fund annual revenue from school fund land or other properties up to $300 million per year.
The proposed amendment would appear on the ballot as follows: "The constitutional amendment clarifying references to the permanent school fund, allowing the General Land Office to distribute revenue from permanent school fund land or other properties to the available school fund to provide additional funding for public education, and providing for an increase in the market value of the permanent school fund for the purpose of allowing increased distributions from the available school fund."

Proposition Number 7 (SJR 28)
SJR 28 would amend the constitution by adding El Paso County…

Proposition Number 8 (SJR 16)
SJR 16 would amend the constitution by requiring the legislature to provide for taxation of open space land devoted to water stewardship purposes on the basis of its productive capacity.
The proposed amendment would appear on the ballot as follows: "The constitutional amendment providing for the appraisal for ad valorem tax purposes of open-space land devoted to water-stewardship purposes on the basis of its productive capacity."

Proposition Number 9 (SJR 9)
SJR 9 would amend the constitution to authorize the governor, on the written recommendation and advice of the Board of Pardons and Paroles, to grant a pardon, reprieve, or commutation of punishment to a person who successfully completes a term of deferred adjudication community supervision.
The proposed amendment would appear on the ballot as follows: "The constitutional amendment authorizing the governor to grant a pardon to a person who successfully completes a term of deferred adjudication community supervision."

Proposition Number 10 (SJR 37)
SJR 37 would amend the constitution by extending the length of the unexpired term that causes the automatic resignation of certain local elected officeholders if they announce candidacy or become candidates for another office from one year to one year and 30 days.
providing for the issuance of adProposition Number 5 to the list of counties authorized ditional general obligation bonds (SJR 26) to create conservation and recla- The proposed amendment would by the Texas Water Development appear on the ballot as follows: Board in an amount not to exceed SJR 26 would amend the con- mation districts to develop parks “The constitutional amendment $6 billion at any time outstand- stitution to authorize the legisla- and recreational facilities fito change the length of the unexnanced by taxes. ing.” ture to allow cities and counties pired term that causes the autoto enter into interlocal contracts The proposed amendment would matic resignation of certain electProposition Number 3 with other cities and counties appear on the ballot as follows: ed county or district officeholders without having to assess an ad (SJR 50) “The constitutional amendment if they become candidates for anvalorem tax and set aside a speciother office.” SJR 50 would amend the constitu- fied amount of funds for the pay- authorizing the legislature to tion to authorize the Texas Higher ment of costs under the interlocal permit conservation and reclamation districts in El Paso County Education Coordinating Board or contract. to issue bonds supported by ad Published by Secretary of State its successors to issue and sell Hope Andrade, general obligation bonds on a The proposed amendment would valorem taxes to fund the devel, continuing basis for the purpose appear on the ballot as follows: opment and maintenance of parks 1-800-252-VOTE (8683). of financing educational loans for “The constitutional amendment and recreational facilities.” Brief Explanatory Statements of Proposed PUBLIC NOTICE 25 years of Lions service Lions Club International has awarded a 25 year membership pin to Lion Andy Rodriguez. The award was presented to Rodriguez by Zone Chairman Lion Greg McLain at the club’s regular meeting Monday, Sept. 12. 
In addition to serving his club in every office position, Lion Rodriguez currently has the distinction of serving as District Governor for Lions District 2-S5. In that capacity, D.G. Rodriguez said that he has issued a plea to all clubs in the district (60+ clubs) to conduct a special fund-raising effort to assist Lions club members who have lost their homes and possessions in the area forest fires. The Noon Lions then approved a $500 donation from the club's activity fund and conducted a drive during the meeting that raised an additional $665. Rodriguez reported that prior to this local fund-raiser, he had already been advised that over $8,000 has been raised by other district clubs, plus a preliminary grant of $2,000 from LCIF (Lions Clubs International Foundation). (Courtesy photo)

Puzzle Page - The Gonzales Cannon, Thursday, September 22, 2011, Page D5

ARIES - Mar 21/Apr 20: Aries, if romance hasn't been on your mind, it's time to make it a priority. Do what you have to do -- wine, dine and pull out all the romantic punches.
TAURUS - Apr 21/May 21: Another person's misdeeds will shed some light on your own, Taurus. Recognize your mistakes and work to correct them as soon as possible.
GEMINI - May 22/Jun 21: Gemini, you will need an abundance of patience if you are to make it through the next few days. Thursday proves especially challenging when a curveball gets thrown your way.
CANCER - Jun 22/Jul 22: Manipulate a difficult situation to your advantage, Cancer. You already have a way with people, now you just have to get them on board with your idea.
LEO - Jul 23/Aug 23: Leo, after a few bumps along the road, things will even out to a steady pace for you. That's a good thing because now you'll be able to step back and review your actions.
VIRGO - Aug 24/Sept 22: Virgo, someone else's needs will take priority over your own this week. That could put a crimp in your plans. Find out if you will need help to get through the days.
LIBRA - Sept 23/Oct 23: Libra, just because something costs more doesn't mean it is necessarily better. You will learn this on Friday with your next purchase as you do your research.
SCORPIO - Oct 24/Nov 22: Find a way to reduce the stress in your life, Scorpio. This way you can enjoy family and friends without a lot of things on your mind at any given time.
SAGITTARIUS - Nov 23/Dec 21: Sagittarius, don't make too much of a situation because you're reading into it the wrong way. The truth is much less dramatic than you are making it. Excitement awaits you.
CAPRICORN - Dec 22/Jan 20: Capricorn, if you don't take a breather now and then you will be left with little energy. Take advantage of invitations by friends to hang out and enjoy some downtime.
AQUARIUS - Jan 21/Feb 18: Aquarius, you may feel like you're taking two steps back every day, but the truth is you're making progress, just in small doses. Stick with what you're doing.
PISCES - Feb 19/Mar 20: Pisces, few things are more exciting than being surprised by someone you love and respect. That is just what may happen to you.

FAMOUS BIRTHDAYS
SEPTEMBER 25: Will Smith, Actor (43)
SEPTEMBER 26: Olivia Newton-John, Singer (63)
SEPTEMBER 27: Gwyneth Paltrow, Actress (39)
SEPTEMBER 28: Hilary Duff, Actress (24)
SEPTEMBER 29: Mackenzie Crook, Actor (40)
SEPTEMBER 30: Jenna Elfman, Actress (40)
OCTOBER 1:

Puzzle Answers Page D6

"Yoyo" means "come-come" in the native language of the Philippines.
Hi all, I am trying to solve the problem below via a Python script. I have a lot (600+) of these situations where I have surfaces as input and I would have to remove areas from these surfaces. However, I can't find an easy way to do it via scripting: making the red contours solids and then booleaning them out of the initial surface seems to fail, and I would also need to do this on a lot of elements. Could anyone suggest a smart way to do it?

e.g. script not working and potentially too slow:

import rhinoscriptsyntax as rs

surface = rs.GetObject("srf", 8)
cutpieces = rs.GetObjects("curves")
path = rs.AddLine([0, 0, 0], [3, 0, 0])

solids = []
for cutpiece in cutpieces:
    newsolid = rs.ExtrudeCurve(cutpiece, path)
    rs.CapPlanarHoles(newsolid)
    rs.MoveObject(newsolid, [-1.5, 0, 0])
    solids.append(newsolid)

rs.BooleanDifference(surface, solids)

Many thanks in advance

removing areas.3dm (77.1 KB)
Lesson 6 - Don't reinvent the wheel, use CocoaPods

In the previous lesson, Introduction to the important TableView component, we learned to use TableView. Have you been programming in a different language, or do you just think it isn't necessary to reinvent the wheel over and over again? You're in the right place today. We're going to have a look at how to avoid programming the same things repeatedly. Swift doesn't officially support packages, unlike e.g. NuGet for .NET projects, but that's no big problem. There are a lot of unofficial solutions and probably the best is CocoaPods.

CocoaPods
The individual packages are called pods and allow us to simply reuse functionality another person has already programmed. In this lesson, we're going to learn how to get pods into our project and introduce a few you should know about. To be able to use the power of CocoaPods, you have to install CocoaPods on your Mac first (there are two approaches). Then you prepare your project for use with CocoaPods. You specify which packages you want and then you can just use import. Don't be afraid, you'll see it's worth it by the end of this lesson.

Terminal approach
Are you friends with the terminal? It's one of the ways to install CocoaPods on your Mac and then add it to your project. There is also a simple CocoaPods application that does the same. If you don't want to write commands in the terminal, you can just skip this part of the tutorial. Open the terminal and start by installing CocoaPods itself:

sudo gem install cocoapods

The installation may take a while.

Application
If you decided to install CocoaPods using the application, download the official CocoaPods application from its website. That's all for now.

How pods work
Now let's explain how pods work in a project. Until now, we've had one project per app. Pods have their own project, so CocoaPods will create a Workspace from your project first. Simply put, that means you will have more projects "under one roof".
We don't have to think about it much. The second important part of CocoaPods is the Podfile in every project. You can specify in it what pods you want to use and CocoaPods will take care of the rest.

Terminal
Now, let's have a look at how to prepare our project for pods. In the terminal, open the folder of the project where you want to add the pods. You should be in the very root folder, which means you should see a file with the .xcodeproj extension representing your Xcode project. Close Xcode. Now initialize CocoaPods with this command:

pod init

Again, just wait and your project is ready for pods.

Application
Click "File" in the application menu and choose "New Podfile from Xcode Project". Now we have to navigate to the project folder and select the file with the .xcodeproj extension. The CocoaPods application will create and open a Podfile.

Editing Podfiles
You've probably already noticed the changes. Better said, a change, which is the Podfile added to the root folder of our project. Right here, we can set what pods we want to use. You can edit the Podfile in any editor you want. If you've decided to use the application, it makes sense to edit it right there. If you've created the Podfile via the application, the file should already be opened. If you took the terminal approach, you can open it in anything, even Xcode will do. The Podfile is basically empty now; apart from a couple of comments, it only contains the information about which project it was created for:

# Uncomment the next line to define a global platform for your project
# platform :ios, '9.0'

target 'Cocoapods_ICTsocial' do
  # Comment the next line if you don't want to use dynamic frameworks
  use_frameworks!

  # Pods for Cocoapods_ICTsocial

end

Let's do what the comment is telling us to do and uncomment the line with the global platform definition. That means deleting the # which is used for comments:

platform :ios, '9.0'

Chameleon
We can show the use of pods on a small, but very useful pod named Chameleon. It's a framework for iOS colors. You can find the instructions on how to install it via CocoaPods on the project's GitHub page.
Following those, let's add Chameleon as one of the pods under the # Pods for Cocoapods_ICTsocial comment in our file:

pod 'ChameleonFramework/Swift', :git => '', :branch => 'wip/swift4'

The command is a bit more complicated to make the framework work with the current version of Xcode and Swift 4. Usually the pod keyword and the name of the pod will do. Let's have a look at how the whole file looks now:

platform :ios, '9.0'

target 'Cocoapods_ICTsocial' do
  use_frameworks!

  # Pods for Cocoapods_ICTsocial
  pod 'ChameleonFramework/Swift', :git => '', :branch => 'wip/swift4'

end

Terminal
If you've used CocoaPods via terminal, save the Podfile now and run this command:

pod install

Application
If you've used the CocoaPods application, just click the "Install" button in the top right corner and wait a while.

The project with Chameleon
The project looks more interesting now. There's the Pods/ folder where our pods are stored and also the very important file with the .xcworkspace extension. From now on, we'll use this file to open the project, because the pods have their own projects and if we tried to open it through the original .xcodeproj, it wouldn't work. So let's open the project via .xcworkspace and try to use Chameleon right away. We'll move to ViewController.swift and add an import for the freshly installed pod:

import ChameleonFramework

We'll try to build the project (Cmd + B) to make sure everything is okay. Using the Chameleon framework doesn't do much in an empty project, so let's make sure it works. Chameleon provides, for example, much prettier colors than the system ones. You can find a list of them on the project page and access them using the UIColor system class, which now has more colors available. Xcode will show us e.g. the available pastel "flat" colors:

It's worth considering the framework just for the colors. The app will look a bit better with little to no effort. Chameleon can also help us with gradients. It can generate colors from images or set a nice contrast color according to a given background.
That's useful when we have dynamic colors and always want the text to be readable. This can be done just by using a single method, which returns a contrasting UIColor for the given background. You can also choose whether a flat color should be returned.

ContrastColorOf(UIColor.flatBlue, returnFlat: true)

In one of the further lessons, we'll use CocoaPods when getting data from the Internet (in the JSON format) and processing it. Both can be done in pure Swift, but the Alamofire and SwiftyJSON libraries make things much easier. We can say that these two have become the unofficial standard for apps that download data from the Internet and process the JSON format.

In the next lesson, When a single screen isn't enough - Navigation in iOS, we'll have a look at navigation and start writing the promised TODO app.
31 replies on 3 pages. Most recent reply: Oct 1, 2011 11:59 PM by Qingsheng Gao

I'm actually glad I waited this long before beginning to learn the language, because they've sorted out a lot of issues in the meantime. In fact, several versions of the language have made breaking changes with previous versions, requiring code rewrites. Some people have found this shocking; an indication that the language is "immature" and "not ready for the enterprise." I find it one of the most promising things about Scala -- it is not determined to become an instant boat anchor by committing to early decisions that are later revealed to be suboptimal, or outright mistakes. Java is the perfect case study, unable to pry its cold, dead fingers from old decisions made badly in a rush to meet an imagined deadline imposed by the Internet. C++ was admirable when it determined to be C-compatible because it brought legions of C programmers into the world of object-oriented programming, but coping with the resulting hurdles is no longer a good use of programmer time. Indeed, I grew tired of the whole mindset that language design is more important than programmer time; that a programmer should work for the language rather than the reverse. So much so that I thought I had grown out of programming altogether. But now I think I might just have been tired of the old generation of languages and waiting for the next generation -- and especially the forward-thinking around those languages. If you've read my past writings, you know I am unimpressed with arguments about static type checking for its own sake, which typically come down to "if I can't know X is an int, then the world will collapse!" I've written and seen enough robust code in Python to be unswayed by such histrionics; the payoff for all the hoop-jumping in C++ and Java seems small compared to what can be accomplished using far less, and much clearer, Python code.
Scala is the first language I've seen where static type-checking seems to pay off. Some of its amazing contortional abilities would not, I think, be possible without static type checking. And, as I shall attempt to show in this article, the static checking is relatively unobtrusive -- so much so that programming in Scala almost feels like programming in a dynamic language like Python. One retort I've gotten a lot when I discuss the shortcomings of Java compared with a language like Python is "oh, you're just complaining about Finger Typing" (as opposed to the "typing" of type-checking). You can trivialize "finger typing" but in my experience it really does make a big difference when you can take an idea and express it in a few keystrokes versus the veritable vomiting of code necessary to express even the simplest concepts in Java. The real problem is not the number of keystrokes, but the mental load. By the time you've jumped through all those hoops, you've forgotten what you were actually trying to do. Often, the ceremony involved in doing something will dissuade you from trying it. Scala removes as much of the overhead (and mental load) as possible, so you can express higher-order concepts as quickly as you can type them. I was amazed to discover that in many cases, Scala is even more succinct than Python. The result of all this is something I've always loved about Python: the level of abstraction is such that you can typically express an idea in code more easily and clearly than you can by making diagrams on a whiteboard. There's no need for that intermediate step. Let's look at an example. Suppose you'd like to model buildings. We can say:

class Building
val b = new Building

Note the absolute minimum amount of ceremony to create a class -- great when you're just sketching out a solution. If you don't need parens, you don't write them.
A val is immutable, which is preferred in Scala because it makes concurrent code easier to write (there is also var for variables). And notice that I didn't have to put any type information on b, because Scala has type inference, so if it can figure out the type for you, it will. No more jumping through hoops to satisfy a lazy language. If we want the Building to know how many square feet it contains, there's an explicit way:

class Building(feet: Int) {
  val squareFeet = feet
}
val b = new Building(100)
println(b.squareFeet)

When you do need to provide type information, you just give it after a colon. Note that println() does not require Java's System.out scoping. And class fields default to public -- which is not a big deal if you can stick to val, since that makes it read-only. You can always make them private if you want, and Scala has more fine-grained access control than any language I've seen. If all you want to do is store the argument in the class, as above, Scala makes it easy. Note the addition of the val in the argument list:

class Building(val feet: Int)
val b = new Building(100)
println(b.feet)

Now feet automatically becomes the field. But it doesn't stop there. Scala has the case class which does even more for you. For one thing, arguments automatically become fields, without saying val before them:

case class Building(feet: Int)
val b = Building(100)
println(b) // Result: Building(100)

Note the new is no longer necessary to create an object, the same form that Python uses. And case classes rewrite toString for you, to produce nice output. But wait, there's more!
A case class automatically gets an appropriate hashcode and == so you can use it in a Map (the -> separates keys from values):

val m = Map(Building(5000) -> "Big", Building(900) -> "Small", Building(2500) -> "Medium")
m(Building(900)) // Result: Small

Note that Map is available (along with List, Vector, Set, println() and more) as part of the "basic Scala building set" that comes without any imports. Again, this feels like Python. Inheritance is also succinct. Suppose we want to subclass Building to make a House class:

class House(feet: Int) extends Building(feet)
val h = new House(100)
println(h.feet) // Result: 100

Although the extends keyword is familiar from Java, notice how the base-class constructor is called -- a pretty obvious way to do it, once you've seen it. And again, you don't write any more code than what is absolutely necessary to describe your system. We can also mix in behavior using traits. A trait is much like an interface, except that traits can contain method definitions, which can then be combined when creating a class. Here are several traits to help describe a house:

trait Bathroom
trait Kitchen
trait Bedroom {
  def occupants() = { 1 }
}

class House(feet: Int) extends Building(feet) with Bathroom with Kitchen with Bedroom

var h = new House(100)
val o = h.occupants()
val feet = h.feet

occupants() is a typical Scala method definition: the keyword def followed by the method name, argument list, and then an = and the body of the method in curly braces. The last line in the method produces the return value. More type inference is happening here; if we wanted to be more specific we could specify the return type of the method:

def occupants(): Int = { 1 }

Notice that the method occupants() is now part of House, via the mixin effect of traits. Consider how simple this code is ... and how undistracting. You can talk about what it's doing, rather than explaining meaningless syntactic requirements as you must do in Java.
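Since a trait's method definitions are ordinary methods, a class that mixes one in can also override them. Here's a self-contained sketch of my own (the FamilyHouse class and its occupant count are invented for illustration, not from the article):

```scala
// Traits as in the text; Bedroom supplies a default implementation.
trait Bathroom
trait Kitchen
trait Bedroom {
  def occupants() = { 1 } // default: one occupant
}

class Building(val feet: Int)

// A hypothetical subclass that overrides the mixed-in default.
class FamilyHouse(feet: Int) extends Building(feet)
    with Bathroom with Kitchen with Bedroom {
  override def occupants() = 4
}

val f = new FamilyHouse(2500)
println(f.feet)        // 2500
println(f.occupants()) // 4
```

Note that the override keyword is mandatory here, so you can't accidentally shadow a trait method -- another place where the static checking quietly earns its keep.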
Creating a model takes no more than a few lines of straightforward code. Wouldn't you rather teach this to a novice programmer than Java?

Functional programming is often promoted first as a way to do concurrency. However, I've found it to be more fundamentally useful as a way to decompose programming problems. Indeed, C++ has had functional programming virtually from inception, in the form of the STL, without built-in support for concurrency. Python also has significant functional programming libraries, but these are independent of its thread support (which, since Python cannot support true parallelism, is primarily for code organization). Scala has the best of both worlds: true multiprocessor parallelism and a powerful functional programming model -- but one that does not force you to program functionally if it's not appropriate. When approaching a functional style of programming, I think it's important to go slow and be gentle with yourself. If you push too hard you can get caught up in knots. In fact, I think one of the great benefits of learning functional programming is that it disciplines you to break a problem into small, provable steps -- and to use existing (and proven) code for each of those steps whenever possible. This not only makes your non-functional code better, but it also tends to make everything you write more testable, since functional programming focuses on transforming data (thus, after each transformation, you have something else to test). Much of functional programming involves performing operations on collections. If, for example, we have a Vector of data:

val v = Vector(1.1, 2.2, 3.3, 4.4)

You can certainly print this using a for loop:

for(n <- v) { println(n) }

The left-arrow can be pronounced "in" -- n gets each value in v. This syntax is definitely a step up from having to give every detail as you had to do in C++ and Java (note that Scala does all the creation and type-inference for n).
But with functional programming, you extract the looping structures altogether. Scala collections and iterables have a large selection of operations to do this for you. One of the simplest is foreach, which performs an operation on each element in the collection. So the above code becomes:

v.foreach(println)

This actually uses several shortcuts, and to take full advantage of functional programming you first need to understand the anonymous function -- a function without a name. Here's the basic form:

( function parameters ) => function body

The => is often pronounced "rocket," and it means, "Take the parameters on the left and apply them in the code on the right." An anonymous function can be large; if you have multiple lines, just put the body inside curly braces. Here's a simple example of an anonymous function:

(x: Int, y: Double) => x * y

The previous foreach call is, stated explicitly:

v.foreach((n: Double) => println(n))

Usually, you can rely on Scala to do type inference on the argument -- in this case Scala can see that v contains Doubles, so it can infer that n is a Double:

v.foreach((n) => println(n))

If you only have a single argument, you can omit the parentheses:

v.foreach(n => println(n))

When you have a single argument, you can leave out the parameter list altogether and use an underscore in the anonymous function body:

v.foreach(println(_))

And finally, if the function body is just a call to a single function that takes one parameter, you can eliminate the parameter list, which brings us back to:

v.foreach(println)

With all these options and the density possible in functional programming, it's easy to succumb to fits of cleverness and end up writing obtuse code that will cause people to reject the language as too complex. But with some effort and focus on readability this doesn't need to happen. foreach relies on side effects and doesn't return anything.
In more typical functional programming you'll perform operations (usually on a collection) and return the result, then perform operations on that result and return something else, etc. One of the most useful functional tools is map, rather unfortunately named because it's easy to confuse with the Map data structure. map performs an operation on each element in a sequence, just like foreach, but map creates and returns a new sequence from the result. For example:

v.map(n => n * 2)

multiplies each element in v by 2 and returns the result, producing:

Vector(2.2, 4.4, 6.6, 8.8)

Again, using shortcuts we can reduce the call to:

v.map(_ * 2)

There are a number of operations that are simple enough to be called without parameters, such as:

v.reverse
v.sum
v.sorted
v.min
v.max
v.size
v.isEmpty

Operations like reverse and sorted return a new Vector and leave the original untouched. It's common to see operations chained together. For example, permutations produces an iterator that selects all the different permutations of v. To display these, we pass the iterator to foreach:

v.permutations.foreach(println)

Another helpful function is zip, which takes two sequences and puts each adjacent element together, like a zipper. This:

Vector(1, 2, 3).zip(Vector(4, 5, 6))

produces:

Vector((1,4), (2,5), (3,6))

(Yes, the parenthesized groups within the Vector are tuples, just like in Python). We can get fancy, and zip the elements of v together with those elements multiplied by 2:

v.zip(v.map(_ * 2))

which produces:

Vector((1.1,2.2), (2.2,4.4), (3.3,6.6), (4.4,8.8))

It's important to know that anonymous functions are a convenience, and very commonly used, but they are not essential for doing functional programming. If anonymous functions are making your code too complicated, you can always define a named function and pass that. For example:

def timesTwo(d: Double) = d * 2

(This uses another Scala shortcut: if the function body fits on one line, you don't need curly braces).
This can be used instead of the anonymous function:

v.zip(v.map(timesTwo))

You know you could produce the same effect as the code in this section using for loops. One of the biggest benefits of functional programming is that it takes care of the fiddly code -- the very code that seems to involve the kind of common errors that easily escape our notice. You're able to use the functional pieces as reliable building blocks, and create robust code more quickly. It certainly is easy for functional code to rapidly devolve into unreadability, but with some effort you can keep it clear. For me, one of the best things about functional programming is the mental discipline that it produces. I find it helps me learn to break problems down into small, testable pieces, and clarifies my analysis. For that reason alone, it is a worthwhile practice.

It's amazing how long programmers have put up with stone-age (or more appropriately, assembly-age) language constructs. The switch statement is an excellent example. Seriously, jumping around based on an integral value? How much effort does that really save me? People have begged for things as simple as switching on strings, but this is usually met with "no" from the language designers. Scala leapfrogs all that with the match statement, which looks much like a switch statement except that it can select on just about anything.
The clarity and code savings are huge:

// PatternMatching.scala (Run as script: scala PatternMatching.scala)
trait Color
case class Red(saturation: Int) extends Color
case class Green(saturation: Int) extends Color
case class Blue(saturation: Int) extends Color

def matcher(arg: Any): String = arg match {
  case "Chowder" => "Make with clams"
  case x: Int => "An Int with value " + x
  case Red(100) => "Red sat 100"
  case Green(s) => "Green sat " + s
  case c: Color => "Some Color: " + c
  case w: Any => "Whatever: " + w
  case _ => "Default, but Any captures all"
}

val v = Vector(1, "Chowder", Red(100), Green(50), Blue(0), 3.14)
v.foreach(x => println(matcher(x)))

A case class is especially useful because the pattern matcher can decompose it, as you'll see. Any is the root class of all objects, including what would be "primitive" types in Java. Since matcher() takes an Any, we can be confident that it will handle any type that we pass in. Ordinarily you'd see an opening curly brace right after the = sign, to surround the entire function body in curly braces. In this case, the function body is a single statement so I can take a shortcut and leave off the outer braces. A pattern-matching statement starts with the object you want to match against (this can be a tuple), the match keyword and a body consisting of a sequence of case statements. Each case begins with the match pattern, then a rocket and one or more lines of code which execute upon matching. The last line in each case produces a return value. Match expressions can take many forms, only a few of which are shown here. First, you see a simple string match; however, Scala has sophisticated regular expression syntax and you can use regular expressions as match expressions, including picking out the pieces into variables. You can capture the result of a match into a variable, as in case x: Int. Case classes can produce an exact match as in Red(100), or you can pick out the constructor arguments as in Green(s).
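The regular-expression form mentioned above deserves a quick illustration. This is a sketch of my own, not code from the article; the DateRx pattern and the describe() helper are invented names:

```scala
// Calling .r on a string produces a Regex, which works as a match
// pattern; each capture group binds to a variable in the case.
val DateRx = """(\d{4})-(\d{2})-(\d{2})""".r

def describe(s: String): String = s match {
  case DateRx(year, month, day) => "Year " + year + ", month " + month + ", day " + day
  case _ => "Not a date"
}

println(describe("2011-09-22")) // Year 2011, month 09, day 22
println(describe("Chowder"))    // Not a date
```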
You can also match against traits, as in c: Color. You have two choices if you want to catch everything else. To capture into a variable, you can match Any, as in case w: Any. If you don't care what the value is, you can just say case _. Note that no "break" statement is necessary at the end of each case body.

Most of what drove me away from programming were things I had figured out but couldn't convincingly express to others. Things that the Ph.D. computer scientists ought to be proving. Such as:

Note that all these are issues of scale -- things that work in the small start falling apart as programs get bigger or more complex. That's probably why they're hard to argue about, because demonstration examples can be small and obvious. It turns out I was arguing with the wrong people. Or rather, the right people were not arguing about it, they were off fixing the problems.

When it comes to concurrency, the right answer is one that you can't screw up: you live behind a safe wall, and messages get safely passed back and forth over the wall. You don't have to think about whether something is going to lock up (not on a low level, anyway); you live in your little walled garden which happens to run with its own thread. The most object-ish approach to this that I've seen is actors. An actor is an object that has an incoming message queue, often referred to as a "mailbox." When someone outside your walled garden wants you to do something, they send you a message that safely appears in your mailbox, and you decide how to handle that message. You can send messages to other actors through their mailboxes. As long as you keep everything within your walls and only communicate through messages, you're safe. To create an actor, you inherit from the Actor class and define an act() method, which is called to handle mailbox messages.
Here's the most trivial example I could think of:

// Bunnies.scala (Run as script: scala Bunnies.scala)

case object Hop
case object Stop

case class Bunny(id: Int) extends scala.actors.Actor {
  this ! Hop  // Constructor code
  start()     // ditto
  def act() {
    loop {
      react {
        case Hop =>
          print(this + " ")
          this ! Hop
          Thread.sleep(500)
        case Stop =>
          println("Stopping " + this)
          exit()
      }
    }
  }
}

val bunnies = Range(0, 10).map(new Bunny(_))
println("Press RETURN to quit")
readLine
bunnies.foreach(_ ! Stop)

The act() method is automatically a match statement, although this is not built into the language -- Scala magic was used to make the Actor library work this way. Because of the match statement, case objects work especially well as messages (although, as with any match statement, you can match on virtually anything) -- a case object is just like a case class except that defining one automatically creates a singleton object.

The loop{ react{ construct looks a little strange at first; this is an artifact of the evolution of Scala actors. In the initial design, you only had a loop to open the match statement for mailbox messages. But later, in an act of brilliance, it was determined that the concurrency provided by threads could be combined with cooperative multitasking, wherein a single thread of control is passed around -- cooperatively -- among tasks. Each task does something and then explicitly gives up control, which is then passed to the next task. The benefit of cooperative multitasking is that it requires virtually no stack space or context switching time and thus it can scale up -- often to millions of tasks. By combining this with threaded concurrency, you get the best of both worlds: the speed and scalability of cooperative tasks, which are also distributed across as many processors as are available. This all comes transparently. The loop{ react{ construct should be your default choice, and doesn't cost anything.
I suspect if they were creating actors from scratch now, this construct would probably have been simplified into just loop{.

Note the two "naked" lines of code at the beginning of class Bunny. In Scala, you don't have to put object initialization code inside a special method, and you can put it anywhere inside the body of the class. The first line uses the Actor operator ! for sending messages, and in this case the object sends a message to itself, to get things going. Then it calls start() to begin the actor's message loop. When the actor receives a Hop message, it prints itself, sends itself another Hop message, then sleeps for half a second. When it gets a Stop message it calls Actor.exit() to stop the event loop. To create all the Bunny objects I use Range() to create a sequence from 0 through 9, which is mapped onto calls to Bunny constructors. readLine waits for the user to press a carriage return, at which point a Stop message is sent to each Bunny.

Scala 2.9 includes parallel collections, a powerful way to easily use multiple processors for bulk operations like foreach, map, etc. Suppose you have a collection of data objects called toBeProcessed and an expensive function process. To automatically parallelize the processing, you just add a .par:

val result = toBeProcessed.par.map(obj => process(obj))

If you know that you have objects that can be processed in parallel, this construct makes it effortless. You can find out more about parallel collections in this Scala Days 2010 video. Even more powerful is the akka library, which builds concurrent systems that are, among other things, transparently remoteable. Scala is the best solution for concurrent programming that I've seen, and it keeps getting better.

Scala does suffer from the mistaken idea that it's complicated, and for good reason. Many early adopters have been language enthusiasts who love to show how clever they are, and this only confuses beginners.
But you can see from the code above that learning Scala should be a lot easier than learning Java! There's none of the horrible Java ceremony necessary just to write "Hello, world!" -- in Scala you can actually create a one-line script that says:

println("Hello, world!")

Or you can run it in Scala's interactive interpreter, which allows you to easily experiment with the language. Or consider opening a file and processing the contents (something that's also very high-ceremony in Java):

val fileLines = io.Source.fromFile("Colors.scala").getLines.toList
fileLines.foreach(println)

(The "processing" in this case is just printing each line.) The simplicity of the code required to open a file and read all the lines, combined with the power of the language, suggests that Scala can be very useful for solving scripting problems (Scala also has strong native support for XML). We can make some small modifications to create a word-count program:

for (file <- args) {
  print(file + ": ")
  val contents = io.Source.fromFile(file).getLines.mkString
  println(contents.split(" ").length)
}

args is available to all programs and contains all the command-line arguments, so this program steps through them one at a time. Here, we split words at white space, but Scala also has regular expressions.

It is possible to write complex code that requires expertise to unravel. But it's totally unnecessary to write such code when teaching beginners. Indeed, if taught right, a person should come away from Scala thinking that it is a simpler, more consistent language than the alternatives.

All the Scala tutorials I encountered assume that you are a Java programmer. This is unfortunate because, as I've shown above, Scala could be taught as a first language in a much less confusing way than we are forced to teach Java. But it does make it easier for writers to assume that you know how to program, and in Java. There are language features that I have only touched on here, or not covered at all.
What I've shown should either give you the urge to learn and use Scala, or it will have you running back to the safety of your favorite language.
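As a cross-language aside, the actor-and-mailbox model described above is not Scala-specific. Here is a minimal sketch of the same idea in Python, using a thread with a queue as its mailbox. This is an illustration of the concept only, not how Scala's actor library (or its thread/task hybrid scheduling) is implemented:

```python
import queue
import threading

class Actor:
    """A tiny mailbox-based actor: all communication goes through the queue."""
    def __init__(self):
        self.mailbox = queue.Queue()
        self.log = []
        self.thread = threading.Thread(target=self._act)
        self.thread.start()

    def send(self, msg):
        # The counterpart of Scala's `!` operator
        self.mailbox.put(msg)

    def _act(self):
        # The counterpart of act(): block on the mailbox, dispatch on the message
        while True:
            msg = self.mailbox.get()
            if msg == "Stop":
                self.log.append("stopped")
                return
            self.log.append("hopped")

a = Actor()
a.send("Hop")
a.send("Hop")
a.send("Stop")
a.thread.join()
print(a.log)
```

Because the actor's internal state is touched only by its own thread, and everything else goes through the queue, there is nothing to lock: that is the "walled garden" property the article describes.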
http://www.artima.com/forums/flat.jsp?forum=106&thread=328540
Cell magics in IPython

In the previous post, I explained what the magic functions are and why they are cool. We have also created a line magic function that interprets mathematical formulas written in Polish notation. Today, we will talk about cell magic functions.

Cell magics are similar to line magics, except that they work on cells (blocks of code), not on single lines. IPython comes with a few predefined ones, and most of them will let you interpret code written in a different programming language. Need to run some Python 2 code, but IPython is using Python 3 by default? No problem, just type %%python2, paste/type the code and run it:

In [1]: print 'hello there'
  File "<ipython-input-1-202d533f5f80>", line 1
    print 'hello there'
                      ^
SyntaxError: Missing parentheses in call to 'print'. Did you mean print('hello there')?

# But!
In [2]: %%python2
   ...: print 'hello there'
   ...:
   ...:
hello there

You can also run code written in Ruby, Bash, JavaScript, and other languages. And those different blocks of code can interact with each other; for example, you can run some JavaScript code and send variables back to Python.

Writing a cell magic function

Now, let's try to write our own cell magic function. I initially wanted to continue with the example of Polish notation from the first part of the series. So I started writing a function that translates all the mathematical operations in a block of code into a Polish notation form. Unfortunately, I quickly realized that if I want to write a good example (not some half-assed code that works only for + and -), I would have to write a proper interpreter. And that would no longer be a simple example [1].

So this time, we are going to do something different. One of the new features that came in Python in version 3.5 is type hints. Some people like them, some people don't (which is probably true for every new feature in every programming language). The nice thing about Python type hints is that they are not mandatory.
If you don’t like them - don’t use them. For fast prototyping or a project that you are maintaining yourself, you are probably fine without them. But for a large code base, with plenty of legacy code maintained by multiple developers - type hints can be tremendously helpful! As you are probably starting to guess, our cell magic function will check types for a block of code. Why? Well, with IPython, you can quickly prototype some code, tweak it and save it to a file using the %save or %%writefile magic functions (or simply copy and paste it, if it’s faster for you). But, at the time of writing this article, there is no built-in type checker in Python. The mypy library is a de facto static type checker, but it’s still an external tool that you run from shell ( mypy filename.py). So let’s make a helper that will allow us to type-check Python code directly in IPython! This is how we expect it to work: In [1]: %%mypy ...: def greet(name: str) -> str: ...: return f"hello {name}" ...: greet(1) ...: ...: Out[1]: # It should print an error message, as 1 is not a string To achieve this, we will simply call the run function from mypy.api (as suggested in the documentation) and pass the -c PROGRAM_TEXT parameter that checks a string. Here is the code for the type checker: from IPython.core.magic import register_cell_magic @register_cell_magic('mypy') def typechecker(line, cell): try: from mypy.api import run except ImportError: return "'mypy' not installed. Did you run 'pip install mypy'?" 
args = [] if line: args = line.split() result = run(['-c', cell, *args]) if result[0]: print('\nType checking report:\n') print(result[0]) # stdout if result[1]: print('\nError report:\n') print(result[1]) # stderr # Return the mypy exit status return result[2] Let’s go through the code, given that there are a few interesting bits: @register_cell_magic(mypy) def typechecker(line, cell): We start by defining a function called typechecker and registering it as a cell magic function called %%mypy. Why didn’t I just define a function called mypy instead of doing this renaming? Well, if I did that, then our mypy function would shadow the mypy module. In this case, it probably won’t cause any problems. But in general, you should avoid shadowing variables/functions/modules, because one day, it will cause you a lot of headache. try: from mypy.api import run except ImportError: return "`mypy` not found. Did you forget to run `pip install mypy`?" Inside our function, we first try to import the mypy module. If it’s not available, we inform the user that it should be installed, before this magic function can be used. The nice thing about importing mypy in the typechecker function is that the import error will show up only when you run the magic function. If you put the import at the top of the file, then save the file inside IPython startup directory, and you don’t have mypy module installed, you will get the ImportError every time you start IPython. The downside of this approach is that you are running the import code every time you run the typechecker function. This is something that you should avoid doing, if you care about the performance, but in case of our little helper, it’s not a big problem. If you are using Python 3.6 or higher, you can catch the ModuleNotFoundError error instead of ImportError. ModuleNotFoundError is a new subclass of ImportError thrown when a module can’t be located. 
I want to keep my code compatible with lower versions of Python 3, so I will stick to the ImportError.

args = []
if line:
    args = line.split()

result = run(['-c', cell, *args])

Note that the function used for defining a cell magic must accept both a line and a cell parameter, which is great, because this way we can actually pass parameters to mypy! So here, we are passing additional arguments from the line parameter to the run function. Here is how you could run our magic function with different settings:

In [1]: %%mypy --ignore-missing-imports --follow-imports error
   ...: CODEBLOCK

which is equivalent to running the following command in the command line: mypy --ignore-missing-imports --follow-imports error -c 'CODEBLOCK'. The rest of the code is quite similar to the example from the documentation.

Testing time!

Our cell magic function is ready. Let's save it in the IPython startup directory (what's the IPython startup directory?), so it will be available next time we start IPython. In my case, I'm saving it in a file called: ~/.ipython/profile_default/startup/magic_functions.py

Now, let's fire up IPython and see if it works:

In [1]: %%mypy
   ...: def greet(name: str) -> str:
   ...:     return f"hello {name}"
   ...: greet('Bob')
   ...:
   ...:
Out[1]: 0

In [2]: %%mypy
   ...: def greet(name: str) -> str:
   ...:     return f"hello {name}"
   ...: greet(1)
   ...:
   ...:
Type checking report:

<string>:3: error: Argument 1 to "greet" has incompatible type "int"; expected "str"

Out[2]: 1

Great, it works! It returns 0 (which is a standard UNIX exit code for a successful command) if everything is fine. Otherwise, it reports what problems have been found. How about passing some additional parameters?
In [3]: %%mypy
   ...: import flask
   ...:
   ...:
Type checking report:

<string>:1: error: No library stub file for module 'flask'
<string>:1: note: (Stub files are from)

Out[3]: 1

# Ok, this can happen
# Let's ignore this error

In [4]: %%mypy --ignore-missing-imports
   ...: import flask
   ...:
   ...:
Out[4]: 0

Passing additional parameters also works! Great, we created a nice little helper function that we can use for checking if the type hints are correct in a given block of code.

Line and cell magic function

There is one more decorator that we didn't discuss yet: @register_line_cell_magic. It's nothing special - especially now that you know how line magics and cell magics work - so there is no need for a separate article. The IPython documentation explains this decorator very well:

@register_line_cell_magic
def lcmagic(line, cell=None):
    "Magic that works both as %lcmagic and as %%lcmagic"
    if cell is None:
        print("Called as line magic")
        return line
    else:
        print("Called as cell magic")
        return line, cell

If you run %lcmagic, this function won't receive the cell parameter and it will act as a line magic. If you run %%lcmagic, it will receive the cell parameter and - optionally - the line parameter (like in our last example with %%mypy). So you can check for the presence of the cell parameter and, based on that, control whether it should act as a line or cell magic.

Conclusion

Now you know how to make line magic and cell magic functions, and how to combine them together into a line-and-cell magic function. There is still one more feature that IPython offers - the Magics class. It allows you to write more powerful magic functions, as they can, for example, hold state in between calls. So stay tuned for the last part of this article!

[1] Writing a translator is still a great exercise! I recently followed the Let's Build A Simple Interpreter series, where you build a Pascal interpreter in Python, and it was a really fun project for someone who never studied compilers.
So, if you are interested in this type of challenge, that blog can help you get started. ↩
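The name-registration trick used by @register_cell_magic('mypy') above can be mimicked in plain Python with a decorator that stores functions in a dictionary under an arbitrary name. This is a simplified sketch of the pattern, not IPython's actual implementation:

```python
MAGICS = {}

def register_cell_magic(name):
    """Register the decorated function under `name`, the way IPython does."""
    def decorator(func):
        MAGICS[name] = func
        return func  # the function itself is returned unchanged
    return decorator

@register_cell_magic('mypy')
def typechecker(line, cell):
    # A stand-in body: just report what would be checked
    return "args=%r, %d line(s) of code" % (line, len(cell.splitlines()))

# The magic is looked up by its registered name, not the function name
print(MAGICS['mypy']('--strict', 'x: int = 1\ny: str = "a"'))
```

This also shows why the function can be named typechecker while the magic is named mypy: the dictionary key carries the public name, so the mypy module is never shadowed.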
https://dev.to/switowski/creating-a-type-checker-magic-function-in-ipython-i1j
In this example, we use the masked normalized cross-correlation to identify the relative shift between two similar images containing invalid data. In this case, the images cannot simply be masked before computing the cross-correlation, as the masks will influence the computation. The influence of the masks must be removed from the cross-correlation, as is described in [1]. In this example, we register the translation between two images. However, one of the images has about 25% of its pixels corrupted.

[1] D. Padfield, "Masked object registration in the Fourier domain," IEEE Transactions on Image Processing (2012). DOI:10.1109/TIP.2011.2181402

import numpy as np
import matplotlib.pyplot as plt
from skimage import data, draw
from skimage.feature import masked_register_translation
from scipy import ndimage as ndi

Define areas of the image which are invalid. The probability of an invalid pixel is 25%. This could be due to a faulty detector, or edges that are not affected by translation (e.g. a moving object in a window).
See the reference paper for more examples.

image = data.camera()
shift = (-22, 13)

corrupted_pixels = np.random.choice([False, True], size=image.shape, p=[0.25, 0.75])

# The shift corresponds to the pixel offset relative to the reference image
offset_image = ndi.shift(image, shift)
offset_image *= corrupted_pixels
print("Known offset (row, col): {}".format(shift))

# Determine what the mask is based on which pixels are invalid
# In this case, we know what the mask should be since we corrupted
# the pixels ourselves
mask = corrupted_pixels

detected_shift = masked_register_translation(image, offset_image, mask)
print("Detected pixel offset (row, col): {}".format(-detected_shift))

fig = plt.figure(figsize=(8, 3))
ax1 = plt.subplot(1, 3, 1)
ax2 = plt.subplot(1, 3, 2, sharex=ax1, sharey=ax1)
ax3 = plt.subplot(1, 3, 3, sharex=ax1, sharey=ax1)

ax1.imshow(image, cmap='gray')
ax1.set_axis_off()
ax1.set_title('Reference image')

ax2.imshow(offset_image.real, cmap='gray')
ax2.set_axis_off()
ax2.set_title('Corrupted, offset image')

ax3.imshow(mask, cmap='gray')
ax3.set_axis_off()
ax3.set_title('Masked pixels')

plt.show()

Out:

Known offset (row, col): (-22, 13)
Detected pixel offset (row, col): [-22. 13.]

Solid masks are another illustrating example. In this case, we have a limited view of an image and an offset image. The masks for these images need not be the same. The masked_register_translation function will correctly identify which part of the images should be compared.
image = data.camera()
shift = (-22, 13)

rr1, cc1 = draw.ellipse(259, 248, r_radius=125, c_radius=100, shape=image.shape)
rr2, cc2 = draw.ellipse(300, 200, r_radius=110, c_radius=180, shape=image.shape)

mask1 = np.zeros_like(image, dtype=np.bool)
mask2 = np.zeros_like(image, dtype=np.bool)
mask1[rr1, cc1] = True
mask2[rr2, cc2] = True

offset_image = ndi.shift(image, shift)
image *= mask1
offset_image *= mask2

print("Known offset (row, col): {}".format(shift))

detected_shift = masked_register_translation(image, offset_image, mask1, mask2)
print("Detected pixel offset (row, col): {}".format(-detected_shift))

fig = plt.figure(figsize=(8, 3))
ax1 = plt.subplot(1, 2, 1)
ax2 = plt.subplot(1, 2, 2, sharex=ax1, sharey=ax1)

ax1.imshow(image, cmap='gray')
ax1.set_axis_off()
ax1.set_title('Reference image')

ax2.imshow(offset_image.real, cmap='gray')
ax2.set_axis_off()
ax2.set_title('Masked, offset image')

plt.show()

Out:

Known offset (row, col): (-22, 13)
Detected pixel offset (row, col): [-22. 13.]

Total running time of the script: ( 0 minutes 2.463 seconds)
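As an aside, the principle behind translation registration can be illustrated in one dimension with plain Python: the lag that maximizes the cross-correlation between a reference signal and a shifted copy recovers the shift. The scikit-image function additionally works in 2-D, uses FFTs, and removes the influence of the masks, none of which this sketch attempts:

```python
def best_shift(ref, moved, max_lag=5):
    """Return the lag that maximizes the cross-correlation (a 1-D sketch)."""
    def corr(lag):
        # Sum of products over the overlapping region at this lag
        return sum(ref[i] * moved[i + lag]
                   for i in range(len(ref))
                   if 0 <= i + lag < len(moved))
    return max(range(-max_lag, max_lag + 1), key=corr)

ref = [0, 0, 1, 3, 1, 0, 0, 0, 0, 0]
moved = [0, 0, 0, 0, 0, 1, 3, 1, 0, 0]  # ref shifted right by 3
print(best_shift(ref, moved))
```

Swapping the arguments recovers the shift with the opposite sign, which mirrors the negated detected_shift printed in the example above.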
https://scikit-image.org/docs/dev/auto_examples/transform/plot_masked_register_translation.html
Thread: Carbon, n00b problem :)

Hi, thought I'd have a bash at a bit of Carbon today, but Xcode keeps moaning. I've made this simple program to print the current time.

Code:
#include <stdio.h>
#include <CoreFoundation/CoreFoundation.h>

int main (int argc, const char * argv[]) {
    CFAbsoluteTime time;
    time = CFAbsoluteTimeGetCurrent();
    printf("%d", time);
    return 0;
}

ZeroLink: unknown symbol '_CFAbsoluteTimeGetCurrent'
Test has exited due to signal 6 (SIGABRT).

But with ZeroLink off, I get "Undefined symbols: _CFAbsoluteTimeGetCurrent" at compile time... I've obviously missed something, any help would be nice.

Reply: I've never programmed in this language, but have some experience in VB.net and Java. I wonder if time = CFAbsoluteTimeGetCurrent() shouldn't be time = CFAbsoluteTime.GetCurrent(). Note the extra period between what is probably the object class and the method.
http://www.mac-forums.com/os-x-development-and-darwin/57636-carbon-n00b.html?s=78bb04cce1bec0c93ed81e0d1061321d
Getting Started with Keras

In this tutorial, we will learn the basics of the deep learning framework – Keras. This tutorial will cover the whole introduction of Keras and its functions. Intuition will be provided on the installation of Keras and its modules and built-in datasets. There will be three sections in this tutorial:

- Introduction
- Installation
- Features provided – in which we will be seeing modules and datasets and one example of each topic.

KERAS – A Deep Learning Python Framework

1. INTRODUCTION

Deep learning is one of the most sought-after fields in the machine learning and artificial intelligence domain. And for creating deep neural networks we need a framework, because implementing them from scratch using Python and NumPy would be very tiresome, and it's not practical to implement very large models like Convolutional Neural Networks or Recurrent Neural Networks that way. Keras is one of those high-level neural network frameworks. This API is written in Python and it supports multiple backend engines. It is easy to understand and user-friendly. It was designed for human beings and not machines. Its main author is a Google engineer, Mr. Francois Chollet. It can be integrated with at least five backend engines, which are:

- Tensorflow
- CNTK (Microsoft Cognitive Toolkit)
- Theano
- MXNet
- PlaidML

with TensorFlow being the primary backend engine.

2. INSTALLATION

Before installing Keras, we need to make sure that we have installed Python on our machines. For installing Python, click the below link:

For installing Keras:

pip install keras

Or you can install Tensorflow:

pip install tensorflow

3.
FEATURES PROVIDED

MODULES:

These are the modules that are present in the Keras API:

- Initializers
- Regularizers
- Constraints
- Activations
- Losses
- Metrics
- Optimizers
- Callback
- Text Processing
- Image Processing
- Sequence Processing
- Backend
- Utilities

If we didn't use the Keras API, we would need to write code for every other module ourselves, which is a very time-consuming and complex process. Here's an example of using the backend module. First, we need to import the backend module from the keras library.

from keras import backend as back

Now, let's use a function from the backend module, the dot() function.

a = back.placeholder(shape = (4,2))
b = back.placeholder(shape = (2,3))
c = back.dot(a,b)
print(c)

This will give us the output:

Tensor("MatMul_2:0", shape=(4, 3), dtype=float32)

Thus, we have created a tensor c which is the product of a and b.

DATASETS:

Apart from the above modules, Keras also provides a few datasets, such as the MNIST digits dataset, the IMDB movie review sentiment classification dataset, etc. Here's an example of loading the MNIST dataset. First, we need to import the mnist module from the Keras library.

from keras.datasets import mnist

After this, we need to divide the data into training and testing sets.

(X_train, y_train), (X_test, y_test) = mnist.load_data()

And now to look at the shape of the dataset:

print(X_train.shape)
print(y_train.shape)
print(X_test.shape)
print(y_test.shape)

This will give us the shape of the dataset as follows:

(60000, 28, 28)
(60000,)
(10000, 28, 28)
(10000,)

For further information on Keras documentation, you can click here. Also, you may visit: Assigning a value to a TensorFlow variable in Python

Thank you for reaching till here.
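As a sanity check on the back.dot() example above, the same (4, 2) by (2, 3) shape arithmetic can be reproduced in plain Python. This is only meant to demystify where the reported (4, 3) shape comes from; Keras, of course, performs this on symbolic tensors rather than lists:

```python
def matmul(a, b):
    """Multiply an (n x m) matrix by an (m x p) matrix, giving (n x p)."""
    assert len(a[0]) == len(b), "inner dimensions must agree"
    # zip(*b) iterates over the columns of b
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

a = [[0] * 2 for _ in range(4)]  # shape (4, 2), like placeholder(shape=(4, 2))
b = [[0] * 3 for _ in range(2)]  # shape (2, 3), like placeholder(shape=(2, 3))
c = matmul(a, b)
print(len(c), len(c[0]))         # the (4, 3) shape reported by Keras
```

The inner dimensions (2 and 2) cancel, leaving the outer dimensions (4, 3), exactly as in the Tensor output shown above.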
https://valueml.com/getting-started-with-keras/
Talk:RoboJogger

I have been using RoboJogger for movement for about one week and it was working perfectly. However, when I tried testing my robot on TCRM and MC2K7 it didn't work. My RoboJogger's Robocode is 1.7.3.0, but I don't think that's the cause, since it produced similar results to what I had on my Robocode 1.9.3.2. It seems like RoboJogger only accepts PERCENTAGE_SCORE. Below is my TCRM file. Thank you in advance.

Targeting Challenge RM AVERAGE_BULLET_DAMAGE 35 rounds
Easy {
    apv.AspidMovement 1.0
    dummy.micro.Sparrow 2.5TC
    kawigi.mini.Fhqwhgads 1.1TC
    emp.Yngwie 1.0
    kawigi.sbf.FloodMini 1.4TC
}
Medium {
    abc.Tron 2.01
    wiki.etc.HTTC 1.0
    wiki.etc.RandomMovementBot 1.0
    davidalves.micro.DuelistMicro 2.0TC
    gh.GrubbmGrb 1.2.4TC
}
Hard {
    pe.SandboxDT 1.91
    cx.mini.Cigaret 1.31TC
    kc.Fortune 1.0
    simonton.micro.WeeklongObsession 1.5TC
    jam.micro.RaikoMicro 1.44TC
}

An addition: the same thing happens when I use Robocode 1.9.2.5. This question probably has a really simple answer, but I have been trying for over a month and still I couldn't test one robot on it.

Oh, I am so dumb. I gave it the new Robocode version; now it works perfectly.

2 bugs so far that I have found:

1) A leftover Java process seems to hang around for each full execution of RoboRunner. This needs to be tracked down and eliminated (could it be the callback queue in RoboRunner?)

2) When RoboRunner throws an error (for example, due to a missing robot), RoboJogger does not realize that RoboRunner has died, and the controls to "stop" RoboRunner do not function. This results in RoboJogger being stuck and requiring a force quit. This should be fixed so that either RoboJogger correctly detects the failed start of RoboRunner, or at least is able to reset everything if the command is given to stop RoboRunner.

Are these bugs present in 0.9.6? I'm finding it quite annoying to have to keep restarting RoboJogger =)

I'll be looking at fixing these bugs soon.
I kind of forgot they existed for a while. I'm currently on hold because my basement server died a while ago, and it had my source repository on it (in addition to my data backups). I have mostly new hardware now, but I'm still waiting on a new hard drive for the OS, as I at first bought a refurbished one from Newegg and it was dead (last time I ever try to buy a refurbished part). Once the final replacement hard drive arrives, I will have a mostly updated system with an OS drive and 2 1TB data drives. The old system was a Pentium 4 with 1GB DDR1 RAM, so it was definitely due for an update. I will also be installing the latest version of Fedora, which also means I can easily use Java 7 on it and possibly finally set up a distributed Robocode node on it (if that project is still alive).

Back on topic, I'm glad to hear that someone is still using RoboJogger. Knowing that gives me encouragement to get back on it and make it better. I will also explore the possibility of moving the source to a public location, should anyone want to tinker with it.

Hi, since I am the only user of DRC, the project is more dead than alive :) But if you install it and experience problems, then you can email me (see Contacts) or patch it yourself: github repo :)

Not sure if it's a bug with RoboJogger, RoboRunner, or my robot. However, I'm getting spurious results vs some bots. For example, vs Tron 2.01 in the TCRM challenge I'm getting an average of 87% when running 10 seasons manually, but in RoboJogger I'm getting an average of 61%. I'm running RoboJogger on the Mac, latest version of Robocode. I'm aware that Tron seems to be quite a slow bot; I was wondering if my bot was generating skipped turns for some reason, as my bot is not too fast either. Notably, in Robocode I am not generating any skipped turn events from what I can see. How do I access the RoboRunner bot output? Is there a way or not?

Manually means running by hand in Robocode directly, FYI, in case it was unclear.
:) Oooh just saw there was a Robocode update that fixes some skipped turns, it literally came out today so I'm going to re-test with that and see what happens! Tried the new Robocode version and I'm still getting the same result - running manually in Robocode gives a higher score by around 25% for my bot vs Tron compared to running in RoboJogger. :( Are you sure you're using the same scoring mechanism? I know the TCs define score as TOTAL_BULLET_DAMAGE/ROUNDS. Yes, the setup for the RoboRunner config file is using AVERAGE_BULLET_DAMAGE. Note that the results for all the other bots in the challenge look correct. It only appears to be vs Tron for some reason, which is why I postulated it was because Tron appears to be a slow bot which may cause skipped turns. It might be because Tron starts firing - if it doesn't get its config file which puts it in TC mode copied correctly, for instance. Yeah, I bet Tron is firing. Pretty dumb we don't just have a non-firing version in the TCRM downloads? Or do we? I remember doing that manually for a long time, anyway. Same with DT. Bot output doesn't go anywhere in RoboRunner. Actually not exactly sure how to catch it but it would be a nice feature. You could log stuff to files though. Makes sense, but I checked the .data directory for tron for the 4 instances of RoboRunner that RoboJogger generates and all of them have a properties file with "challenger" set. :( Oh, hmm. That sucks. 61 is a pretty unthinkably low score vs Tron, looking at the TCRM results. I guess it would be useful if RoboRunner/RoboJogger had a switch for displaying battles to help debug this kind of thing. Can you try running battles manually from one of those Robocode instances? Found the cause of the problem but im not sure why. Running Robocode from inside Eclipse means my robot doesn't get any Skipped Turn events, but running robocode from the robocode.sh file means my robot does get skipped turn events. 
That means two things: 1) There is a difference between running Robocode from inside and outside Eclipse. 2) My robot is running dead slow (but mainly versus Tron?!) The second problem is something that I need to deal with, but the first is interesting. My command line arguments for running Robocode in Eclipse are "-Xmx512M -Dsun.io.useCanonCaches=false -Ddebug=true". Would this cause Robocode to ignore skipped turn events? Yes, I believe setting debug=true disables skipped turns. (Also turning on debugging graphics.) -Ddebug=true does disable skipped turns; I checked the engine source. It is made exactly for that, so you can pause and trace execution step-by-step. Ahhhh, cheers, problem found. I guess I need to look at optimising my bot. Unfortunately, I've found that I can greatly increase its score versus a lot of bots by doing a lot more work. Sigh :( There needs to be a way for a third party (e.g. RoboJogger) to access battle results (on-the-fly results highly preferable). It looks like I could use ScoreLog to read results from the XML result file after it is created, but it would be preferable to have a way to add a listener that is notified each time a new battle result is available. For example, RoboRunner might allow third parties to add their own BattleResultHandler in addition to or in replacement of the one RoboRunner uses internally. Thoughts? Yep, that makes sense. I think when you're listening to RoboRunner, you might want some higher level data too, like avg score and number of battles. So I think adding a new / similar listener to the RoboRunner class makes more sense than just letting you add more custom instances of the existing BattleResultHandler. The new interface method could take the raw scores and elapsed time, as now, plus whatever other summary data you want. At the end of RoboRunner's BattleResultHandler.processResults(), we call the higher level listener, if it's been set. Does that sound about right? Sounds good.
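A minimal sketch of what a listener along those lines could look like. All names here (BattleResultListener, BattleSummary, the summary fields) are hypothetical illustrations, not RoboRunner's actual API:

```java
import java.util.List;

// Hypothetical "higher level" listener sketch for on-the-fly battle results.
// Everything here is illustrative; the real RoboRunner interface may differ.
public class ListenerSketch {
    /** Summary data passed along with each battle's raw scores. */
    public static class BattleSummary {
        public final double averageScore;   // running average vs this opponent
        public final int numBattles;        // battles completed so far
        public final long elapsedMillis;    // time taken by this battle
        public BattleSummary(double averageScore, int numBattles, long elapsedMillis) {
            this.averageScore = averageScore;
            this.numBattles = numBattles;
            this.elapsedMillis = elapsedMillis;
        }
    }

    public interface BattleResultListener {
        void battleFinished(List<Double> rawScores, BattleSummary summary);
    }

    // RoboRunner would hold one optional listener and invoke it at the end
    // of its own BattleResultHandler.processResults(), if it has been set.
    private BattleResultListener listener;

    public void setListener(BattleResultListener l) { listener = l; }

    public void fireBattleFinished(List<Double> rawScores, BattleSummary summary) {
        if (listener != null) {
            listener.battleFinished(rawScores, summary);
        }
    }
}
```

A UI like RoboJogger would register one listener and update its result windows from each callback, instead of parsing console output.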
Is this a change you would like to make or would you prefer I make the change and send it back to you for review? I say just make the changes you need and go with it, and I can merge them into the main branch whenever. But I'm happy to look over stuff or even write up the interface if you need me to. I still haven't gotten around to working on this. I suppose I would be happier if you added it such that it can be implemented in the manner you feel is best, but I will work on it once I run out of other tasks if you don't have time for it. I've actually gotten quite far into development relying only on what I can get from ScoreLog. Other than not having a nice way to track battles on the fly, the only thing ScoreLog doesn't provide that I wish it did is a way to retrieve or calculate the confidence values. I love the idea of computing confidence values, and am determined to include it somehow. Sure, no problem, I'll take a look sometime today or tomorrow. It will be trivial to pass along the raw scores, some summary data, and the basic per-bot confidence interval stuff. Passing the groups and overall scores and confidence intervals might prompt some refactoring to do it cleanly, since right now they're kind of just calculated in-line before printing them, but it shouldn't be too tough. I see in 0.9.5 notes: "Added a basic output parser for RoboRunner. Challenge run progress information in the main window is now updated on the fly..." Does this mean you took care of this listener thing yourself since I never actually did it? Or are you still lacking some way to get updated with RoboRunner's results (and/or confidence estimates) on the fly? Also, I'd like to merge back whatever changes you made to RoboRunner, at least to the GitHub repo. I guess I'll just wait and take a look at the source after 0.9.6. The only information the output parser extracts is a count of battles completed. It is not able to determine the scores, nor does it parse out the confidence. 
The only thing it provides is on-the-fly information on how many battles have been run, in order to update the progress in running a challenge. It was a way to do a little more, but does not take the place of having a listener that actually provides score information. The changes to RoboRunner so far have been very minimal. My focus has been almost exclusively on RoboJogger. The changes I have made to RoboRunner include only the following:
- In BattleRunner, I made it so that getAllFutures(List<Future<String>>) could be interrupted in order to provide a way to stop RoboRunner. However, my particular implementation needs to be done in a better way than I did it. It sets a flag on InterruptedException that causes all remaining futures to be cancelled (it calls future.cancel(false) on all remaining futures). It needs to be improved such that it can safely call future.cancel(true) without potentially corrupting a score log, or have the futures be able to detect the stop request to shut down cleanly if running.
- In ScoreLog, I added a ReentrantLock that is locked whenever loadScoreLog(...) or saveScoreLog(...) is called. Thus, only a single score log can be loaded or saved at a time. This was necessary as both RoboRunner and RoboJogger call the loadScoreLog(...) method and thus it needed some kind of synchronization. I know it would be safe to load and save different score logs at the same time, but I decided that there would not be enough concurrent loading and saving for it to be worth the added complexity.
- In ScoreLog, I close the XMLEventReader and input stream after a ScoreLog is loaded. Not sure if this really matters or not in practice.
And that's it. Even though you may not have specifically designed it for being used by another program, RoboRunner is quite easy to interact with and it required very little modification to put my UI on top of it. Very nice. I'm still losing chunks of score logs on rare occasions, and I have no idea why.
I still need to add checks to ensure RoboRunner is cleanly shut down before exiting RoboJogger, but I've had data loss at a time when I was absolutely sure RoboRunner was finished running, so there is still a problem somewhere that needs to be solved. If I can track down that problem and take care of the other things noted above, I will be very close to having a non-beta-ish release. I just made another change to RoboRunner that will be part of 0.9.7. Again, I kept it pretty minor, but that makes 4 minor things now, so I should probably put the source out there where you can get to it. The new change I made was to add isTerminated() methods to RoboRunner and BattleRunner. These new methods return whether or not the main thread pool and the callback pool are terminated (by checking isTerminated() on both thread pools and then feeding that info back up the hierarchy). isTerminated() returns true on a thread pool when shutdown() has been called and all threads in the pool are done. This is important to RoboJogger in determining whether or not RoboRunner has finished shutting down before starting a new runner or allowing a user to exit the program. Another alternative would be to add a shutdown method to RoboRunner that is blocking, but I didn't think that would work as well, so I didn't take that approach. Updating the result windows on the fly doesn't seem like it would be particularly difficult. But I don't know the structure of your code at the moment, so I could be wrong. Assuming a singular method that displays the windows given a result set. 
a la "void showResultsWindow(resultSet)", could make it "ResultWindow showResultsWindow(resultSet)", keep it in a map "HashMap<ResultSet,ResultWindow>", and update the window based on updates in the set: "ResultWindow window = map.get(resultSet); window.updateData(resultSet); window.markTableDirty();" Mark table dirty would be this piece of thread-safe code: "RepaintManager.currentManager(table).markCompletelyDirty(table);" Keeping the window in a map also allows you to just move that window to the front if someone tries to open the same result window twice. Though you may require a listener to determine if the window was closed, so you can remove it from the hashmap. Just hiding it could be problematic if someone keeps the application open for a long time and runs many different challenges. Naturally, I found 2 bugs within an hour of releasing 0.9.5. Both will be fixed in the next version. First, I accidentally left a debug message in that gets written to the RoboRunner output window after every battle. That's already gone. Second, I discovered that running the Remove All function causes exceptions to start happening when trying to run RoboRunner after a Remove All. I still have to look into this, but it will get fixed in the next version. In the meantime, should this happen to anyone, the solution is just to rerun the Setup function. Another bug that is still around, just FYI, is that on rare occasions some data can be lost from one of the score logs. I am still not sure under what scenario this happens, but it did happen to me again recently. It didn't totally corrupt the score log, but it did lose some of the battles, and those battles had to be re-run for some of my challenges. Another potential issue -- not really a bug -- is that RoboJogger can be slow to start up if you have a large number of challenge runs, because it recomputes completion information for every challenge run on startup, which can mean reading a lot of score logs.
I was thinking for the next version I would store completion information separately (basically store it with the challenge runs instead of recomputing from the score logs) to make initial start up quicker. Big bug due to a fat finger mistake in version 0.9.3. PERCENT_SCORE challenges will give you an Unsupported Challenge error when you try to add them. I'll get this fixed soon. It was due to a typo on my part, along with inadequate testing. To get around this bug, you can change your challenge file to be "PRECENT_SCORE" (note the spelling error) and then it will run. Or just wait for me to put out the next version, which could be tonight or several days from now depending on how busy the new baby in our family keeps me. Sorry for the error. Okay folks. Help me out here. I didn't see any page on the wiki that details how all the challenge scoring types work. I'm basically just guessing on everything but normal scoring and bullet damage scoring. What I'm currently doing is best shown by just posting the class that currently handles scoring, and you all can let me know what needs to be changed. Thanks! 
//TODO: Verify how each scoring function is supposed to work
public class ScoreFunctions {
    public static ScoreFunction PERCENT_SCORE = new ScoreFunction() {
        @Override
        public double getScore(RobotScore challenger, RobotScore opponent, int numRounds) {
            return challenger.score / (challenger.score + opponent.score);
        }
    };
    public static ScoreFunction SURVIVAL_FIRSTS = new ScoreFunction() {
        @Override
        public double getScore(RobotScore challenger, RobotScore opponent, int numRounds) {
            return challenger.survivalRounds / (double) numRounds;
        }
    };
    public static ScoreFunction SURVIVAL_SCORE = new ScoreFunction() {
        @Override
        public double getScore(RobotScore challenger, RobotScore opponent, int numRounds) {
            return challenger.survivalScore / (challenger.survivalScore + opponent.survivalScore);
        }
    };
    public static ScoreFunction BULLET_DAMAGE = new ScoreFunction() {
        @Override
        public double getScore(RobotScore challenger, RobotScore opponent, int numRounds) {
            return challenger.bulletDamage / (double) numRounds;
        }
    };
    public static ScoreFunction MOVEMENT_CHALLENGE = new ScoreFunction() {
        @Override
        public double getScore(RobotScore challenger, RobotScore opponent, int numRounds) {
            return challenger.energyConserved / (double) numRounds;
        }
    };
}
MOVEMENT_CHALLENGE is generally "100 - (bullet damage taken / total rounds)", or "return 100 - (opponent.bulletDamage / (double)numRounds)". Though if you can get the Energy Conserved, that might be approximately the same. BULLET_DAMAGE is AVERAGE_BULLET_DAMAGE. Otherwise I think it looks correct. Also keep in mind that RoboRunner supports melee battles last I checked, so a single RobotScore opponent may not be sufficient unless you add up all the opponents' data into that one entry. Even then, I am not sure if the math works out correctly, especially with my definition of MOVEMENT_CHALLENGE. I try and do a tcas/tcrm challenge, but I get a Challenge not supported. Unsupported scoring type: AVERAGE_BULLET_DAMAGE.
Doesn't robojogger support this scoring method? Or does it have a different challenge file syntax? If so, what is it? Unfortunately, I need to go back to roboresearch. EDIT: After some research, I found it was "BULLET_DAMAGE", please make it support "AVERAGE_BULLET_DAMAGE" as well, if only as an alias. So that we can just copy and paste roboresearch challenge files. :) It seems to fill in the results with 100, despite the actual scoring in the roborunner output being something else. The results in the results window do not update during the running (not even the erroneous scores). The confidence scores were always 0.0, just as the scoring was always 100.0. Even after a full three seasons, the scores did not correct themselves. The correct results are in RoboRunner output of course. Hehe, while I wish I could believe I made a gun which produced such scores, I didn't. The scores not updating during running is normal. I'm waiting for the next release of RoboRunner before I implement that. As for the messed up scores, I would guess this has something to do with an error in what I'm doing with the scoring type. All of my testing so far has been with PERCENT_SCORE. I'll start testing with other scoring types and fix whatever problems I find. Would anyone like me to add an option to send RoboRunner output to a text file? Okay, so I looked into what is happening with the scores under the BULLET_DAMAGE scoring type. What RoboJogger is doing is taking the score for the scoring type for the challenger and dividing it by the sum of the score for the scoring type for both the challenger and opponent. I think this seems right, but I don't use scoring modes other than PERCENT_SCORE very often, so I'm not entirely sure. Why this comes up with a bad number is because when the scores are loaded from the ScoreLog (ScoreLog is part of RoboRunner), the bullet damage score (which the BULLET_DAMAGE score type relies on) is always 0 for the opponent. 
This might be a bug in RoboRunner -- either the scores not getting saved correctly or loaded correctly by the ScoreLog. I also noticed that the energy conserved values were also 0 for both the challenger and opponent, so there may be a related bug there too. TC scoring is bullet_damage / number_of_rounds, which produces an output between 0 and 100 (not 0.0 and 1.0). The your_score / (your_score + enemy_score) formula is a percent index. It can't be used with bullet damage where one of the robots does not fire (which is what happens in a TC). The reference robot (the one moving) will never have any bullet damage. I guess I assumed all scores were just percent scores based on different scoring metrics. I need to figure out where I can find information on exactly how each challenge type is scored. I'm not following your explanation on TC scoring, as I don't see how a good robot that doesn't run itself out of energy with misses would ever score anything other than 100. If the opponent doesn't fire (or hit walls), the challenger would score 100 on every round. Okay, reading a targeting challenge page more closely, given that the challenger is only supposed to fire power 3 bullets, I guess the intent is that on some rounds the challenger will run out of energy such that scores will vary. While I like the idea of not having the opponent firing back as an extra variable, having to alter your challenger's gun to only fire power 3 bullets means this is not a full test of the challenger's gun; it's just a test of the challenger's gun's aim with power 3 bullets, leaving out distance and power controls. I also have to wonder why the movement and targeting challenges are not just inverses of each other. Well, that could happen, except you are only allowed to pass 3.0 as the fire power, which means that no current robot is able to get 100 bullet damage all of the time. Well, an enemy who doesn't fire has no chance at regaining any energy.
So you can only at the absolute most do 100 damage to them in a single round. Of course, if the enemy damages itself by hitting the wall or your robot, then your score will not be 100, since you did not do 100 damage, even if you kill it. This also happens if you don't win the round. If your robot disables itself by firing, then only the damage you did that round gets counted. As for damage, well, you do more damage than the power you put into a bullet. The algorithm is this, taken from Rules.java:
double damage = 4 * bulletPower;
if (bulletPower > 1) {
    damage += 2 * (bulletPower - 1);
}
So you can get 100% without hitting every shot. You do 16 damage for every 3 power bullet you fire. For every 1 power bullet you fire (say after you shot and missed 33 times), it is 4. So you only have to hit 7 times to kill the enemy. Though it is often not that simple. Post any bugs you find in 0.9.1 here. Note: If you have 0.9 and want to keep 0.9 data and challenges, just make sure you save and move robojogger.dat and all the files in the data directory. You will probably want to move all the robots from the bots directory too. Note that on first install, there is no bots nor data directory; you can add them manually or just start and stop RoboJogger once to let RoboJogger create them. I had one strange thing happen so far. Once, when I was starting RoboRunner, the CPU hung at 50% and the battles never started. I stopped then restarted, and it ran fine after that. So be on the lookout for that. I'll be exploring to try to find out what might have caused it. Had issues with locking up when RoboRunner is started again. I'll be running a lot more small challenges in development to track down the problem and fix it. Not sure what is going on, but it only seems to happen when RoboRunner starts a new challenge. I had to update my Java 1.6_11 to the newest 1.6_37 version to get RoboJogger running. I just mention it in case someone else has trouble starting the jar.
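A quick sanity check of the damage numbers discussed above (16 damage per power-3 bullet, 4 per power-1 bullet, 7 hits at full power to destroy a 100-energy bot), using the formula as quoted from Rules.java in this thread:

```java
// Bullet damage as quoted from Robocode's Rules.java in the thread:
// damage = 4 * power, plus 2 * (power - 1) when power > 1.
public class DamageCheck {
    public static double damage(double bulletPower) {
        double damage = 4 * bulletPower;
        if (bulletPower > 1) {
            damage += 2 * (bulletPower - 1);
        }
        return damage;
    }

    public static void main(String[] args) {
        System.out.println(damage(3.0)); // 16.0 per power-3 bullet
        System.out.println(damage(1.0)); // 4.0 per power-1 bullet
        // 7 hits at power 3 deal 112 damage, enough to destroy a 100-energy bot
        System.out.println(Math.ceil(100 / damage(3.0))); // 7.0
    }
}
```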
I haven't done anything so far, but I plan to give it a look over the next days. Hm... that's on Mac, yes? I bet the Apple Java Extensions changed somewhere between 1.6_11 and 1.6_37. I'll at least see if I can figure out exactly what version of Java this changed with on the Mac so I can put it in the notes. In other news, I found and fixed another bug, which will be fixed in the next release. RoboJogger can fail to load a robot properly if the robot jar file contains multiple .properties files. Already fixed in my source. FYI -- I'm adding a new little feature to the next release. For each challenge run, you will be able to add a note/description of what the challenge run is for. This is something I will definitely use and I hope some of you might find it useful as well. It will be an extra column on the main window, but I will probably add a way to show/hide various columns. Great idea! I always have a scratch pad open with a short description and overall score for each dev version. I did some more testing since I threw up links to version 0.9. I noticed that if you let RoboRunner process all challenges, once it is complete, the buttons and menu item to start RoboRunner and do things like Setup do not re-enable. This was an issue with the way I was doing locking, and I've already fixed it in my source. I'll put up a version 0.9.1 in a few days where that will be fixed. I also noticed that if you have a challenge where the robots have nicknames (like MC2K7), error messages show up in the log for the nicknames. This doesn't affect usability any, but it shouldn't happen, so I'll get that fixed too. Another minor issue I noticed is that after multiple seasons, the totals RoboRunner shows and the totals RoboJogger shows can be a smidge off. I would guess this is due to some rounding error somewhere. I'll investigate it. For the look and feel for Windows, I chose Nimbus, because I think it's pretty cool.
But I think Chase is right, I should default to the system look and feel. I will probably change that; however, I will probably also add a dialog for changing the look and feel as a preference (I already have a class available that does that, so it's trivial to add). Totals not showing up in the middle of the first season is normal. A total isn't really valid (imo) until at least one battle has been run against each robot in the group or challenge. However, if totals don't show up after a full season completes, that is a problem. And not a problem I have witnessed. I'll keep an eye on this, but so far I haven't seen this problem. If you continue to see this problem and I don't, you may need to post or send a copy of your challenge file for me to test with. Chase had another good point about removing unnecessary stuff from the robocode_template directory. It is highly unlikely someone will download this and need or want a full copy of Robocode with it. I'll trim it down to the bare minimum. Finally, I know it's a little unnerving not having some kind of progress indication when RoboRunner is running. Note that in the Tools menu there is an option to Show RoboRunner Output. This will show what RoboRunner would normally output to the console (but with each line timestamped), though it is limited to I think the last 300 printed lines. It is the only way to see on-the-fly results right now and have a good indication of progress. Please post any other bugs you come across. Thanks! FYI -- I have not tested any melee at all yet, so there is a higher chance of bugs for that. My original comment was eaten by the reply box closing when I scrolled to hit save. So in short, I did notice the output, and did use it and that is how I know that there were no results, mid season or after it finished. One robot challenge with percent scoring. As for progress you could just parse the output of roborunner and redisplay it in the UI in some way. But this may require another thread. 
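The output-parsing idea mentioned above (reading RoboRunner's console output on another thread to drive a progress display) could be sketched roughly like this. The "Battle finished:" line format matched here is made up for illustration; RoboRunner's real output lines differ:

```java
import java.io.BufferedReader;
import java.io.StringReader;
import java.util.regex.Pattern;

// Sketch of a background output parser that counts completed battles.
// The "Battle finished:" line format is hypothetical, not RoboRunner's
// actual console output.
public class OutputParser {
    private static final Pattern BATTLE_LINE = Pattern.compile("^Battle finished: .*");

    /** Counts lines that look like battle-completion messages. */
    public static int countCompletedBattles(BufferedReader reader) throws Exception {
        int count = 0;
        String line;
        while ((line = reader.readLine()) != null) {
            if (BATTLE_LINE.matcher(line).matches()) {
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) throws Exception {
        String fakeOutput = "Starting challenge...\n"
            + "Battle finished: MyBot vs Tron 2.01\n"
            + "Battle finished: MyBot vs DT\n";
        // In a UI this would run on its own thread, feeding progress updates
        // back to the event dispatch thread rather than printing.
        Thread parser = new Thread(() -> {
            try {
                int n = countCompletedBattles(new BufferedReader(new StringReader(fakeOutput)));
                System.out.println(n + " battles completed");
            } catch (Exception e) {
                e.printStackTrace();
            }
        });
        parser.start();
        parser.join();
    }
}
```

As the thread notes, this is a brittle way to interface with RoboRunner compared to a proper results listener, which is why it only tracks battle counts.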
I thought about parsing the RoboRunner output, but that is not a very clean way of interfacing with RoboRunner, and Voidious already seemed willing to provide an update in the future that will provide a more robust interface for getting on-the-fly results. So I am waiting on that (@Voidious -- let me know if you want me to help on this; I have time now that I'm mostly done with RoboJogger 0.9). In the meantime, if you close all result windows for a challenge and reopen the results, you should see full results. RoboJogger reloads everything using the RoboRunner ScoreLog when the results window is opened. I'll do some more testing to see if I can cause a scenario where results are missing. Found another problem. After "stopping" RoboRunner, the RoboRunner threads appear to continue to use CPU. Not sure why yet, but it definitely needs to be looked into. Solved the last bug already. I didn't realize that when shutdown() is called on the thread pool, it will still finish executing any queued tasks. I just had to make a minor update to my modified version of RoboRunner to also cancel all remaining Futures after being interrupted. A question for anyone who cares to chime in. Tonight I created most of a build script for RoboJogger. In the past I have used tools like Launch4J and IzPack to make executables and installers for Java applications for Windows. I could do this for RoboJogger, if anyone prefers. In addition to just making the source available, would you prefer:
1) A zipped archive where the main class is in a jar (unzip and run with javaw -jar robojogger.jar), or
2) A zipped archive where the main class is in an exe (unzip and run robojogger.exe), or
3) An installer that is a jar (run installer with javaw -jar robojogger-installer.jar), or
4) An installer that is an exe (just run robojogger-installer.exe).
For 3) and 4), you could also indicate whether you think the main class should be a jar or exe, if that matters to you.
Or I could provide it several different ways. So if you care one way or another, let me know. Hi mate. I'm on a Mac here and I would prefer a jar in all cases. It also has to be max Java 1.6 to be usable for me. I can set up a Mac build as well. I've done that before. It will even be somewhat Macish, if you will, as I try to follow the Mac application styling guide for Macs by using the Mac menu bar and doing things like reversing the ok/cancel buttons on dialogs (I have support for that kind of stuff built into my code). For a Mac version, I can either have a zip filled with jars, or I can also create a .dmg file if preferred. Do whatever is easiest for you. I'm fine with .jar or .dmg. If I'm going to use it, I will probably make an .app out of it anyway. Not sure what you mean with 'somewhat Macish' :) - do you mean you have programmed it this way or just using the -Xdock flags? If you are asking for specifics, being somewhat Mac-ish to me means setting system property "apple.laf.useScreenMenuBar" to "true" to use the screen menu bar instead of a menu bar in the Java app, setting system property "com.apple.mrj.application.apple.menu.about.name" to set the application name, using the "com.apple.eawt" classes for setting up Exit, About, and Preferences menu items, and for ok/cancel style dialogs, making the ok button appear to the right of the cancel button rather than the other way around. If you are feeling particularly ambitious, you could use launch4j to make a Windows exe to launch it (or wrap it). It doesn't change anything for me (I can run it by double clicking the jar). But others might find it useful. I think I recall a recent version of launch4j also supporting making MacOSX executables too. Congrats on the first release! :-) I'll be sure to test it out soon on my systems and let you know how it goes. I think it makes sense for you to just include RoboRunner in your downloads like this.
It makes the setup so much easier, and savvy users could still drop in the latest RoboRunner JAR if they want (after the next version). I'll try to incorporate your interrupt changes and the new results listener soon, sorry to drag my feet on that. No problem. Instead of trying to handle InterruptedExceptions, you might just consider adding a volatile cancel flag that gets checked before you run each battle. That way, another thread can set the cancel flag to true when it wants RoboRunner to stop, and RoboRunner can shut down more cleanly the next time it checks the cancel flag. Also, to avoid possible contention over a score log, you might add some way to lock a score log (maybe add a ReentrantLock to control it that both RoboRunner and external threads can access). That way RoboJogger (or anything anyone else might write) can lock a score log when it reads it and unlock it when it's done, without worrying about stepping on RoboRunner trying to write to the score log at the same time. Just some thoughts I had.... Okay, first of all: good work! Unlike RoboRunner by itself, RoboJogger actually seems to work. On the other hand, the results don't seem to work, mid-season or at the end of running. It just never shows up as completed. Not sure what the problem is here. It doesn't feel considerably faster than RoboResearch. But I haven't tested them head to head or anything. It might be because RoboResearch shows the progress on the UI itself (I understand how this might not be possible in RoboJogger, at least on a per turn basis). Other notes: I notice it uses a different look and feel. Usually people expect programs to use their system look and feel. You can achieve this in Java by using UIManager.setLookAndFeel(UIManager.getSystemLookAndFeelClassName()); This just goes toward making the program feel more 'comfortable' to people who use it. If you are releasing Robocode with it, you could 'cut down' the Robocode version included to reduce the size of the zip.
It doesn't need a compiler, javadoc, rumble, templates, sample bots, etc. You also seem to have multiple copies of robocode in there as well. One in template, one in robocode_jars. On the other hand, you may want to include a few default challenge files. Like say the ones RoboResearch has. This will help if someone doesn't have RoboResearch already, and/or doesn't know how to create a challenge file. One small comment about the results dialog. I may be abnormal, but I frequently have huge test beds, like 250 bots, or 60 different sets of 9-bot melee battles. Obviously there's no nice way to display 500 columns of scores, but just making sure not to do something ridiculous (like a 5000px wide window, which I think RoboResearch's UI does) would be nice. :-) It will be in a scroll pane in a window that has a max size limit on it. Beyond that, do you think there is a better way to show results when there is a huge number of bots? Not really, that seems good. At that point you're probably just interested in overall score. But on that note, if "Total" was in some fixed place instead of requiring me to scroll way to the right, that would be nice. :-) Noted. Check out the updated screenshot I posted. I will probably have the preferred size set to something like 800 pixels wide for the center scroll pane. In the screenshot it is set to a somewhat small 400 pixels wide just to make testing easier. Sorry I missed replying to this, but the updated screenshot looks perfect! Something else I'm working on is providing a way to interrupt RoboRunner in the middle of a challenge. 
I'm not sure in what ways that could potentially mess up RoboRunner yet, but I did have to make a couple of changes to make this work. First, in order to stop RoboRunner completely (and not just the current battle), I had to make an InterruptedException result in the bypass of all queued battles. In BattleRunner:
private void getAllFutures(List<Future<String>> futures) {
    for (Future<String> future : futures) {
        try {
            future.get();
        } catch (InterruptedException e) {
            e.printStackTrace();
            return;
        } catch (ExecutionException e) {
            e.printStackTrace();
        }
    }
}
There might be some additional modification, but for now, I just added a return statement if an InterruptedException occurs (will probably also get rid of the printStackTrace call). This prevents calling get() on all remaining Futures. While I think this was an unexpected condition in RoboRunner, in RoboJogger an InterruptedException is now an expected result whenever a stop command is issued for RoboRunner. A remaining question is: what, if anything, will be broken as a result of this? Another change I made was to ScoreLog, such that trying to access battle results for a "botList" that does not exist will not cause a NullPointerException. In ScoreLog:
public List<BattleScore> getBattleScores(String botList) {
    List<BattleScore> scores = _scores.get(botList);
    return (scores == null) ? null : ImmutableList.copyOf(scores);
}
I provided the null check on scores. I was kind of surprised that ImmutableList didn't do that by design. The most likely scenario where this happens is related to my other change -- if a challenge is interrupted before battles have been run against all opponents, when I later access results from the ScoreLog, I am not aware of missing results until the getBattleScores method returns null. I suppose I could have also just added a try/catch in my own code for NullPointerException without having to change RoboRunner, but I felt doing so was not the better way of handling it.
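The interruption behavior discussed above hinges on how ExecutorService treats queued work: shutdown() alone still runs every task already in the queue, so stopping a challenge early also requires cancelling the remaining futures. A small self-contained demonstration with plain java.util.concurrent (not RoboRunner code):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

// Demonstrates that shutdown() still runs queued tasks unless their
// futures are cancelled first. Plain JDK code, independent of RoboRunner.
public class ShutdownDemo {
    /** Returns whether a queued task ran, given whether we cancel it before shutdown. */
    public static boolean queuedTaskRan(boolean cancelIt) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        CountDownLatch gate = new CountDownLatch(1);
        AtomicBoolean ran = new AtomicBoolean(false);
        pool.submit(() -> {               // first task blocks the single worker thread
            try { gate.await(); } catch (InterruptedException ignored) { }
        });
        Future<?> queued = pool.submit(() -> ran.set(true)); // sits in the queue
        if (cancelIt) {
            queued.cancel(false);         // removes it before it ever starts
        }
        pool.shutdown();                  // does NOT discard already-queued tasks
        gate.countDown();                 // let the first task finish
        pool.awaitTermination(10, TimeUnit.SECONDS);
        return ran.get();
    }
}
```

This is exactly the distinction noted earlier in the thread: after interruption, cancelling the remaining futures is what actually prevents the queued battles from running.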
These changes are not finalized. I'm just writing about them for the sake of discussion. I just noticed that for getting battle results, there is a hasBotList(String) method that I could call before trying to get battle scores. This would prevent the NPE without modifying RoboRunner. Given this, I could see arguing either way about whether getBattleScores should throw an NPE or return null for a botList that does not exist. I don't have a strong opinion about NPE vs returning null - I think the hasBotList is what made me feel ok with leaving the other one NPE-ing, but that doesn't mean it has to. Seems silly to insist you call 2 methods instead of 1. I'll have to think about the interruption stuff. Are you using the same RoboRunner instance after interrupting and trying to use it again? Certainly that would give me pause and I'd want to look over RoboRunner and BattleRunner to see what internal state might be confused by this. If not, my only worry would be if the interruption came during a file write to the score log. Maybe in the file writing, we need to catch InterruptedException, close the file stream in the catch, and rethrow? I'm not really sure. Maybe Java is already smart enough not to corrupt a file stream when being interrupted? The code you have here makes sense and doesn't raise any red flags besides that. No -- I create a new RoboRunner instance for each challenge started. If RoboRunner is interrupted in the middle of a challenge, when RoboRunner is restarted, a new RoboRunner instance is created. Good point on the potential for RoboRunner to be writing to the score log when interrupted; I do need to take a closer look at that. Instead of dealing with interrupted exceptions, RoboRunner could just provide a cancel flag that gets checked before each get(). @Voidious -- I'm not sure what your plan for confidence is, but I eagerly went ahead and developed my own confidence calculator.
I was looking over your code for calculating confidence and was having trouble following it, so I instead went to my wife's Principles of Biostatistics book and read the chapter on confidence intervals. For the sake of simplicity, I will stick with 95% confidence intervals, as that is what you used in your code (that's where the 1.96 comes from) and it seems reasonable. The confidence interval for a single robot turns out to be pretty simple to calculate: in special-character-challenged terms, it is x +- 1.96 * s / sqrt(n), where x is the mean, s is the standard deviation, and n is the sample size. Where it gets more complicated is in calculating the confidence interval for groups and for the overall total score.

Let's talk groups first. What I did for a group was to take the first score for each opponent, average them all, and that becomes data point 1. Then take the second score for each opponent, average them, and that becomes data point 2, and so on. I determine how many data points to use by calculating the average number of battles per opponent in the group, rounded. This means some data points for opponents with more scores end up getting thrown away, and opponents with fewer scores don't have enough scores. For the latter, I use as many extra randomly generated scores as I need, where each random score falls within the confidence interval of scores for that particular robot. Once I have all of the data points, I then use the original means for calculating a confidence interval on the collected data points.

Now for the overall total. If there is only one group (or no groups, depending on how you look at it), then there is nothing more to do: use the values calculated for the one group. But if there are multiple groups, then what? We should probably respect that the overall total is an average of the group totals. This would end up being just like calculating the group confidence intervals, only treating the groups like the robots. Did that make sense?
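For concreteness, the single-robot interval just described (x +- 1.96 * s / sqrt(n)) can be sketched as a standalone Java helper. The class and method names here are invented for illustration; this is not actual RoboJogger or RoboRunner code:

```java
import java.util.Arrays;

// Illustrative sketch of the 95% confidence interval described above:
// mean +/- 1.96 * (sample standard deviation) / sqrt(n).
public class ConfidenceSketch {
    // Returns {lower, upper} bounds of the 95% confidence interval.
    static double[] interval95(double[] scores) {
        int n = scores.length;
        double mean = Arrays.stream(scores).average().orElse(0.0);
        // Sample variance (divide by n - 1).
        double variance = Arrays.stream(scores)
                .map(s -> (s - mean) * (s - mean))
                .sum() / (n - 1);
        double halfWidth = 1.96 * Math.sqrt(variance) / Math.sqrt(n);
        return new double[] { mean - halfWidth, mean + halfWidth };
    }

    public static void main(String[] args) {
        double[] scores = { 60.0, 62.0, 58.0, 61.0, 59.0 };
        double[] ci = interval95(scores);
        System.out.printf("95%% CI: [%.3f, %.3f]%n", ci[0], ci[1]);
    }
}
```

Note this uses the 1.96 normal-distribution multiplier from the discussion; as is pointed out later in the thread, a t-distribution multiplier would be more appropriate (and wider) for small sample sizes.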
How is this different from what you have done in RoboRunner?

Heh, well, what I did is a little complicated, but I think it's about the best you can do for a set of bots that each have their own distributions. Basically I run 1,000 or whatever random simulations of the overall score, based on the averages and standard deviations of each individual bot's score distribution. Then I can take those "overall score" samples, supposedly generated from the same distribution as the real scores, and use them as additional samples to calculate the confidence interval of the overall score. It's a fairly basic Monte Carlo method.

I see there was a discussion about it on the RoboRunner page. I should probably go read that. I'd never heard of the Monte Carlo method, so I'll look into it.

I'd heard the term, but it was totally Skilgannon who knew enough to suggest it. Once I looked into it, though, it was pretty simple. But I also wanted to mention: I was planning to pass some object with all the confidence interval info you might need about the current battle in the new listener. I figured that was among the things you'd want in the application output, since it's among the things I print in the console version. But of course you're free to use whatever you like. :-)

I'll use it if it's there. I use the ScoreLog to show data from past battles, and wasn't sure if confidence information would also be available from the ScoreLog after your updates. If not, I can keep using my own confidence calculator for past data.

Hmm. Well, first off, I am pretty sure you should make sure you are using the [t-distribution], not the normal distribution. Using that, I would generate a confidence interval for each individual bot. I am nearly certain that there is a way to generate a confidence interval from the mean of several other intervals.
I can't remember off the top of my head, but I vaguely recall it being something like the square root of the sum of the squares of the standard errors (not standard deviations, since the sample size is presumably fairly small). I'll tell you if I can find it.

I didn't read through them carefully (kind of busy with school), but skimming through them quickly, it appears that the square root of the sum of the variances of the individual distributions is correct. I think that's correct if all of them have the same number of samples. However, with the cool new 'variance minimizer' pairings selection algorithm that isn't necessarily guaranteed. Although you may be right -- could you see if your Monte Carlo gives the same results as a root-sum-of-squares, Voidious?

I'm finally at the point where I am trying to actually launch RoboRunner. Currently I'm running into an error I will have to debug. Posting part of the stack trace here in case anyone wants to comment:

    Copying missing bots... 0 JAR copies done!
    Initializing engine: robocodes\z1...
    Exception in thread "AWT-EventQueue-0" java.lang.NullPointerException
        at robowiki.runner.BattleRunner.initEngine(BattleRunner.java:66)
        at robowiki.runner.BattleRunner.<init>(BattleRunner.java:42)
        at robowiki.runner.RoboRunner.<init>(RoboRunner.java:172)
        at org.xandercat.roborunner.runner.RoboRunnerService.startRunner(RoboRunnerService.java:44)
        at org.xandercat.roborunner.runner.action.LaunchRoboRunnerAction.actionPerformed(LaunchRoboRunnerAction.java:46)
        at javax.swing.AbstractButton.fireActionPerformed(Unknown Source)

And the chunk of relevant code from RoboRunner:

    System.out.print("Initializing engine: " + enginePath + "... ");
    ProcessBuilder builder = new ProcessBuilder(command);
    builder.redirectErrorStream(true);
    Process battleProcess = builder.start();
    BufferedReader reader = new BufferedReader(
        new InputStreamReader(battleProcess.getInputStream()));
    String processOutput;
    do {
      processOutput = reader.readLine();
    } while (!processOutput.equals(BattleProcess.READY_SIGNAL));
    System.out.println("done!");
    _processQueue.add(battleProcess);

Presumably, the input stream never provided the BattleProcess.READY_SIGNAL. I'll have to do some digging to figure out why. I'm not entirely clear on what the RoboRunner requirements are, but at the moment I am running it under Java 6 with Robocode 1.7.3.0. FYI -- line 66 is the while part of the do/while loop.

I'll take a deeper look later when I'm home. At a glance, it seems like processOutput is coming up null -- maybe the condition should be "processOutput != null && ...". What command are you using to launch this?

Looks like the problem was I didn't have one of the needed Robocode jars in the classpath. Thanks for including source in the RoboRunner jar; that made debugging easier. Fixing the classpath fixed the problem I was having. Also, I am running RoboRunner via new RoboRunner(...) and then calling the runBattles() method. I need to dig a little deeper to determine how best to extract the battle results; at the moment it just lets RoboRunner barf them on System.out. :-)

Cool, good to hear! I don't think I kept any real test results of speed vs RoboResearch, but I think it was in the range of 20% less time for my bot / system. The smart battles stuff helps too, but it's hard to measure. Similarly, I had long wanted to update RoboResearch to use the control API instead of launching external Java processes. When I started digging into it, it just looked easier / better / more fun to rewrite from scratch.
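Returning to the confidence discussion above: Skilgannon's question (does the Monte Carlo agree with a root-sum-of-squares?) can be checked with a small standalone sketch. All names here are invented for illustration; this is not RoboRunner's actual implementation:

```java
import java.util.Random;

// Standalone sketch comparing two ways of estimating the 95% confidence
// half-width of an overall score that averages several per-bot scores.
public class OverallConfidenceSketch {
    // Monte Carlo: simulate many overall scores from each bot's
    // mean/standard-error, then take the spread of the simulated samples.
    static double monteCarloHalfWidth(double[] means, double[] stdErrs,
                                      int trials, long seed) {
        Random rng = new Random(seed);
        double sum = 0.0, sumSq = 0.0;
        for (int t = 0; t < trials; t++) {
            double overall = 0.0;
            for (int i = 0; i < means.length; i++) {
                overall += means[i] + stdErrs[i] * rng.nextGaussian();
            }
            overall /= means.length;
            sum += overall;
            sumSq += overall * overall;
        }
        double mean = sum / trials;
        double var = sumSq / trials - mean * mean;
        return 1.96 * Math.sqrt(var);
    }

    // Analytic shortcut: the variance of an average of independent scores
    // is the sum of their variances divided by n squared.
    static double rootSumSquaresHalfWidth(double[] stdErrs) {
        double sumVar = 0.0;
        for (double se : stdErrs) sumVar += se * se;
        return 1.96 * Math.sqrt(sumVar) / stdErrs.length;
    }

    public static void main(String[] args) {
        double[] means = { 60.0, 75.0, 82.0 };
        double[] stdErrs = { 1.5, 0.8, 2.0 };
        double mc = monteCarloHalfWidth(means, stdErrs, 100_000, 42L);
        double rss = rootSumSquaresHalfWidth(stdErrs);
        System.out.printf("Monte Carlo: %.3f, root-sum-of-squares: %.3f%n",
                mc, rss);
    }
}
```

With independent, normally distributed per-bot scores and equal weighting, the two estimates agree closely; the Monte Carlo version's advantage is that it still applies when sample counts differ per bot or when the aggregate is nonlinear (such as averages of group averages), where the closed-form shortcut no longer maps directly.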
https://robowiki.net/wiki/Talk:RoboJogger
- NAME
- SYNOPSIS
- DESCRIPTION
- Utility functions
- NON-OO INTERFACE
- METHODS
- EXAMPLE
- Non-OO
- Object oriented
- AUTHOR

NAME

PDL::Options - simplifies option passing by hash in PerlDL

SYNOPSIS

    use PDL::Options;
    %hash = parse( \%defaults, \%user_options );

    use PDL::Options ();
    $opt = new PDL::Options;
    $opt = new PDL::Options ( \%defaults );
    $opt->defaults ( \%defaults );
    $opt->synonyms ( { 'COLOR' => 'COLOUR' } );
    $hashref = $opt->defaults;
    $opt->options ( \%user_options );
    $hashref = $opt->options;
    $opt->incremental(1);
    $opt->full_options(0);

DESCRIPTION

Object to simplify option passing for PerlDL subroutines. Allows you to merge user-defined options with defaults. A simplified (non-OO) interface is provided.

Utility functions

- ifhref

    parse({Ext => 'TIF', ifhref($opt)});

Just returns the argument if it is a hash reference, otherwise returns an empty hash reference. Useful in conjunction with parse to return just the default values if the argument is not a hash reference.

NON-OO INTERFACE

A simplified non-object-oriented interface is provided. These routines are exported into the caller's namespace by default.

- parse( \%defaults, \%user_options )

This will parse user options by using the defaults. The following settings are used for parsing: the options are case-sensitive, a default synonym table is consulted (see "Default Synonyms"), minimum matching is turned on, and translation of values is not performed. A hash (not a hash reference) containing the processed options is returned.

    %options = parse( { LINE => 1, COLOUR => 'red' }, { COLOR => 'blue' } );

- iparse( \%defaults, \%user_options )

Same as parse but matching is case-insensitive.

Default Synonyms

The following default synonyms are available in the non-OO interface:

    COLOR  => COLOUR
    COLOUR => COLOR
    CENTER => CENTRE
    CENTRE => CENTER

METHODS

The following methods are available to PDL::Options objects.

- new()

Constructor. Creates the object. An optional argument can also set the default options.
- extend( \%options )

This will copy the existing options object and extend it with the requested extra options.

- defaults( \%defaults )

Method to set or return the current defaults. The argument should be a reference to a hash. The hash reference is returned if no arguments are supplied. The current values are reset whenever the defaults are changed.

- add_synonym( \%synonyms )

Method to add another synonym to an option set. The argument should be a reference to a hash.

- add_translation( \%translation )

Method to add another translation rule to an option set. The argument should be a reference to a hash.

- synonyms( \%synonyms )

Method to set or return the current synonyms. The argument should be a reference to a hash. The hash reference is returned if no arguments are supplied. This allows you to provide alternate keywords (such as allowing 'COLOR' as an option when your defaults use 'COLOUR').

- current

Returns the current state of the options. This is returned as a hash reference (although it is not a reference to the actual hash stored in the object). If full_options() is true, the full options hash is returned; if full_options() is false, only the modified options are returned (as set by the last call to options()).

- clear_current

This routine clears the 'state' of the PDL::Options object so that the next call to current() will return an empty list.

- translation

Provides translation of options to more specific values that are recognised by the program. This allows, for example, the automatic translation of the string 'red' to '#ff0000'. This method can be used to set up the dictionary, which is a hash reference with the following structure:

    OPTIONA => {
                 'string1' => decode1,
                 'string2' => decode2
               },
    OPTIONB => {
                 's4' => decodeb1,
               },
    etc....

Where OPTION? corresponds to the top-level option name as stored in the defaults array (e.g. LINECOLOR), and the anonymous hashes provide the translation from string1 ('red') to decode1 ('#ff0000').
An option's string value will be translated automatically during the main options() processing if autotrans() is set to true. Otherwise, translation can be initiated by the user via the translate() method.

- incremental

Specifies whether the user-defined options will be treated as additions to the current state of the object (1) or as modifications to the default values only (0). Can be used to set or return this value. Default is false (0).

- full_options

Governs whether a complete set of options is returned (i.e. defaults + expanded user options), true, or just the expanded user options, false (i.e. only the values specified by the user). This can be useful when you are only interested in the changes to the options rather than knowing the full state. (For example, if the defaults contain keys for COLOUR and LINESTYLE and the user supplied a key of COL, you may simply be interested in the modification to COLOUR rather than the state of both LINESTYLE and COLOUR.) Default is true (1).

- casesens

Specifies whether the user-defined options will be processed independently of case (0) or not (1). Default is to be case-insensitive. Can be used to set or return this value.

- minmatch

Specifies whether the user-defined options will be minimum-matched against the defaults (1) or whether they must match the default keys exactly (0). Default is true (1).

- autotrans

Specifies whether the user-defined options will be processed via the translate() method immediately following the main options parsing. Default is to autotranslate (1). Can be used to set or return this value.

- casesenstrans

Specifies whether the keys in the options hash will be matched insensitively of case (0) or not (1) during translation(). Default is to be case-insensitive. Can be used to set or return this value.

- minmatchtrans

Specifies whether the keys in the options hash will be minimum-matched during translation(). Default is false (0).
- warnonmissing

Turns on or off the warning message printed when an option is not in the options hash. Turning this off can be convenient when a user passes a set of options that has to be parsed by several different option objects down the line.

- debug

Turns on or off debug messages. Default is off (0). Can be used to set or return this value.

- options

Takes a set of user-defined options (as a reference to a hash) and merges them with the current state (or the defaults, depending on the state of incremental()). The user-supplied keys are compared with the defaults. Case sensitivity and minimum matching can be configured using the minmatch() and casesens() methods. A warning is raised if keys present in the user options are not present in the defaults, unless warnonmissing has been turned off. A reference to a hash containing the merged options is returned.

    $merged = $opt->options( { COL => 'red', Width => 1 } );

The state of the object can be retrieved after this by using the current() method or by using the options() method with no arguments. If full_options() is true, all options are returned (options plus overrides); if full_options() is false, only the modified options are returned. Synonyms are supported if they have been configured via the synonyms() method.

- translate

Translates the current option values (e.g. those set via the options() method) using the provided translation(). This method updates the current state of the object and returns the updated options hash as a reference.

    $ref = $opt->translate;

EXAMPLE

Two examples are shown. The first uses the simplified interface and the second uses the object-oriented interface.
Non-OO

    use PDL::Options (':Func');
    %options = parse( {
                        LINE   => 1,
                        COLOUR => 'red',
                      },
                      { COLOR => 'blue' }
                    );

This will return a hash containing

    %options = ( LINE => 1, COLOUR => 'blue' )

Object oriented

The following example will try to show the main points:

    use PDL::Options ();

    # Create new object and supply defaults
    $opt = new PDL::Options( { Colour    => 'red',
                               LineStyle => 'dashed',
                               LineWidth => 1 } );

    # Create synonyms
    $opt->synonyms( { Color => 'Colour' } );

    # Create translation dictionary
    $opt->translation( { Colour => {
                             'blue'  => '#0000ff',
                             'red'   => '#ff0000',
                             'green' => '#00ff00'
                         },
                         LineStyle => {
                             'solid'  => 1,
                             'dashed' => 2,
                             'dotted' => 3
                         } } );

    # Generate and parse test hash
    $options = $opt->options( { Color => 'green',
                                lines => 'solid', } );

When this code is run, $options will be a reference to a hash containing the following:

    Colour    => '#00ff00',
    LineStyle => 1,
    LineWidth => 1

If full_options() was set to false (0), $options would be a reference to a hash containing:

    Colour    => '#00ff00',
    LineStyle => 1

Minimum matching and case insensitivity can be configured for both the initial parsing and the subsequent translating. The translation can be turned off if not desired. Currently, synonyms are not available for the translation, although this could be added quite simply.

AUTHOR

Copyright (C) Tim Jenness 1998 (t.jenness@jach.hawaii.
https://metacpan.org/pod/release/CHM/PDL-2.007_10/Basic/Options.pm
Writer: Joe Davies

Abstract

This chapter describes the details of the Domain Name System (DNS) and its use for private intranets and the Internet. DNS is required to provide name resolution for domain names for all types of network applications, from Internet browsers to the Active Directory® directory service. A network administrator's understanding of DNS names, domains, zones, name server roles, and replication is vital to the configuration and maintenance of a properly functioning private intranet and the Internet.

For a download of the entire "TCP/IP Fundamentals for Microsoft Windows" online book, which contains a version of this chapter that has been updated for Windows Vista and Windows Server 2008, click here.

Chapter Objectives

The Domain Name System
Name Resolution
Name Server Roles
Resource Records and Zones
Zone Transfers
DNS Dynamic Update
Chapter Summary
Chapter Glossary

After completing this chapter, you will be able to:

The initial solution for name resolution on the Internet was a file named Hosts.txt that was used on the now obsolete Advanced Research Projects Agency network (ARPANET), the predecessor of the modern-day Internet. When the number of hosts on the ARPANET was small, the Hosts.txt file was easy to manage because it consisted of unstructured names and their corresponding IPv4 addresses. Computers on the ARPANET periodically downloaded Hosts.txt from a central location and used it for local name resolution. As the ARPANET grew into the Internet, the number of hosts began to increase dramatically, and the centralized administration and manual distribution of a text file containing the names for computers on the Internet became unwieldy. The replacement for the Hosts.txt file needed to be distributed, to allow for a hierarchical namespace, and to require minimal administrative overhead.
The original design goal for DNS was to replace the existing cumbersome, centrally administered text file with a lightweight, distributed database that would allow for a hierarchical namespace, delegation and distribution of administration, extensible data types, virtually unlimited database size, and reasonable performance.

DNS defines a namespace and a protocol for name resolution and database replication:

The DNS namespace is based on a hierarchical and logical tree structure.

The DNS protocol defines a set of messages sent over either User Datagram Protocol (UDP) port 53 or Transmission Control Protocol (TCP) port 53. Hosts that originate DNS queries send name resolution queries to servers over UDP first because it is faster. These hosts, known as DNS clients, resort to TCP only if the returned data is truncated. Hosts that store portions of the DNS database, known as DNS servers, use TCP when replicating database information.

Historically, the most popular implementation of the DNS protocol is Berkeley Internet Name Domain (BIND), which was originally developed at the University of California at Berkeley for the 4.3 Berkeley Software Distribution release of the UNIX operating system. Requests for Comments (RFCs) 974, 1034, and 1035 define the primary specifications for DNS. From RFC 1034, DNS comprises the following three components:

The domain namespace and resource records

DNS defines a specification for a structured namespace as an inverted tree in which each node and leaf of the tree names a set of information. Resource records are records in the DNS database that can be used to configure the DNS database server (such as the Start of Authority [SOA] record) or to contain information of different types to process client queries (such as Address [A] records or Mail Exchanger [MX] records). Typical resource records contain resources by name and their IP addresses.
Name queries to DNS database servers are attempts to extract information of a certain type from the namespace. The name query requests a name of interest and a specific type of record. For example, a name query would provide a host name and ask for the corresponding IPv4 or IPv6 address.

Name servers

Name servers store resource records and information about the domain tree structure and attempt to resolve received client queries. DNS database servers, hereafter referred to as name servers or DNS servers, either contain the requested information in their resource records or have pointer records to other name servers that can help resolve the client query. If the name server contains the resource records for a given part of the namespace, the server is said to be authoritative for that part of the namespace. Authoritative information is organized into units called zones.

Resolvers

Resolvers are programs that run on DNS clients and DNS servers and that create queries to extract information from name servers. A DNS client uses a resolver to create a DNS name query. A DNS server uses a resolver to contact other DNS servers to resolve a name on a DNS client's behalf. Resolvers are usually built into utility programs or are accessible through library functions, such as the Windows Sockets gethostbyname() or getaddrinfo() functions.

DNS names have a very specific structure, which identifies the location of the name in the DNS namespace. A fully qualified domain name (FQDN) is a DNS domain name that has been constructed from its location relative to the root of the namespace (known as the root domain). FQDNs have the following attributes:

FQDNs consist of the series of names from the name of the host or computer to the root domain. A period character separates each name.

Each FQDN ends with the period character, which indicates the root domain.

Each name within the FQDN can be no more than 63 characters long.

The entire FQDN can be no more than 255 characters long.
FQDNs are not case-sensitive.

RFC 1034 requires the names that make up an FQDN to use only the characters a-z, A-Z, 0-9, and the dash or minus sign (-). RFC 2181 allows additional characters and is supported by the DNS Server service in Microsoft® Windows Server™ 2003 operating systems.

The DNS namespace is in the form of a logical inverted tree structure. Each branch point (or node) in the tree is given a name that is no more than 63 characters long. Each node of the tree is a portion of the namespace called a domain. A domain is a branch of the tree and can occur at any point in the tree structure. Domains can be further partitioned at node points within the domain into subdomains for the purposes of administration or load balancing. The domain name identifies the domain's position in the DNS hierarchy. The FQDN identifies the domain relative to the root. You create domain names and FQDNs by combining the names of the nodes from the designated domain node back to the root and separating each node with a period (.). The root of the tree has the special reserved name of "" (null), which you indicate by placing a final period at the end of the domain name.

Domains and subdomains are grouped into zones to allow for distributed administration of the DNS namespace. Figure 8-1 shows the DNS namespace as it exists for the Internet, including a few of the top-level domains and example hosts in the "microsoft.com." domain. A trailing period designates a domain name of a host relative to the root domain. To connect to that host, a user would specify the host's fully qualified name, including the final period. If the user does not specify the final period, the DNS resolver automatically adds it to the specified name.

Individual organizations manage second-level domains (subdomains of the top-level domains) and their name servers. For example, Microsoft manages the "microsoft.com." domain. Domains define different levels of authority in a hierarchical structure.
The top of the hierarchy is called the root domain. The DNS namespace on the Internet, as shown in Figure 8-1, has the following structure:

Root domain
Top-level domains
Second-level domains

The root domain uses a null label, which you write as a single period (.). In the United States, the Internet Assigned Names Authority (IANA) manages several root domain name servers. The next level in the hierarchy is divided into a series of nodes called the top-level domains. The top-level domains are assigned by organization type and by country/region. Some of the more common top-level domains are the following:

com – Commercial organizations in the United States (for example, microsoft.com for the Microsoft Corporation).
edu – Educational organizations in the United States.
gov – United States governmental organizations.
int – International organizations.
mil – United States military organizations.
net – Networking organizations.
org – Noncommercial organizations.
xx – Two-letter country code names that follow the International Standard 3166. For example, ".fr" is the country code for France.
arpa – Used to store information for DNS reverse queries.

Each top-level domain has name servers that IANA administers. Top-level domains can contain second-level domains and hosts. Second-level domains contain the domains and names for organizations and countries/regions. The names in second-level domains are administered by the organization or country/region either directly (by placing its own DNS server on the Internet) or by using an Internet service provider (ISP) who manages the names for an organization or country/region on its customer's behalf.

A zone is a contiguous portion of a domain of the DNS namespace whose database records exist and are managed in a particular DNS database file stored on one or multiple DNS servers. You can configure a single DNS server to manage one or multiple zones.
Each zone is anchored at a specific domain node, referred to as the zone's root domain. Zone files do not necessarily contain the complete branch (that is, all subdomains) under the zone's root domain. For example, you can partition a domain into several subdomains, which are controlled by separate DNS servers. You might break up domains across multiple zone files if you want to distribute management of the domain across different groups or make data replication more efficient. Figure 8-2 shows the difference between domains and zones. In the example, "microsoft.com" is a domain (the entire branch of the DNS namespace that starts with the microsoft.com. node), but the entire domain is not controlled by one zone file. Part of the domain is in a zone for "microsoft.com." and part of the domain is in a zone for the "dev.microsoft.com." domain. These zones correspond to different DNS database files that can reside on the same or different DNS servers.

The two types of queries that a DNS resolver (either a DNS client or another DNS server) can make to a DNS server are the following:

Recursive queries

In a recursive query, the queried name server is requested to respond with the requested data or with an error stating that data of the requested type or the specified domain name does not exist. The name server cannot just refer the DNS resolver to a different name server. A DNS client typically sends this type of query.

Iterative queries

In an iterative query, the queried name server can return the best answer it currently has back to the DNS resolver. The best answer might be the resolved name or a referral to another name server that is closer to fulfilling the DNS client's original request. DNS servers typically send iterative queries to query other DNS servers.

To show how recursive and iterative queries are used for common DNS name resolutions, consider a computer running a Microsoft Windows® XP operating system or Windows Server 2003 connected to the Internet.
A user types a Web site name in the Address field of their Internet browser. When the user presses the ENTER key, the browser makes a Windows Sockets function call, either gethostbyname() or getaddrinfo(), to resolve the name to an IP address. For the DNS portion of the Windows host name resolution process, the following occurs:

The DNS resolver on the DNS client sends a recursive query to its configured DNS server, requesting the IP address corresponding to the Web site's name. The DNS server for that client is responsible for resolving the name and cannot refer the DNS client to another DNS server.

The DNS server that received the initial recursive query checks its zones and finds no zones corresponding to the requested domain name; the DNS server is not authoritative for the example.com domain. Because the DNS server has no information about the IP addresses of DNS servers that are authoritative for example.com. or com., it sends an iterative query for the name to a root name server.

The root name server is authoritative for the root domain and has information about name servers that are authoritative for top-level domain names. It is not authoritative for the example.com. domain. Therefore, the root name server replies with the IP address of a name server for the com. top-level domain.

The DNS server of the DNS client sends an iterative query for the name to the name server that is authoritative for the com. top-level domain.

The com. name server is authoritative for the com. domain and has information about the IP addresses of name servers that are authoritative for second-level domain names of the com. domain. It is not authoritative for the example.com. domain. Therefore, the com. name server replies with the IP address of the name server that is authoritative for the example.com. domain.

The DNS server of the DNS client sends an iterative query for the name to the name server that is authoritative for the example.com. domain. The example.com.
name server replies with the IP address corresponding to the requested FQDN.

The DNS server of the DNS client sends the resolved IP address to the DNS client.

Figure 8-3 shows this process. All DNS queries are DNS Name Query Request messages. All DNS replies are DNS Name Query Response messages. In practice, DNS servers cache the results of queries on an ongoing basis. If a DNS server finds an entry matching the current request in its cache, it does not send an iterative DNS query. This example assumes that no cache entries were in any of the DNS servers, which would prevent the sending of the iterative name queries.

Forward lookups are queries in which a DNS client attempts to resolve an FQDN to its corresponding IP address. Zones that contain FQDN-to-IP address mappings are known as forward lookup zones. In a reverse query, instead of supplying a name and asking for an IP address, the DNS client provides the IP address and requests the corresponding host name. Reverse queries are also known as reverse lookups, and zones that contain IP address-to-FQDN mappings are known as reverse lookup zones. Because the DNS namespace is indexed by domain name rather than by IP address, only a thorough search of all domains could guarantee a correct answer. To prevent an exhaustive search of all domains for a reverse query, reverse name domains and pointer (PTR) resource records were created. An example of an application that uses reverse queries is the Tracert tool, which by default uses reverse queries to display the names of the routers in a routing path. If you are going to use reverse queries, you must create reverse lookup zones and PTR records when you administer a DNS server so that reverse queries can be satisfied.

To support reverse lookups for IPv4 addresses, a special domain named in-addr.arpa. was created. Nodes in the in-addr.arpa domain are named after the numbers in the dotted decimal representation of IPv4 addresses.
But because IPv4 addresses get more specific from left to right and domain names get more specific from right to left, the order of IPv4 address octets must be reversed when building the in-addr.arpa domain name corresponding to the IPv4 address. For example, for the generalized IPv4 address w.x.y.z, the corresponding reverse query name is z.y.x.w.in-addr.arpa. IANA delegates responsibility for administering the reverse query namespace below the in-addr.arpa domain to organizations as they are assigned IPv4 address prefixes. Figure 8-4 shows an example of the reverse lookup portion of the DNS namespace.

Within the in-addr.arpa domain, special pointer (PTR) resource records are added to associate the IPv4 addresses with their corresponding host names. To find the host name for the IPv4 address 157.54.200.2, a DNS client sends a DNS query for a PTR record for the name 2.200.54.157.in-addr.arpa. Reverse queries use the same name resolution process previously described for forward lookups (a combination of recursive and iterative queries). The DNS server finds the PTR record that contains the FQDN corresponding to the IPv4 address 157.54.200.2 and sends that FQDN back to the DNS client.

IPv6 reverse lookups use the ip6.arpa. domain. To create the domains for reverse queries, each hexadecimal digit in the fully expressed 32-digit IPv6 address becomes a separate level in the reverse domain hierarchy, in inverse order. For example, the reverse lookup domain name for the address 3ffe:ffff::1:2aa:ff:fe3f:2a1c (fully expressed as 3ffe:ffff:0000:0001:02aa:00ff:fe3f:2a1c) is c.1.a.2.f.3.e.f.f.f.0.0.a.a.2.0.1.0.0.0.0.0.0.0.f.f.f.f.e.f.f.3.ip6.arpa. Just as with IPv4 addresses, PTR records in the reverse IPv6 domain map IPv6 addresses to FQDNs.

For each resolved query (either recursive or iterative), the DNS resolver caches the returned information for a time that is specified in each resource record in the DNS response. This is known as positive caching.
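Stepping back to the reverse-lookup names just described: the construction rules are mechanical enough to sketch in code. The following helper is an illustration only (the class and method names are invented, and the IPv6 variant assumes the address has already been fully expressed, as in the example above); it appends the trailing root period:

```java
// Illustrative construction of the reverse-lookup query names described
// above; not part of any real DNS API.
public class ReverseNameSketch {
    // w.x.y.z -> z.y.x.w.in-addr.arpa.
    static String buildIpv4Reverse(String dottedDecimal) {
        String[] octets = dottedDecimal.split("\\.");
        StringBuilder sb = new StringBuilder();
        for (int i = octets.length - 1; i >= 0; i--) {
            sb.append(octets[i]).append('.');
        }
        return sb.append("in-addr.arpa.").toString();
    }

    // Fully expressed IPv6 address -> one hexadecimal digit per label,
    // in inverse order, under ip6.arpa.
    static String buildIpv6Reverse(String fullyExpanded) {
        String digits = fullyExpanded.replace(":", "");
        StringBuilder sb = new StringBuilder();
        for (int i = digits.length() - 1; i >= 0; i--) {
            sb.append(digits.charAt(i)).append('.');
        }
        return sb.append("ip6.arpa.").toString();
    }

    public static void main(String[] args) {
        System.out.println(buildIpv4Reverse("157.54.200.2"));
        System.out.println(buildIpv6Reverse(
                "3ffe:ffff:0000:0001:02aa:00ff:fe3f:2a1c"));
    }
}
```

Running this reproduces the chapter's examples: 2.200.54.157.in-addr.arpa. for the IPv4 address and the 32-label ip6.arpa name for the IPv6 address.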
The amount of time in seconds to store the record data in the cache is referred to as the Time To Live (TTL). The network administrator of the zone that contains the record decides on the default TTL for the data in the zone. Smaller TTL values help ensure that data about the domain is more consistent across the network if the zone data changes often. However, this practice also increases the load on name servers because positive cache entries time out more quickly. After a DNS resolver caches data, it must start counting down from the received TTL so that it will know when to remove the data from its cache. For queries that can be satisfied by this cached data, the TTL that is returned is the current amount of time left before the data is flushed from the DNS cache. DNS client resolvers also have data caches and honor the TTL value so that they know when to remove the data. The DNS Client service in Windows XP and Windows Server 2003 and the DNS Server service in Windows Server 2003 support positive caching. As originally defined in RFC 1034, negative caching is the caching of failed name resolutions. A failed name resolution occurs when a DNS server returns a DNS Name Query Response message with an indication that the name was not found. Negative caching can reduce response times for names that DNS cannot resolve for both the DNS client and DNS servers during an iterative query process. Like positive caching, negative cache entries eventually time out and are removed from the cache based on the TTL in the received DNS Name Query Response message. The DNS Client service in Windows XP and Windows Server 2003 and the DNS Server service in Windows Server 2003 support negative caching. DNS Name Query Response messages can contain multiple resource records. For example, for a simple forward lookup, the DNS Name Query Response message can contain multiple Address (A) records that contain the IPv4 addresses associated with the desired host. 
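The caching behavior described above, positive and negative entries that expire after their TTL and are served with the remaining TTL, can be sketched as a tiny resolver cache. This is an illustrative model, not the Windows implementation; the class and method names are invented, and the clock is injected so the countdown can be demonstrated without waiting:

```python
class ResolverCache:
    """Minimal sketch of a DNS resolver cache that honors per-record TTLs.
    Storing data=None models a negative cache entry (name not found)."""

    def __init__(self, clock):
        self._clock = clock          # callable returning the time in seconds
        self._entries = {}           # name -> (expiry time, data)

    def store(self, name, data, ttl):
        self._entries[name] = (self._clock() + ttl, data)

    def lookup(self, name):
        entry = self._entries.get(name)
        if entry is None:
            return None              # cache miss: a real query is needed
        expiry, data = entry
        remaining = expiry - self._clock()
        if remaining <= 0:
            del self._entries[name]  # TTL elapsed: flush the entry
            return None
        # Answers served from cache carry the remaining TTL, not the original.
        return data, int(remaining)
```

A real resolver would use a monotonic clock (for example, time.monotonic) in place of the injected one.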
When multiple resource records for the same resource record type exist, the following issues arise: for the DNS server, how to order the resource records in the DNS Name Query Response message; and for the DNS client, how to choose a specific resource record from the DNS Name Query Response message. To address these issues, RFC 1794 describes a mechanism named round robin or load sharing to share and distribute loads for network resources. The central assumption of RFC 1794 is that when multiple resource records for the same resource record type and the same name exist, multiple servers are offering the same type of service to multiple users. For example, a heavily used Web site in the microsoft.com domain might actually be hosted by multiple Web servers with different IPv4 addresses. To attempt to distribute the load of servicing all the users who access the site, the DNS servers that are authoritative for microsoft.com modify the order of the resource records for the site's name in successive DNS Name Query Response messages. The DNS client uses the data in the first resource record in the response. For example, if there were three A records for the site's name with the IPv4 addresses of 131.107.0.99, 131.107.0.100, and 131.107.0.101, the round robin scheme works as follows: For the first request, the order of the resource records in the DNS Name Query Response message is 131.107.0.99-131.107.0.100-131.107.0.101. For the second request, the order of the resource records in the DNS Name Query Response message is 131.107.0.100-131.107.0.101-131.107.0.99. For the third request, the order of the resource records in the DNS Name Query Response message is 131.107.0.101-131.107.0.99-131.107.0.100. The pattern repeats for subsequent queries. For an arbitrary number of resource records, the rotation process cycles through the list of resource records.
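The rotation just described can be sketched as a generator that yields the record ordering for each successive response. This is an illustrative sketch (the function name is invented), using collections.deque to rotate the first record to the back after every query:

```python
from collections import deque

def round_robin_orders(records):
    """Yield the record ordering used in each successive DNS response,
    rotating the first record to the end after every query (RFC 1794 style)."""
    ring = deque(records)
    while True:
        yield list(ring)
        ring.rotate(-1)   # the first record moves to the back
```

With the three addresses from the example, successive responses come back as .99/.100/.101, then .100/.101/.99, then .101/.99/.100, after which the pattern repeats.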
A DNS server running Windows Server 2003 that is responding to a recursive query by default attempts to order the resource records according to the addresses that most closely match the IP address of the originating DNS client, and you can configure that server for round robin according to RFC 1794. To determine the addresses that are the closest match to the IPv4 address of the DNS client, the DNS Server service in Windows Server 2003 orders the addresses by using a high-order bit-level comparison of the DNS client's IPv4 address and the IPv4 addresses associated with the queried host name. This comparison technique is similar to the route determination process, in which IPv4 or IPv6 examines the IPv4 or IPv6 routing table to determine the route that most closely matches the destination address of a packet being sent or forwarded. DNS servers store information about portions of the domain namespace. When name servers have one or more zones for which they are responsible, they are said to be authoritative servers for those zones. Using the example in Figure 8-2, the name server containing the dev.microsoft.com zone is an authoritative server for dev.microsoft.com. Configuration of a DNS server includes adding name server (NS) resource records for all the other name servers that are in the same domain. Using the example on the previous page, if the two zones were on different name servers, each would be configured with an NS record about the other. These NS records provide pointers to the other authoritative servers for the domain. DNS defines two types of name servers, each with different functions: Primary A primary name server gets the data for its zones from locally stored and maintained files. To change a zone, such as adding subdomains or resource records, you change the zone file at the primary name server. 
Secondary A secondary name server gets the data for its zones across the network from another name server (either a primary name server or another secondary name server). The process of obtaining this zone information (that is, the database file) across the network is referred to as a zone transfer. Zone transfers occur over TCP port 53. The following are reasons to have secondary name servers within an enterprise network: Redundancy: At least two DNS servers, a primary and at least one secondary, serving each zone are needed for fault tolerance. Remote locations: Secondary name servers (or other primary servers for subdomains) are needed in remote locations that have a large number of DNS clients. Clients should not have to communicate across slower wide area network (WAN) links for DNS queries. Load distribution: Secondary name servers reduce the load on the primary name server. Because information for each zone is stored in separate files, the primary or secondary name server designation is defined at a zone level. In other words, a specific name server may be a primary name server for certain zones and a secondary name server for other zones. When defining a zone on a secondary name server, you configure the zone with the name server from which the zone information is to be obtained. The source of the zone information for a secondary name server is referred to as a master name server. A master name server can be either a primary or secondary name server for the requested zone. Figure 8-5 shows the relationship between primary, secondary, and master name servers. When a secondary name server starts up, it contacts the master name server and initiates a zone transfer for each zone for which it is acting as a secondary name server. Zone transfers also can occur periodically (provided that data on the master name server has changed) as specified in the SOA record of the zone file. 
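The high-order bit-level comparison used to order answer addresses by closeness to the DNS client (described earlier in this section) can be sketched for IPv4 as follows. This is a simplified illustration of the idea, not the Windows Server 2003 algorithm; the function names and the sample answer addresses are illustrative:

```python
import ipaddress

def order_by_closeness(client_addr, answer_addrs):
    """Sketch of subnet prioritization: sort the answer addresses by how
    many high-order bits each shares with the client's IPv4 address,
    closest match first."""
    client = int(ipaddress.IPv4Address(client_addr))

    def shared_prefix_bits(addr):
        diff = client ^ int(ipaddress.IPv4Address(addr))
        return 32 - diff.bit_length()   # leading bits in common

    return sorted(answer_addrs, key=shared_prefix_bits, reverse=True)
```

An address on the client's own subnet shares more leading bits with the client than a distant one and therefore sorts to the front of the answer list, which is the same longest-match idea used in routing table lookups.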
The "Resource Records and Zones" section of this chapter describes the SOA resource record. When a DNS server receives a query, it attempts to locate the requested information within its own zone files. If this attempt fails because the server is not authoritative for the domain of the requested name and it does not have the record cached from a previous lookup, it must communicate with other name servers to resolve the request. On a globally connected network such as the Internet, DNS queries for names that do not use the second-level domain name of the organization might require interaction with DNS servers across WAN links outside of the organization. To prevent all the DNS servers in the organization from sending their queries over the Internet, you can configure forwarders. A forwarder sends queries across the Internet. Other DNS servers in the organization are configured to forward their queries to the forwarder. Figure 8-6 shows an example of intranet servers using a forwarder to resolve Internet names. A name server can use a forwarder in non-exclusive or exclusive mode. In non-exclusive mode, when a name server receives a DNS query that it cannot resolve through its own zone files, it sends a recursive query to its forwarder. The forwarder attempts to resolve the query and returns the results to the requesting name server. If the forwarder is unable to resolve the query, the name server that received the original query attempts to resolve the query using iterative queries. A name server using a forwarder in non-exclusive mode does the following when attempting to resolve a name: Checks its local cache. Checks its zone files. Sends a recursive query to a forwarder. Attempts to resolve the name through iterative queries to other DNS servers. In exclusive mode, name servers rely on the name-resolving ability of the forwarders. 
When a name server in exclusive mode receives a DNS query that it cannot resolve through its own zone files, it sends a recursive query to its designated forwarder. The forwarder then carries out whatever communication is necessary to resolve the query and returns the results to the originating name server. If the forwarder is unable to resolve the request, the originating name server returns a query failure to the original DNS client. Name servers in exclusive mode make no attempt to resolve the query on their own if the forwarder is unable to satisfy the request. A name server using a forwarder in exclusive mode does the following when attempting to resolve a name: Checks its local cache. Checks its zone files. Sends a recursive query to a forwarder. Returns a query failure to the DNS client if the forwarder cannot resolve the name. Although all DNS servers cache queries that they have resolved, caching-only servers are DNS servers that only perform queries, cache the answers, and return the results. Caching-only servers are not authoritative for any domains and contain only the information that they have cached while attempting to resolve queries. When caching-only servers are started, they do not perform any zone transfers because they have no zones, and no entries exist in their caches. Initially, the caching-only server must forward queries until its cache has been built up to the point where it can service commonly used queries by using its cache entries alone. If your organization is connected to the Internet, in many cases you do not need to maintain a DNS infrastructure. For small networks, DNS name resolution is simpler and more efficient when DNS clients query a DNS server maintained by an ISP. Most ISPs will maintain domain information for a fee. If your organization wants control over its domain or wants to avoid the costs of using an ISP, you can set up your organization's own DNS servers.
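The lookup orders for the two forwarder modes described earlier can be sketched as a single function. This is an illustrative model only: each lookup stage is represented as a callable that returns an answer or None, and all names are invented:

```python
def resolve_with_forwarder(name, cache, zones, forwarder, iterate, exclusive):
    """Sketch of the lookup order for a name server configured with a
    forwarder. cache, zones, forwarder, and iterate are callables that
    take a name and return an answer or None."""
    for step in (cache, zones, forwarder):
        answer = step(name)
        if answer is not None:
            return answer
    if exclusive:
        return None          # exclusive mode: report failure to the client
    return iterate(name)     # non-exclusive mode: fall back to iterative queries
```

The only difference between the two modes is the last step: a non-exclusive server falls back to its own iterative queries when the forwarder fails, while an exclusive server gives up.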
In both cases, either going through an ISP or setting up separate DNS servers, the IANA must be informed of the domain name of the organization and the IP addresses of at least two DNS servers on the Internet that service the domain. An organization can also set up DNS servers within itself independent of the Internet. At least two computers as DNS servers are recommended for reliability and redundancy—a primary and a secondary name server. The primary name server maintains the database of information, which is then replicated from the primary name server to the secondary name server. This replication allows name queries to be serviced even if one of the name servers is unavailable. Replication is scheduled based on how often names change in the domain. Replication should be frequent enough so that changes are reflected on both servers. However, excessive replication can have a negative impact on the performance of the network and name servers. Resource records have the following format:

owner TTL class type RDATA

owner – The domain name of the resource record.
TTL (Time to Live) – The length of time in seconds that a DNS resolver should wait before it removes from its cache an entry that corresponds to the resource record.
class – The protocol family in use, which is typically IN for the Internet class.
type – The type of resource record.
RDATA – The resource data for the resource record type. For example, for an address (A) resource record, RDATA is the 32-bit IPv4 address that corresponds to the FQDN in the owner field.

Resource records are represented in binary form in DNS request and response messages. In text-based DNS database files, most resource records are represented as a single line of text. For readability, blank lines and comments are often inserted in the database files and are ignored by the DNS server. Comments always start with a semicolon (;) and end with a carriage return.
The following is an example A resource record stored in a DNS database file:

srv1.dev.microsoft.com. 3600 IN A 157.60.221.205

Each resource record starts with the owner in the first column (srv1.dev.microsoft.com.). If the first column is blank, then it is assumed that the owner for this record is the owner of the previous record. The owner is followed by the TTL (3600 seconds = 1 hour), class (IN = Internet), type (A = Address record), and then the RDATA (Resource Data = 157.60.221.205). If the TTL value is not present, the DNS server sets the value to the TTL specified in the SOA (Start of Authority) record of the zone. The DNS standards define many types of resource records. The most commonly used resource records are the following:

SOA – Identifies the start of a zone of authority. Every zone contains an SOA resource record at the beginning of the zone file, which stores information about the zone, configures replication behavior, and sets the default TTL for names in the zone.
A – Maps an FQDN to an IPv4 address.
AAAA – Maps an FQDN to an IPv6 address.
NS – Indicates the servers that are authoritative for a zone. NS records indicate primary and secondary servers for the zone specified in the SOA resource record, and they indicate the servers for any delegated zones. Every zone must contain at least one NS record at the zone root.
PTR – Maps an IP address to an FQDN for reverse lookups.
CNAME – Specifies an alias (synonymous name).
MX – Specifies a mail exchange server for a DNS domain name. A mail exchange server is a host that receives mail for the DNS domain name.
SRV – Specifies the IP addresses of servers for a specific service, protocol, and DNS domain.

RFCs 1034, 1035, 1183, and others define less frequently used resource records. The DNS Server service in Windows Server 2003 is fully compliant with RFCs 1034, 1035, and 1183.
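Splitting a one-line text-form record into its fields can be sketched as follows. This is a deliberately simplified parser: it assumes all five fields are present on the line (the standard master-file order puts the class before the type, as in IN A) and does not handle the blank-owner or missing-TTL defaulting rules described above; it does strip semicolon comments:

```python
def parse_record(line):
    """Parse one text-form resource record, assuming all five fields
    (owner TTL class type RDATA) are present. Comments start with ';'."""
    line = line.split(";", 1)[0].strip()   # drop any trailing comment
    if not line:
        return None                        # blank or comment-only line
    owner, ttl, rclass, rtype, rdata = line.split()
    return {"owner": owner, "ttl": int(ttl),
            "class": rclass, "type": rtype, "rdata": rdata}
```

A production zone-file parser would also track the current origin and previous owner, and accept records whose optional fields are omitted.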
The DNS Server service in Windows Server 2003 also supports the following resource record types that are Microsoft-specific:

WINS – Indicates the IPv4 address of a Windows Internet Name Service (WINS) server for WINS forward lookup. The DNS Server service in Windows Server 2003 can use a WINS server for looking up the host portion of a DNS name.
WINS-R – Indicates the use of WINS reverse lookup, in which a DNS server uses a NetBIOS Adapter Status message to find the host portion of the DNS name given its IPv4 address.

For detailed information about the structure and contents of various types of DNS resource records, including examples, see DNS Reference Information. You add delegation and glue records to a zone file to indicate the delegation of a subdomain to a separate zone. For example, in Figure 8-2, the DNS server that is authoritative for the microsoft.com zone must be configured so that, when resolving names in the dev.microsoft.com domain, the DNS server can determine the following: that a separate zone for that domain exists (a delegation is an NS record in the parent zone that lists the name server that is authoritative for the delegated zone), and where the zone for that domain resides (a glue record is an A record for the name server that is authoritative for the delegated zone). For example, in Figure 8-2, the name server for the microsoft.com. domain has delegated authority for the dev.microsoft.com zone to the name server devdns.dev.microsoft.com at the IPv4 address of 157.60.41.59. In the zone file for the microsoft.com. zone, the following records must be added:

dev.microsoft.com. IN NS devdns.dev.microsoft.com.
devdns.dev.microsoft.com. IN A 157.60.41.59

Without the delegation record for dev.microsoft.com, queries for all names ending in dev.microsoft.com would fail. Glue records are needed when the name of the name server that is authoritative for the delegated zone is in the domain of the name server attempting name resolution.
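The rule for when a glue record is required can be sketched as a small predicate: glue is needed when the delegated zone's name server lies inside the namespace of the delegating parent. This is an illustrative sketch (the function name is invented); it compares names on label boundaries so that, for example, notmicrosoft.com does not match microsoft.com:

```python
def glue_record_needed(ns_name, parent_zone):
    """Return True when the delegated zone's name server falls inside the
    delegating (parent) zone's namespace, so an A (glue) record is needed."""
    ns = ns_name.lower().rstrip(".")
    parent = parent_zone.lower().rstrip(".")
    # Match only on whole labels: equal to the parent, or ending in ".parent".
    return ns == parent or ns.endswith("." + parent)
```

With the chapter's example, devdns.dev.microsoft.com. is inside microsoft.com., so the parent zone needs the glue A record; a name server in an unrelated domain would be resolved with ordinary iterative queries instead.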
In the example above, we need the A record for devdns.dev.microsoft.com. because that FQDN is within the microsoft.com. portion of the DNS namespace. Without this A record, the microsoft.com. DNS server would be unable to locate the name server for the dev.microsoft.com. zone, and all name resolutions for names in the dev.microsoft.com domain would fail. A glue record is not needed when the name of the authoritative name server for the delegated zone is in a domain that is different than the domain of the zone file. In this case, the DNS server would use normal iterative queries to resolve the name to an IP address. The DNS Server service in Windows Server 2003 automatically adds delegation and glue records when you delegate a subdomain. The root hints file, also known as the cache file, contains the names and addresses of root name servers. For resolving domain names on the Internet, the default file provided with the DNS Server service in Windows Server 2003 has the records for the root servers of the Internet. For installations not connected to the Internet, the file should be replaced to contain the name servers authoritative for the root of the private network. This file is named Cache.dns and is stored in the systemroot/System32/Dns folder. For the current Internet cache file, see the FTP site for InterNIC. Secondary name servers obtain zone files from a master name server using a zone transfer. The zone transfer replicates the set of records in the zone file from the master server to the secondary server. Zone transfers occur for all zones for which a DNS server is a secondary name server upon startup and on an ongoing basis to ensure that the most current information about the zone is reflected in the local zone file. The two types of zone transfers are full and incremental. The original DNS RFCs defined zone transfers as a transfer of the entire zone file, regardless of how the file has changed since the last time it was transferred. 
In a full zone transfer, the following process occurs: The secondary server waits until the next refresh time (as specified in the SOA resource record) and then queries the master server for the SOA resource record for the zone. The master server responds with the SOA resource record. The secondary server checks the Serial Number field of the returned SOA resource record. If the serial number in the SOA resource record is higher than the serial number of the SOA resource record of the locally stored zone file, then there have been changes to the zone file on the master server and a zone transfer is needed. Whenever a resource record is changed on the master name server, the serial number in the SOA resource record is updated. The secondary server sends an AXFR request (a request for a full zone transfer) to the master server. The secondary server initiates a TCP connection with the master server and requests all of the records in the zone database. After the zone transfer, the Serial Number field in the SOA record of the local zone file matches the Serial Number field in the SOA record of the master server. Figure 8-7 shows a full zone transfer. If the secondary server does not receive a response to the SOA query, it retries SOA queries using a retry time interval specified in the SOA resource record in the local zone file. The secondary server continues to retry until the time elapsed since attempting to perform a zone transfer reaches an expiration time specified in the SOA resource record in the local zone file. After the expiration time, the secondary server closes the zone file and does not use it to answer subsequent queries. The secondary server keeps attempting to perform the zone transfer. When the zone transfer succeeds, the local zone file is opened and used for subsequent queries. In a full zone transfer, the entire zone file is transferred. 
This can consume a substantial portion of processing resources and network bandwidth when the zone files are large and when zone records are frequently changed. To minimize the amount of information that is sent in a zone transfer for changes to zone records, RFC 1995 specifies a standard method of performing incremental zone transfers. In an incremental zone transfer, only the resource records that have changed (been added, deleted, or modified) are sent during the zone transfer. In an incremental zone transfer, the secondary server performs the same query for the SOA record of the master server and comparison of the Serial Number field. If changes exist, the secondary server sends an IXFR request (a request for an incremental zone transfer) to the master server. The master server sends the records that have changed, and the secondary server builds a new zone file from the records that have not changed and the records in the incremental zone transfer. Figure 8-8 shows an incremental zone transfer. For the master server to determine the records that have changed, it must maintain a history database of changes made to its zone files. The zone file changes are linked to a serial number so that the master server can determine which changes were made to the zone after the serial number indicated in the IXFR request from the secondary server. The DNS Server service in Windows Server 2003 supports incremental zone transfer. For both full and incremental zone transfers, the secondary server always initiates the zone transfer based on periodically querying the master server for its SOA record. The original DNS RFCs do not define a mechanism by which a master server can notify its secondary servers when it needs to immediately propagate a large number of changes. To improve the consistency of data among secondary servers, RFC 1996 specifies DNS Notify, an extension of DNS that allows master servers to send notifications to secondary servers that a zone transfer might be needed.
Upon receipt of a DNS notification, secondary servers request the SOA record of their master server and initiate a full or incremental zone transfer as needed. Figure 8-9 shows the DNS notify process. To determine the secondary servers to which notifications should be sent, the master server maintains a notify list (a list of IP addresses) for each zone. The master server sends notifications to only the servers in the notify list when the zone is updated. The DNS Server service in Windows Server 2003 supports the configuration of a notify list (a list of IPv4 addresses) for each zone. DNS was originally defined as a name resolution scheme for relatively static names and addresses; DNS records contained information about servers, whose name and address configuration did not change often. Therefore, the manual administration of resource records in zone files was manageable. These original assumptions work well for an environment that is based on server and client computers that are statically configured, in which the client computers communicate only with the server computers and address configuration does not change. With the advent of peer-to-peer communications and applications and the Dynamic Host Configuration Protocol (DHCP), both of the assumptions of static DNS are challenged. In a Windows-based environment, client computers often communicate directly with each other and are automatically configured using DHCP. To communicate with each other, client computers must be able to resolve each other's names; therefore they must have corresponding DNS resource records. With DHCP, the address configuration of client computers could change every time they start. Manually administering DNS records for this environment is obviously impractical. 
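Two pieces of the transfer machinery described above can be sketched in a few lines: the SOA serial comparison that decides whether a transfer is needed, and the rebuild of a zone from an incremental transfer. One detail the chapter does not cover is that zone serials are 32-bit values that wrap around, so RFC 1982 defines serial-number arithmetic instead of a plain integer comparison. The function names and the (owner, type) keying below are illustrative:

```python
def serial_newer(master_serial, local_serial, bits=32):
    """RFC 1982 serial-number comparison: does the master hold a newer
    zone? Serials wrap at 2**bits, so plain '>' is not sufficient."""
    diff = (master_serial - local_serial) % (1 << bits)
    return diff != 0 and diff < (1 << (bits - 1))

def apply_incremental_transfer(zone, deleted, added):
    """Build the new zone from the records that did not change plus the
    changes carried in the IXFR response. Records are keyed by
    (owner, type); serial bookkeeping is omitted for brevity."""
    new_zone = {key: rdata for key, rdata in zone.items() if key not in deleted}
    new_zone.update(added)
    return new_zone
```

The wrap-around rule is what lets an administrator keep incrementing the serial forever: a serial of 0 still counts as "newer" than 4294967295.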
Therefore, RFC 2136 defines DNS dynamic update to provide an automated method to populate the DNS namespace with the current names and addresses for client and server computers by dynamically updating zone data on a zone's primary server. With DNS dynamic update, DNS records are automatically created, modified, and removed by either host computers or DHCP servers on their behalf. For example, a client computer that supports DNS dynamic update sends UPDATE messages to its DNS server to automatically add A, AAAA, and PTR records. The DNS server, which must also support DNS dynamic update, verifies that the sender is permitted to make the updates and then updates its local zone files. The DNS Client service in Windows XP and Windows Server 2003 and the DNS Server service in Windows Server 2003 support DNS dynamic update. The chapter includes the following pieces of key information: DNS is a namespace and a protocol for replicating databases and resolving FQDNs used on the Internet and intranets. DNS consists of the domain namespace, name servers that store resource records, and DNS resolvers. A domain is a branch of the DNS namespace beginning at its root node. All of the resource records in a domain are stored in zones on DNS servers. A zone is a contiguous portion of a DNS domain whose information is stored in a file on a DNS server. On the Internet, DNS consists of the root domain, top-level domains, and second-level domains. IANA manages the names and DNS servers of the root domain and the top-level domains. Individual organizations are responsible for managing the names in their second-level domains. DNS resolvers use either recursive or iterative queries. A recursive query is used to request definitive information about a name, and DNS clients typically use them for FQDN resolution. An iterative query is used to request best-effort information about a name, and DNS servers typically use them to query other DNS servers. 
Forward lookups provide an IP address based on an FQDN. Reverse lookups provide an FQDN based on an IP address. DNS servers can have the role of a primary server (in which the records are modified by the DNS administrator) or a secondary server (in which the records are obtained from another server) for each zone for which they are authoritative. A master server is a server from which a secondary server obtains a zone transfer. DNS defines many types of resource records, the most common of which are SOA, A, AAAA, NS, PTR, CNAME, MX, and SRV. Zone transfers can transfer either the entire zone file (known as a full zone transfer) or just the records that have changed (known as an incremental zone transfer). DNS Notify is a standard mechanism by which a master name server notifies secondary name servers to check for changes in zone files. DNS dynamic update is a standard method for hosts, or DHCP servers on behalf of hosts, to automatically update the zones of primary DNS servers with resource records that correspond to current names and address configurations. DNS – See Domain Name System (DNS). DNS dynamic update - An update to the DNS standard that permits DNS clients to dynamically register and update their resource records in the zones of the primary server. DNS server – A server that maintains a database of mappings of FQDNs to various types of data, such as IP addresses. domain – Any branch of the DNS namespace. Domain Name System (DNS) – A hierarchical, distributed database that contains mappings of DNS domain names to various types of data, such as IP addresses. DNS enables the location of computers and services by user-friendly names and the discovery of other information stored in the database. forward lookup – A DNS query that maps an FQDN to an IP address. forwarder - A DNS server designated by other internal DNS servers to be used to forward queries for resolving external or offsite DNS domain names, such as those used on the Internet. 
FQDN – See fully qualified domain name. fully qualified domain name (FQDN) – A DNS name that has been stated to indicate its absolute location in the domain namespace tree. An FQDN has a trailing period (.) to qualify its position relative to the root of the namespace. An example is host.example.microsoft.com. host name – The DNS name of a host or interface on a network. For one computer to find another, the name of the computer to locate must either appear in the Hosts file on the computer that is looking, or the name must be known by a DNS server. For most Windows-based computers, the host name and the computer name are the same. host name resolution – The process of resolving a host name to a destination IP address. Hosts file – A local text file in the same format as the 4.3 BSD release of the UNIX /etc/hosts file. This file maps host names to IP addresses, and it is stored in the systemroot\System32\Drivers\Etc folder. iterative query – A query made to a DNS server for the best answer the server can provide. master server – A DNS server that is authoritative for a zone and that is also a source of zone information for other secondary servers. A master server can be either a primary or secondary master server, depending on how the server obtains its zone data. primary server – A DNS server that is authoritative for a zone and that maintains the zone's data in locally stored and maintained files. recursive query – A query made to a DNS server in which the requester asks the server for a complete answer; the server will then use separate iterative queries to other DNS servers on behalf of the requester to assist in completing an answer for the recursive query. reverse lookup – A DNS query that maps an IP address to an FQDN. root domain – The beginning of the DNS namespace. secondary server – A DNS server that is authoritative for a zone and that obtains its zone data through a zone transfer from another server. subdomain – A DNS domain located directly beneath another domain (the parent domain) in the namespace tree. For example, example.microsoft.com would be a subdomain of the domain microsoft.com. zone – A contiguous portion of a DNS domain administered by a DNS server. A zone stores the domain names and data of the domain with a corresponding name, except for domain names stored in delegated subdomains. zone transfer – The synchronization of authoritative DNS data between DNS servers. A DNS server configured with a secondary zone periodically queries its master server to synchronize its zone data.
http://technet.microsoft.com/en-us/library/bb727007.aspx
RL-ARM User's Guide (MDK v4)

#include <net_config.h>

BOOL http_file_access (
  U8* fname,      /* Requested file name. */
  U8 user_id );   /* User identification number. */

The http_file_access function checks whether file access is allowed for a specified user. This allows access protection of sensitive web pages. The protected web pages are not displayed to unprivileged users. Instead, the Web server shows error page 403 - Forbidden.

The argument fname points to a buffer containing the name of the file that the user is trying to access. The file name is a 0-terminated string.

The argument user_id is a user identification number, as returned by the http_check_account function. It identifies the user who is trying to access the specified file.

The http_file_access function is in the HTTP_MultiUser.c module. The prototype is defined in net_config.h.

Return value: the http_file_access function returns __TRUE if access to the file is allowed, and __FALSE if access is forbidden.

See also: http_check_account, http_get_user_id

BOOL http_file_access (U8 *fname, U8 user_id) {
  /* This function checks if file access for the user is allowed. */
  if (user_id == 3) {
    /* Check if "Guest" is trying to access "system.cgi". */
    if (strcmp ((char *)fname, "system.cgi") == 0) {
      /* Access to this file is not allowed for user "Guest". */
      /* The Web server will return error code 403 - Forbidden. */
      return (__FALSE);
    }
  }
  /* Allow access for all other files and users. */
  return (__TRUE);
}
https://www.keil.com/support/man/docs/rlarm/rlarm_http_file_access.htm
What's wrong with the following program?

public class SomethingIsWrong {
    public static void main(String[] args) {
        Rectangle myRect;
        myRect.width = 40;
        myRect.height = 50;
        System.out.println("myRect's area is " + myRect.area());
    }
}

The following code creates one array and one string object. How many references to those objects exist after the code executes? Is either object eligible for garbage collection?

...
String[] students = new String[10];
String studentName = "Peter Parker";
students[0] = studentName;
studentName = null;
...

How does a program destroy an object that it creates?

Fix the program called SomethingIsWrong shown in Question 1.

Given the following class, called NumberHolder, write some code that creates an instance of the class, initializes its two member variables, and then displays the value of each member variable.

public class NumberHolder {
    public int anInt;
    public float aFloat;
}
http://docs.oracle.com/javase/tutorial/java/javaOO/QandE/objects-questions.html